Technical Field
The present invention relates to the technical field of signal processing, and in particular to a gesture recognition method and device, a smart wearable terminal, and a server.
Background
At present, more and more smart wearable devices are favored by users, and to meet users' needs the functions of these devices are increasingly diversified. For example, a user can wear a smart bracelet on the arm or wrist and count steps through it.
In the prior art, recognizing a user's motion state through a smart wearable generally relies on feature extraction to identify behavior patterns: sensor data is collected from the wearable device, and a feature extraction algorithm extracts features from that data. For the step-counting function, for example, the extracted feature is that the reported data exhibits regular peaks and troughs. After the features of the walking state have been extracted, if the sensor later reports data with the same features while the user wears the smart bracelet, the user is deemed to be walking, and the bracelet can count steps.
Current feature extraction methods, however, have low accuracy and cannot adapt to individual differences between users.
Summary of the Invention
The present invention aims to solve at least one of the technical problems in the related art, at least to some extent.
Therefore, one object of the present invention is to propose a gesture recognition method that improves the accuracy of gesture recognition and adapts to individual differences between users.
Another object of the present invention is to propose a gesture recognition device.
Another object of the present invention is to propose a smart wearable terminal.
Another object of the present invention is to propose a server.
To achieve the above objects, the gesture recognition method proposed in the embodiment of the first aspect of the present invention includes:
acquiring sensor data corresponding to a current gesture from a sensor, wherein the sensor data is used to characterize changes in the gesture;
performing image conversion on the sensor data to obtain image data; and
inputting the image data into a target recognition model based on a neural network, and determining the target gesture type of the current gesture through the target recognition model.
In the gesture recognition method of the first-aspect embodiment, the sensor data is converted into image data, and the recognition model then applies relatively mature image recognition technology to identify the gesture type, thereby improving recognition accuracy.
To achieve the above objects, the gesture recognition device proposed in the embodiment of the second aspect of the present invention includes:
an acquisition module, configured to acquire sensor data corresponding to a current gesture from a sensor, wherein the sensor data is used to characterize changes in the gesture;
a conversion module, configured to perform image conversion on the sensor data to obtain image data; and
a recognition module, configured to input the image data into a target recognition model and determine the target gesture type of the current gesture through the target recognition model.
In the gesture recognition device of the second-aspect embodiment, the sensor data is converted into image data, and the recognition model then applies relatively mature image recognition technology to identify the gesture type, thereby improving recognition accuracy.
To achieve the above objects, the smart wearable terminal proposed in the embodiment of the third aspect of the present invention includes:
the gesture recognition device as described above.
To achieve the above objects, the server proposed in the embodiment of the fourth aspect of the present invention includes:
the gesture recognition device as described above.
Additional aspects and advantages of the present invention will be set forth in part in the following description; some will become apparent from that description, or may be learned through practice of the invention.
Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the hardware composition of a smart wearable terminal provided in this embodiment;
Fig. 5 is a schematic diagram of the composition of the software modules of a smart wearable terminal provided in this embodiment;
Fig. 6 is a schematic diagram of the composition of the software modules of a server provided in this embodiment;
Fig. 7 is a schematic structural diagram of a gesture recognition device provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of another gesture recognition device provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a training module 15 provided by an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a server provided by an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a smart wearable terminal provided by an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a gesture recognition system provided by an embodiment of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar modules, or modules having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and should not be construed as limiting it. On the contrary, the embodiments of the present invention cover all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance, or as implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present invention, "plurality" means two or more, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as will be understood by those skilled in the art to which the embodiments of the present invention pertain.
Fig. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention. As shown in Fig. 1, the gesture recognition method includes the following steps:
S101. Acquire sensor data corresponding to the current gesture from a sensor.
Here, the sensor data is used to characterize changes in the gesture.
Sensors are provided on the smart wearable, and some data of the user wearing it can be collected through those sensors. In this embodiment, the smart wearable may be provided with a gravity sensor and a gyroscope sensor, where the gravity sensor is used to obtain the user's three-axis acceleration, i.e. the acceleration along the X, Y, and Z axes of three-dimensional space, and the gyroscope sensor is used to obtain the user's three-axis angular velocity in three-dimensional space.
When the user's gesture changes, the sensors on the smart wearable record the sensor data corresponding to the current gesture. When the user's gesture needs to be recognized, the sensor data of the current gesture can be obtained from the sensors, i.e. from the gravity sensor and the gyroscope sensor. This sensor data comprises six-axis data: three-axis acceleration and three-axis angular velocity.
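As an illustrative sketch only (the field names, units, and sampling layout are assumptions for this example, not values taken from this embodiment), one sampling instant of the six-axis data described above could be represented as:

```python
from dataclasses import dataclass

@dataclass
class ImuFrame:
    """One sampling instant of six-axis data (hypothetical layout)."""
    t: float   # timestamp in seconds
    ax: float  # three-axis acceleration from the gravity sensor
    ay: float
    az: float
    gx: float  # three-axis angular velocity from the gyroscope
    gy: float
    gz: float

    def as_channels(self):
        """Return the six axes in a fixed order for later image conversion."""
        return [self.ax, self.ay, self.az, self.gx, self.gy, self.gz]

frame = ImuFrame(t=0.01, ax=0.1, ay=-0.2, az=9.8, gx=0.0, gy=0.5, gz=-0.3)
print(len(frame.as_channels()))  # six channels per frame
```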
S102. Perform image conversion on the sensor data to obtain image data.
Since image recognition technology is relatively mature, the current gesture can be represented as an image based on the sensor data in order to improve the accuracy of gesture recognition, and the gesture type can then be identified using image recognition technology. Specifically, image conversion is performed on the sensor data to obtain the image data corresponding to that sensor data. Preferably, the sensor data can be converted into an image in the form of a curve graph: within a set period of time, a two-dimensional graph is formed with time on the horizontal (X) axis and the sensor data on the vertical (Y) axis. In this embodiment the sensor data is six-axis data, so the resulting two-dimensional graph contains six curves, with possibly different trajectories, representing how the sensor data changes.
Optionally, the sensor data can instead be converted into an image in the form of a histogram: within a set period of time, a histogram is formed with time on the horizontal (X) axis and the sensor data on the vertical (Y) axis. In this embodiment the sensor data is six-axis data, so within each time interval the histogram contains six vertical bars or line segments, of possibly unequal heights, representing how the sensor data changes.
After the sensor data has been converted into a two-dimensional curve graph or a histogram in this way, the image data corresponding to the sensor data has been obtained.
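The curve-graph conversion can be sketched in a minimal form. The helper below is hypothetical, not the embodiment's actual implementation: it rasterizes a window of six-axis samples onto a two-dimensional grid, with time along the X axis (one pixel column per sample) and the normalized sensor value along the Y axis, so each of the six curves leaves a trace in the resulting image.

```python
def curves_to_image(window, height=32):
    """Rasterize a window of six-axis samples into a 2-D grid.

    window: list of frames, each a list of 6 sensor values.  Returns a
    height x len(window) grid where cell values 1..6 mark which axis'
    curve passes through that pixel (0 = background; if two curves hit
    the same pixel, the later axis overwrites the earlier one).
    """
    lo = min(v for frame in window for v in frame)
    hi = max(v for frame in window for v in frame)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat window
    img = [[0] * len(window) for _ in range(height)]
    for x, frame in enumerate(window):
        for axis, v in enumerate(frame, start=1):
            y = int((v - lo) / span * (height - 1))
            img[height - 1 - y][x] = axis  # flip so larger values sit higher
    return img

# Two frames of six-axis data (illustrative values).
window = [[0.0, 1.0, 2.0, -1.0, 0.5, 3.0],
          [0.5, 1.5, 2.5, -0.5, 1.0, 2.0]]
img = curves_to_image(window, height=8)
print(len(img), len(img[0]))  # 8 rows, one column per frame
```

In practice the six traces would more likely be rendered with an established plotting library; the point of the sketch is only that a fixed-size window of six-axis data maps deterministically to a fixed-size image.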
S103. Input the image data into the target recognition model, and determine the target gesture type of the current gesture through the target recognition model.
After the image data corresponding to the sensor data has been generated, the image data can be input into the target recognition model for recognition; the target recognition model analyzes the image data using image recognition technology and determines the target gesture type corresponding to the current gesture. Preferably, the target recognition model is constructed based on a neural network and then trained with a large amount of sample data. Basic gesture categories include: shaking up and down, panning left and right, circling to the upper right, circling to the upper left, and so on.
In the gesture recognition method proposed in this embodiment, the sensor data corresponding to the current gesture is acquired from a sensor, where the sensor data characterizes changes in the gesture; image conversion is performed on the sensor data to obtain image data; and the image data is input into the target recognition model, which determines the target gesture type of the current gesture. Because the sensor data is converted into image data and the gesture type is then identified by the target recognition model using relatively mature image recognition technology, recognition accuracy can be improved.
Fig. 2 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention. As shown in Fig. 2, the gesture recognition method includes the following steps:
S201. Collect sample sensor data for training from a sensor.
In this embodiment, a recognition model is constructed in advance, preferably based on a neural network. To train the recognition model so that it acquires recognition capability, a large number of training samples must be collected. Specifically, sensor data can be collected for a single action in a directed manner as sample sensor data; for example, each group of actions may be performed 1000 times in a row, with a short pause between repetitions serving as the action interval. After the server receives the sample sensor data, it can use these pause intervals as markers to split the recording into individual actions.
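The pause-based splitting step might look like the following sketch, in which the motion-magnitude signal, threshold, and minimum pause length are all illustrative assumptions: a run of consecutive low-motion samples is treated as the short pause separating two repetitions of the action.

```python
def split_by_pauses(magnitudes, threshold=0.2, min_pause=3):
    """Split a recording into individual action segments.

    magnitudes: per-sample motion magnitude (e.g. a gyroscope norm).
    A run of at least `min_pause` consecutive values below `threshold`
    is treated as the pause separating two repetitions.  Returns a list
    of (start, end) sample-index pairs, end exclusive.
    """
    segments, start, quiet = [], None, 0
    for i, m in enumerate(magnitudes):
        if m >= threshold:
            if start is None:
                start = i          # a new action begins
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_pause:  # the pause is long enough: close the segment
                segments.append((start, i - quiet + 1))
                start, quiet = None, 0
    if start is not None:           # recording ended mid-action
        segments.append((start, len(magnitudes) - quiet))
    return segments

gyro_norm = [0.0, 0.5, 0.6, 0.0, 0.0, 0.0, 0.7, 0.8, 0.0, 0.0, 0.0]
print(split_by_pauses(gyro_norm))  # -> [(1, 3), (6, 8)]
```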
S202. Perform image conversion on the sample sensor data to obtain sample image data.
After the sample sensor data has been acquired, image conversion can be performed on it to obtain sample image data. Preferably, the sample sensor data can be converted into an image in the form of a curve graph: within a set period of time, a two-dimensional graph is formed with time on the horizontal (X) axis and the sample sensor data on the vertical (Y) axis.
Optionally, the sample sensor data can instead be converted into an image in the form of a histogram, formed in the same way with time on the horizontal (X) axis and the sample sensor data on the vertical (Y) axis.
After the sample sensor data has been converted into a two-dimensional curve graph or a histogram in this way, the sample image data corresponding to the sample sensor data has been obtained.
S203. Input the sample image data into a preset recognition model for training to obtain the target recognition model.
After the sample image data has been acquired, it is input into the preset recognition model for gesture type recognition, and the misrecognition rate of the recognition model is then obtained. Specifically, before the sample image data is input into the preset recognition model for training, the user can label the type of each sample image, forming a first type label for that sample image data; the first type label identifies the true gesture type corresponding to the sample image data. After a sample image has been processed by the recognition model, the model generates a type label for it, forming a second type label that identifies the gesture type the model judged the sample to be. The error rate of the recognition model is then obtained statistically from the first and second type labels of the sample image data: the number of samples whose second type label disagrees with their first type label is counted, and the ratio of that count to the total number of samples is the misrecognition rate of the recognition model.
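The misrecognition-rate computation described above reduces to counting label disagreements. A minimal sketch, with hypothetical gesture-type names:

```python
def misrecognition_rate(first_labels, second_labels):
    """Fraction of samples whose model-assigned type (second label)
    disagrees with the hand-assigned ground truth (first label)."""
    assert len(first_labels) == len(second_labels)
    wrong = sum(a != b for a, b in zip(first_labels, second_labels))
    return wrong / len(first_labels)

truth = ["shake", "shake", "pan", "circle_ur", "circle_ul"]
preds = ["shake", "pan",   "pan", "circle_ur", "circle_ur"]
print(misrecognition_rate(truth, preds))  # 2 of 5 disagree -> 0.4
```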
After the misrecognition rate has been obtained, it is compared with a preset threshold. If the misrecognition rate is higher than or equal to the threshold, the recognition effect of the model is poor and too much data is misjudged, so the recognition model must be adjusted to obtain one with a good recognition effect. When the misrecognition rate is higher than or equal to the preset threshold, the parameters of the recognition model are adjusted. Specifically, when the recognition model is constructed based on a neural network, the number of network layers, the learning rate, the convolution kernels, and so on can be adjusted so that training converges, that is, so that the misrecognition rate of the recognition model drops below the preset threshold. Preferably, a convolutional neural network is used to construct the recognition model, which mainly comprises three convolutional layers, one pooling layer, and two fully connected layers.
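As a rough illustration of such an architecture, the helper below traces feature-map sizes through three convolutional layers, one pooling layer, and two fully connected layers. The kernel sizes, channel counts, input resolution, and class count are illustrative assumptions, not values given in this embodiment.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial size of a feature map after a convolution/pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

def sketch_network(image_size=64):
    """Trace sizes through 3 conv layers, 1 pooling layer, 2 FC layers.

    Returns (flattened features entering FC1, FC1 width, FC2/output width).
    All hyperparameters here are hypothetical.
    """
    s, channels = image_size, 1
    for out_ch in (16, 32, 64):          # three 3x3 convolutions, padding 1
        s = conv_out(s, kernel=3, pad=1)  # spatial size preserved
        channels = out_ch
    s = conv_out(s, kernel=2, stride=2)   # one 2x2 max-pool halves the size
    flat = channels * s * s               # input width of the first FC layer
    fc1, fc2 = 128, 4                     # two FC layers; 4 gesture classes
    return flat, fc1, fc2

print(sketch_network(64))  # -> (65536, 128, 4)
```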
After the parameters of the recognition model have been adjusted, the adjusted model must continue to be trained on the sample image data until its misrecognition rate falls below the preset threshold, in order to obtain the final trained target recognition model. The recognition model whose misrecognition rate is below the threshold is then determined to be the target recognition model.
In this embodiment, because the user is sampled many times to obtain the user's image data, that image data can truly reflect the user's movement patterns, thereby capturing differences between individual users.
S204. Acquire sensor data corresponding to the current gesture from a sensor.
Here, the sensor data is used to characterize changes in the gesture.
S205. Perform image conversion on the sensor data to obtain image data.
S206. Input the image data into the target recognition model, and determine the target gesture type of the current gesture through the target recognition model.
For details of S204 to S206, reference may be made to the relevant descriptions in the foregoing embodiment, which are not repeated here.
In the gesture recognition method proposed in this embodiment, the sensor data corresponding to the current gesture is acquired from a sensor, where the sensor data characterizes changes in the gesture; image conversion is performed on the sensor data to obtain image data; and the image data is input into the target recognition model, which determines the target gesture type of the current gesture. Because the sensor data is converted into image data and the gesture type is then identified by the target recognition model using relatively mature image recognition technology, recognition accuracy can be improved.
Further, the recognition model is trained by machine learning to obtain a target recognition model whose misrecognition rate is below the threshold, and the image data corresponding to the sensor data is recognized by this target recognition model, further improving the accuracy of gesture recognition.
Fig. 3 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention. As shown in Fig. 3, this gesture recognition method is performed jointly by a smart wearable terminal and a server, and includes the following steps:
S300. The server trains a preset recognition model to obtain the target recognition model.
Specifically, the server receives the sample sensor data for training that the terminal collected from its sensors and sent over, performs image conversion on the sample sensor data to obtain sample image data, and inputs the sample image data into the preset recognition model for training to obtain the target recognition model. For the specific process, reference may be made to the relevant descriptions in the foregoing embodiments, which are not repeated here.
Optionally, after the server obtains the target recognition model, it may feed the model back to the smart wearable terminal so that the target recognition model is deployed on the terminal.
S301. The smart wearable terminal acquires sensor data of the current gesture.
Fig. 4 is a schematic diagram of the hardware composition of a smart wearable terminal provided in this embodiment. As shown in Fig. 4, the smart wearable terminal includes: a microcontroller unit (MCU), a gravity sensor (G-sensor), a gyroscope sensor (A-sensor), a battery, Bluetooth, and flash memory. The G-sensor and A-sensor record data for the current gesture, i.e. they form the sensor data of the current gesture.
S302. The smart wearable terminal converts the sensor data into image data.
The MCU can collect the sensor data from the sensors, i.e. the gravity sensor and the gyroscope sensor, and then convert the sensor data into an image to obtain the image data.
For the specific process, reference may be made to the relevant descriptions in the foregoing embodiments, which are not repeated here.
S303. The smart wearable terminal reports the image data to the server.
The smart wearable terminal can report the image data to the server via Bluetooth. Optionally, the smart wearable terminal is provided with WiFi, and the image data can also be reported to the server over WiFi.
Further, the smart wearable terminal in Fig. 4 may also include a heart rate sensor (HR-sensor), through which data such as the user's heart rate is monitored.
S304. The server inputs the image data into the target recognition model for recognition and determines the target gesture type of the current gesture.
In this embodiment, the trained target recognition model resides on the server side. When the current gesture needs to be recognized, the smart wearable terminal reports the image data; specifically, it can report the image data to the server via its Bluetooth or WiFi. The server recognizes the image data online and, once recognition is complete, feeds the result back to the smart wearable terminal. The smart wearable terminal may also include a display screen on which the result fed back by the server is shown. In general, a smart wearable terminal is small, and placing the target recognition model on the server avoids occupying space on the terminal.
It should be noted here that the target recognition model can be deployed either on the smart wearable terminal or on the server. After the server finishes training the recognition model and obtains the target recognition model, it can feed the model back to the smart wearable terminal. When the target recognition model is on the terminal side, the terminal can recognize the sensor data of the current gesture locally and no longer needs to send it to the server for recognition, which reduces the load on the server.
Fig. 5 is a schematic diagram of the composition of the software modules of a smart wearable terminal provided in this embodiment. As shown in Fig. 5, the smart wearable terminal includes a sensor driver, a Bluetooth driver, a main control program, a WiFi module, a recognition module, an LCD driver, and data transmission; the sensor driver, Bluetooth driver, main control program, WiFi module, recognition module, and LCD driver communicate with the MCU through data transmission. The main control program can be used to control the MCU. The sensor driver drives the G-sensor, A-sensor, and HR-sensor. The Bluetooth driver drives the Bluetooth module, so that the acquired sensor data can be transmitted to the server via Bluetooth; alternatively, the smart wearable terminal may also include a WiFi module through which the sensor data is reported to the server over WiFi. Further, when the target recognition model is deployed in the recognition module, the smart wearable terminal recognizes the gesture type through that recognition module.
Fig. 6 is a schematic diagram of the composition of the software modules of a server provided in this embodiment. As shown in Fig. 6, the server includes: an Ubuntu operating system, the Caffe convolutional neural network framework (Convolutional Architecture for Fast Feature Embedding, CAFFE), sample training, model output, and data transmission. The convolutional neural network framework is used to construct the recognition model; the training of the recognition model is completed through sample training to obtain the target recognition model, which can be fed back to the smart wearable terminal through the model output. Further, the server may also include the Compute Unified Device Architecture (CUDA), a general-purpose parallel computing architecture through which the graphics processing unit (GPU) can solve complex computational problems.
In the gesture recognition method proposed in this embodiment, the sensor data corresponding to the current gesture is acquired from a sensor, where the sensor data characterizes changes in the gesture; image conversion is performed on the sensor data to obtain image data; and the image data is input into the target recognition model, which determines the target gesture type of the current gesture. Because the sensor data is converted into image data and the gesture type is then identified by the target recognition model using relatively mature image recognition technology, recognition accuracy can be improved.
Further, the recognition model is trained by machine learning to obtain a target recognition model whose misrecognition rate is below the threshold, and the image data corresponding to the sensor data is recognized by this target recognition model, further improving the accuracy of gesture recognition.
FIG. 7 is a schematic structural diagram of a gesture recognition device provided by an embodiment of the present invention. As shown in FIG. 7, the gesture recognition device 1 includes: an acquisition module 11, a conversion module 12, and a recognition module 13.
The acquisition module 11 is configured to acquire the sensor data corresponding to the current gesture from a sensor, where the sensor data characterizes the change of the gesture.
The smart wearable is provided with sensors, through which the acquisition module 11 can collect data about the user wearing it. In this embodiment, the smart wearable may be provided with a gravity sensor and a gyroscope sensor. When the user's gesture changes, these sensors record the sensor data corresponding to the current gesture. When the user's gesture needs to be recognized, the acquisition module 11 acquires the current gesture's sensor data from the gravity sensor and the gyroscope sensor. This sensor data comprises six axes: three-axis acceleration and three-axis angular velocity.
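As an illustration of the six-axis data described above, the following sketch assembles one window of accelerometer and gyroscope readings. `read_accel` and `read_gyro` are hypothetical placeholders for the device's sensor API, not part of the invention.

```python
import numpy as np

def collect_window(read_accel, read_gyro, n_samples=128):
    """Collect n_samples rows of (ax, ay, az, gx, gy, gz) six-axis data."""
    rows = []
    for _ in range(n_samples):
        ax, ay, az = read_accel()   # three-axis acceleration
        gx, gy, gz = read_gyro()    # three-axis angular velocity
        rows.append((ax, ay, az, gx, gy, gz))
    return np.asarray(rows)         # shape: (n_samples, 6)

# Usage with dummy readers standing in for the real sensors:
window = collect_window(lambda: (0.0, 0.0, 9.8),
                        lambda: (0.0, 0.0, 0.0), n_samples=4)
```

Each row of the window corresponds to one sampling instant, so a fixed-size window captures the gesture's change over time.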
The conversion module 12 is configured to perform image conversion on the sensor data to obtain image data.
The conversion module 12 is specifically configured to convert the sensor data and/or the sample sensor data into an image in the form of a curve graph.
Further, the conversion module 12 is specifically configured to form a two-dimensional curve graph over a set time period, with time on the horizontal axis and the sensor data and/or the sample sensor data on the vertical axis.
The conversion module 12 is specifically configured to convert the sensor data and/or the sample sensor data into an image in the form of a histogram.
Further, the conversion module 12 is specifically configured to form a histogram over a set time period, with time on the horizontal axis and the sensor data and/or the sample sensor data on the vertical axis.
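The curve-graph conversion above can be illustrated with a minimal sketch that rasterizes one axis of sensor data into a two-dimensional array, with time as the columns and the quantized value as the rows. The image size and the min-max scaling are assumptions for illustration, not the patent's specific parameters.

```python
import numpy as np

def series_to_image(series, height=32):
    """Rasterize a 1-D sensor series into a binary curve image."""
    series = np.asarray(series, dtype=float)
    lo, hi = series.min(), series.max()
    span = hi - lo if hi > lo else 1.0
    # Quantize each value to a row index in [0, height - 1].
    rows = ((series - lo) / span * (height - 1)).round().astype(int)
    img = np.zeros((height, len(series)), dtype=np.uint8)
    # Flip vertically so larger values sit higher in the image.
    img[height - 1 - rows, np.arange(len(series))] = 1
    return img

img = series_to_image([0, 1, 2, 3], height=4)
```

The same routine can be applied per axis and the six resulting images stacked, so the downstream model receives an ordinary image input.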
The recognition module 13 is configured to input the image data into a target recognition model, which determines the target gesture type of the current gesture.
After the image data corresponding to the sensor data is generated, the recognition module 13 inputs the image data into the target recognition model for recognition. The target recognition model can analyze the image data using image recognition technology to determine the target gesture type of the current gesture. Preferably, the target recognition model is built on a neural network and then trained with a large amount of sample data. Basic gesture categories include: shaking up and down, panning left and right, circling to the upper right, circling to the upper left, and so on.
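The final classification step can be sketched as follows, assuming the model outputs one score per gesture class and the highest-scoring class is reported. The class names and the score vector here are hypothetical stand-ins for the trained model's output.

```python
# Basic gesture categories, mirroring the list above.
GESTURES = ["shake up/down", "pan left/right",
            "circle upper-right", "circle upper-left"]

def classify(scores):
    """Map per-class model scores to a gesture label (placeholder for the model)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return GESTURES[best]

# Usage with a hypothetical score vector from the recognition model:
label = classify([0.1, 0.7, 0.15, 0.05])
```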
The gesture recognition device proposed in this embodiment acquires the sensor data corresponding to the current gesture from a sensor, where the sensor data characterizes the change of the gesture; performs image conversion on the sensor data to obtain image data; and inputs the image data into a target recognition model, which determines the target gesture type of the current gesture. In this embodiment, the sensor data is converted into image data, and the gesture type is then recognized by the target recognition model using relatively mature image recognition technology, which improves recognition accuracy.
FIG. 8 is a schematic structural diagram of another gesture recognition device provided by an embodiment of the present invention. As shown in FIG. 8, the gesture recognition device 2 includes, in addition to the acquisition module 11, the conversion module 12, and the recognition module 13 of the above embodiment, a collection module 14 and a training module 15.
The collection module 14 is configured to collect sample sensor data for training from the sensors.
The conversion module 12 is further configured to perform image conversion on the sample sensor data to obtain sample image data.
The training module 15 is configured to input the sample image data into a preset recognition model for gesture type recognition training, so as to obtain the target recognition model.
FIG. 9 is a schematic diagram of an optional structure of the training module 15 in this embodiment. The training module 15 includes: a training unit 151, an acquisition unit 152, an adjustment unit 153, and a determination unit 154.
The training unit 151 is configured to input the sample image data into the recognition model for gesture type recognition training, and to continue gesture type recognition training on the adjusted recognition model based on the sample image data until the false recognition rate falls below the threshold.
The acquisition unit 152 is configured to obtain the false recognition rate of the recognition model.
The adjustment unit 153 is configured to adjust the parameters of the recognition model if the false recognition rate is higher than or equal to a preset threshold.
The determination unit 154 is configured to determine the recognition model whose false recognition rate is below the threshold as the target recognition model.
Further, when the recognition model is built on a neural network, the adjustment unit 153 is specifically configured to:
adjust the number of network layers, the learning rate, and the convolution kernels of the neural network in the recognition model.
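The interaction of units 151-154 can be sketched as a simple loop, assuming hypothetical `train`, `evaluate`, and `adjust` callbacks; in the real system the adjustment step would change the layer count, learning rate, or convolution kernels.

```python
def train_until_threshold(train, evaluate, params, threshold, adjust,
                          max_rounds=100):
    """Train, measure the false recognition rate, and adjust parameters
    until the rate falls below the threshold."""
    for _ in range(max_rounds):
        model = train(params)          # training unit 151
        rate = evaluate(model)         # acquisition unit 152
        if rate < threshold:
            return model               # determination unit 154: target model
        params = adjust(params)        # adjustment unit 153
    raise RuntimeError("false recognition rate never fell below the threshold")

# Usage with dummy callbacks whose rate falls as the parameter grows:
model = train_until_threshold(
    train=lambda p: p,
    evaluate=lambda m: 0.5 / (m + 1),
    params=0,
    threshold=0.1,
    adjust=lambda p: p + 1,
)
```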
Further, the acquisition unit 152 is specifically configured to:
obtain the first-type labels corresponding to the sample image data;
obtain the second-type labels, recognized by the recognition model, corresponding to the sample image data;
and compute the false recognition rate from the first-type labels and the second-type labels.
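A minimal sketch of this computation, assuming the labels are plain strings: compare the first-type (ground-truth) labels with the second-type (predicted) labels and take the fraction that disagree as the false recognition rate.

```python
def false_recognition_rate(first_type, second_type):
    """Fraction of samples whose predicted label differs from the true label."""
    assert len(first_type) == len(second_type)
    wrong = sum(t != p for t, p in zip(first_type, second_type))
    return wrong / len(first_type)

# Usage with hypothetical gesture labels (one mismatch out of four):
rate = false_recognition_rate(["circle", "shake", "pan", "shake"],
                              ["circle", "shake", "shake", "shake"])
```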
The gesture recognition device proposed in this embodiment acquires the sensor data corresponding to the current gesture from a sensor, where the sensor data characterizes the change of the gesture; performs image conversion on the sensor data to obtain image data; and inputs the image data into a target recognition model, which determines the target gesture type of the current gesture. In this embodiment, the sensor data is converted into image data, and the gesture type is then recognized by the target recognition model using relatively mature image recognition technology, which improves recognition accuracy.
Further, the recognition model is trained by machine learning until a target recognition model with a false recognition rate below a threshold is obtained; recognizing the image data corresponding to the sensor data with this target recognition model further improves the accuracy of gesture recognition.
FIG. 10 is a schematic structural diagram of a server provided by an embodiment of the present invention. As shown in FIG. 10, the server 3 includes the gesture recognition device 2 of the above embodiment.
In this embodiment, the acquisition module 11, the conversion module 12, and the recognition module 13 of the gesture recognition device 2 are deployed on the server 3. The server 3 can acquire the current gesture's sensor data online through the acquisition module 11, perform image conversion on the sensor data through the conversion module 12 to obtain image data, and then have the recognition module 13 recognize the current gesture's sensor data based on the target recognition model to determine the target gesture type of the current gesture.
Further, the collection module 14 and the training module 15 of the gesture recognition device 2 are also deployed on the server 3; training on the sample sensor data is performed through the collection module 14 and the training module 15, finally yielding the target recognition model.
Optionally, the server 3 may include only the collection module 14 and the training module 15 of the gesture recognition device 2; that is, the recognition model is trained on the server 3 to obtain the target recognition model, which is then fed back to the smart wearable terminal, and the smart wearable terminal recognizes the type of the current gesture.
In this embodiment, the trained target recognition model resides on the server side. When the current gesture needs to be recognized, the smart wearable terminal reports the image data; specifically, the image data can be reported to the server via the terminal's Bluetooth or WiFi. The server recognizes the image data online and, once recognition is complete, feeds the recognition result back to the smart wearable terminal. The smart wearable terminal may also include a display screen on which the result fed back by the server can be shown. Since a smart wearable terminal is generally small, placing the target recognition model on the server frees up space on the terminal.
Further, because the training process requires a large amount of data and the server's performance is superior to that of the smart wearable terminal, training efficiency is improved.
FIG. 11 is a schematic structural diagram of a smart wearable terminal provided by an embodiment of the present invention. As shown in FIG. 11, the smart wearable terminal 4 includes the gesture recognition device 1; that is, the acquisition module 11, the conversion module 12, and the recognition module 13 of the gesture recognition device 1 are deployed on the smart wearable terminal 4. The smart wearable terminal 4 can acquire the current gesture's sensor data offline through the acquisition module 11, perform image conversion on the sensor data through the conversion module 12 to obtain image data, and then have the recognition module 13 recognize the current gesture's sensor data based on the target recognition model to determine the target gesture type of the current gesture.
The target recognition model in the recognition module 13 is fed back to the smart wearable terminal 4 by the server after the server has trained the preset recognition model based on the sample sensor data.
In this embodiment, when the target recognition model is on the smart wearable terminal side, the terminal can recognize the current gesture's sensor data locally, without sending it to the server for recognition, which reduces the load on the server.
FIG. 12 is a schematic structural diagram of a gesture recognition system provided by an embodiment of the present invention. As shown in FIG. 12, the gesture recognition system includes: a smart wearable terminal 5 and a server 6.
The smart wearable terminal 5 includes the acquisition module 11, the conversion module 12, and the recognition module 13. Further, the server 6 includes the collection module 14 and the training module 15. Together, the acquisition module 11, the conversion module 12, the recognition module 13, the collection module 14, and the training module 15 form a gesture recognition device 2.
In this embodiment, when the target recognition model is on the smart wearable terminal side, the terminal can recognize the current gesture's sensor data locally, without sending it to the server for recognition, which reduces the load on the server. Further, because the training process requires a large amount of data and the server's performance is superior to that of the smart wearable terminal, training efficiency is improved.
It should be noted that in the description of the present invention, the terms "first", "second", and so on are used for descriptive purposes only and shall not be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise specified, "a plurality of" means two or more.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
It should be understood that each part of the present invention may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, implementation may use any one of the following technologies known in the art, or a combination thereof: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may physically exist separately, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201611041278.9A CN108089693A (en) | 2016-11-22 | 2016-11-22 | Gesture recognition method and device, smart wearable terminal and server |
| Publication Number | Publication Date |
|---|---|
| CN108089693A (en) | 2018-05-29 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201611041278.9A (Pending) CN108089693A (en) | 2016-11-22 | 2016-11-22 | Gesture recognition method and device, smart wearable terminal and server |
| Country | Link |
|---|---|
| CN (1) | CN108089693A (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109241925A (en)* | 2018-09-18 | 2019-01-18 | 深圳市格莱科技有限公司 | A kind of smart pen |
| CN109858380A (en)* | 2019-01-04 | 2019-06-07 | 广州大学 | Expansible gesture identification method, device, system, gesture identification terminal and medium |
| CN111695408A (en)* | 2020-04-23 | 2020-09-22 | 西安电子科技大学 | Intelligent gesture information recognition system and method and information data processing terminal |
| CN112383804A (en)* | 2020-11-13 | 2021-02-19 | 四川长虹电器股份有限公司 | Gesture recognition method based on empty mouse track |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102024151A (en)* | 2010-12-02 | 2011-04-20 | 中国科学院计算技术研究所 | Training method of gesture motion recognition model and gesture motion recognition method |
| CN102402680A (en)* | 2010-09-13 | 2012-04-04 | 株式会社理光 | Hand and indication point positioning method and gesture confirming method in man-machine interactive system |
| CN104656878A (en)* | 2013-11-19 | 2015-05-27 | 华为技术有限公司 | Method, device and system for recognizing gesture |
| US20150346833A1 (en)* | 2014-06-03 | 2015-12-03 | Beijing TransBorder Information Technology Co., Ltd. | Gesture recognition system and gesture recognition method |
| CN105654037A (en)* | 2015-12-21 | 2016-06-08 | 浙江大学 | Myoelectric signal gesture recognition method based on depth learning and feature images |
| Title |
|---|
| Xu Libo, "Gesture Pattern Recognition Based on MEMS Inertial Sensors", China Master's Theses Full-text Database, Information Science and Technology* |
| Wang Weidong, "Research on a Data Glove Based on Micro-Inertial Technology", China Master's Theses Full-text Database, Information Science and Technology* |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2018-05-29 | |