CN111368762A - Robot gesture recognition method based on improved K-means clustering algorithm - Google Patents

Robot gesture recognition method based on improved K-means clustering algorithm

Info

Publication number
CN111368762A
CN111368762A
Authority
CN
China
Prior art keywords
robot
sample
improved
category
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010157400.9A
Other languages
Chinese (zh)
Inventor
杨忠
宋爱国
徐宝国
吴有龙
唐玉娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinling Institute of Technology
Original Assignee
Jinling Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinling Institute of Technology
Priority to CN202010157400.9A
Publication of CN111368762A
Status: Pending

Abstract

Translated from Chinese

A robot gesture recognition method based on an improved K-means clustering algorithm. Step 1: collect hand-motion data with a glove embedded with micro-nano optical fiber sensors, the sensor data being six-dimensional. Step 2: upload the data collected by the micro-nano optical fiber sensors to the robot through the WIFI module on the glove. Step 3: the robot predetermines the cluster centers corresponding to the different gestures using the improved K-means clustering algorithm. Step 4: compute the Euclidean distance from the currently collected sensor data to each predetermined gesture cluster center. Step 5: compare each computed Euclidean distance with the threshold of the corresponding category; if the distance is below the category threshold, the sample is assigned to that category, otherwise the model is retrained. Step 6: the robot performs the corresponding action according to the decision, completing one full closed loop. Through the improved K-means clustering algorithm, the invention achieves accurate robot recognition of various gestures.

Description

Translated from Chinese
Robot gesture recognition method based on improved K-means clustering algorithm

Technical Field

The invention relates to the field of robot gesture recognition, and in particular to a robot gesture recognition method based on an improved K-means clustering algorithm.

Background

With the continuous development of artificial intelligence and virtual reality technology, human-computer interaction has become a research hotspot. As an emerging mode of human-computer interaction, gesture recognition has attracted the attention of many researchers, produced a series of effective results, and been widely applied in devices such as intelligent robots and intelligent driving systems. Simply put, gesture recognition lets a machine understand, with the help of a vision or sensor acquisition system, what a human intends to express: the interaction is completed without contact, the robot then performs the corresponding action, and intelligence is realized in a true sense.

For the problem of robot gesture recognition, domestic patents addressing it include "A depth-vision-based collaborative robot gesture recognition method and device" (201910176271.5), which pre-acquires a set of gesture templates together with several depth images of the gesture to be recognized; for each gesture template, it computes the distance between the gesture to be recognized and that template, takes the template with the smallest distance as the recognition result, and then controls the collaborative robot with the control parameters corresponding to that result. The national invention patent "A gesture recognition method based on an intelligent robot" (201910118356.8) acquires gesture images from the robot's built-in camera and builds gesture templates; segments the gesture using skin-color detection and maximum inter-class variance; denoises the segmented gesture image with median filtering and extracts the gesture edge contour; and then, based on the gesture templates and edge contours, applies Euclidean-distance template matching to obtain the recognition result. Both patents recognize gesture images: the high dimensionality of image data increases the difficulty of model training on the one hand and, in practical use, lengthens the model's decision time on the other.

Summary of the Invention

To solve the above problems, the invention proposes a robot gesture recognition method based on an improved K-means clustering algorithm, built on micro-nano optical fiber sensors and the K-means clustering algorithm. First, micro-nano optical fiber sensors collect the data corresponding to different gestures; then the improved K-means clustering algorithm determines, in turn, the cluster centers and category thresholds corresponding to the different gestures; the model also supports online update and optimization, which greatly improves its generalization; finally, the method has been applied successfully in practice, achieving accurate robot recognition of different gestures. To this end, the invention provides a robot gesture recognition method based on an improved K-means clustering algorithm with the following specific steps, characterized in that:

Step 1: collect hand-motion data using a glove embedded with micro-nano optical fiber sensors, the data collected by the sensors being six-dimensional;

Step 2: upload the data collected by the micro-nano optical fiber sensors to the robot through the WIFI module on the glove;

Step 3: the robot predetermines the cluster centers corresponding to the different gestures using the improved K-means clustering algorithm;

Step 4: compute the Euclidean distances from the currently collected sensor data to the predetermined cluster centers of the different gestures;

Step 5: compare each computed Euclidean distance with the threshold of the corresponding category; if the distance is below the category threshold, assign the sample to that category; otherwise, retrain the model;

Step 6: the robot performs the corresponding action according to the decision; this completes one full closed loop.

Further, the specific steps of predetermining the cluster centers corresponding to the different gestures in Step 3 with the improved K-means clustering algorithm are:

Step 3.1: arbitrarily select one sample point among all sample points as the initial cluster center $c_1$ of the first category;

Step 3.2: for the whole training sample set $X=\{x_j \mid j=1,2,\dots,n\}$, compute the distance from every sample $x$ to the existing cluster centers, and take the position of the sample with the maximum distance as the new cluster center;

Step 3.3: repeat Step 3.2 until $k$ cluster centers $c_i$ $(1 \le i \le k)$ have been determined;

Step 3.4: for the whole training sample set $X=\{x_j \mid j=1,2,\dots,n\}$, compute the Euclidean distance from each sample point $x_j$ to each of the $k$ cluster centers $c_i$ $(1 \le i \le k)$ determined in Step 3.3; for $s$-dimensional samples, the Euclidean distance from sample $x_j$ to the center $c_i$ of category $i$ is:

$$d(x_j, c_i) = \sqrt{\sum_{l=1}^{s} \left( x_{jl} - c_{il} \right)^2}$$

Step 3.5: assign each sample to the category whose center has the smallest Euclidean distance; after the whole sample space has been traversed, the construction of the $k$ clusters is complete;

Step 3.6: for each cluster, take the mean vector of all sample points in the cluster as the new cluster center, i.e. the cluster update rule is:

$$c_i = \frac{1}{m_i} \sum_{x_j \in C_i} x_j$$

where $c_i$ is the updated center of each cluster, $m_i$ is the total number of samples in the $i$-th cluster, and $\sum_{x_j \in C_i} x_j$ is the dimension-wise sum of all sample vectors in the cluster.

Step 3.7: repeat Steps 3.4 to 3.6 until the squared error function converges or the number of iterations reaches a set limit, where the squared error function is:

$$E = \sum_{i=1}^{k} \sum_{x_j \in C_i} \left\| x_j - c_i \right\|^2$$

Further, the retraining of the model in Step 5, triggered when the Euclidean distances from the real-time data to the cluster centers exceed every category threshold, is specified as follows:

Label the sample collected in real time using prior knowledge, then feed the data into the trained model to update and correct it: the cluster centers of each category and the corresponding category thresholds are updated. Because the model supports this update and optimization, its generalization is greatly improved.

The beneficial effects and technical contributions of the robot gesture recognition method based on the improved K-means clustering algorithm according to the invention are:

1. The invention uses six-dimensional micro-nano optical fiber sensors to collect the data of the current gesture in real time. Compared with the traditional image-based approach to gesture data collection, the sample dimensionality is far lower, which shortens the training time of the model and speeds up its decisions while still guaranteeing high accuracy;

2. The invention uses the improved K-means clustering algorithm to cluster the different gestures. Compared with the traditional K-means method, it is better at avoiding poor initial cluster centers, and it can accurately classify the different gestures and determine the category thresholds;

3. The robot gesture recognition model of the invention supports optimization and updating: when the trained model cannot classify data collected in real time, those data are fed back into the model as training data for retraining, updating the category centers and category thresholds of every gesture category and thereby greatly improving the generalization of the model.

Description of the Drawings

Figure 1 is the flow chart of the invention;

Figure 2 is a schematic diagram of the cluster centers and category thresholds of the different gestures identified with the improved K-means clustering algorithm of the invention.

Detailed Description

The invention is described in further detail below with reference to the drawings and specific embodiments.

The invention proposes a robot gesture recognition method based on an improved K-means clustering algorithm, aiming to achieve accurate robot recognition of different gestures simply and efficiently.

Figure 1 is the flow chart of the invention. The steps of the invention are described in detail below with reference to the flow chart.

Step 1: collect hand-motion data using a glove embedded with micro-nano optical fiber sensors, the data collected by the sensors being six-dimensional;

Step 2: upload the data collected by the micro-nano optical fiber sensors to the robot through the WIFI module on the glove;

Step 3: the robot predetermines the cluster centers corresponding to the different gestures using the improved K-means clustering algorithm;

Step 3.1: arbitrarily select one sample point among all sample points as the initial cluster center $c_1$ of the first category;

Step 3.2: for the whole training sample set $X=\{x_j \mid j=1,2,\dots,n\}$, compute the distance from every sample $x$ to the existing cluster centers, and take the position of the sample with the maximum distance as the new cluster center;

Step 3.3: repeat Step 3.2 until $k$ cluster centers $c_i$ $(1 \le i \le k)$ have been determined;

Step 3.4: for the whole training sample set $X=\{x_j \mid j=1,2,\dots,n\}$, compute the Euclidean distance from each sample point $x_j$ to each of the $k$ cluster centers $c_i$ $(1 \le i \le k)$ determined in Step 3.3; for $s$-dimensional samples, the Euclidean distance from sample $x_j$ to the center $c_i$ of category $i$ is:

$$d(x_j, c_i) = \sqrt{\sum_{l=1}^{s} \left( x_{jl} - c_{il} \right)^2}$$

Step 3.5: assign each sample to the category whose center has the smallest Euclidean distance; after the whole sample space has been traversed, the construction of the $k$ clusters is complete;

Step 3.6: for each cluster, take the mean vector of all sample points in the cluster as the new cluster center, i.e. the cluster update rule is:

$$c_i = \frac{1}{m_i} \sum_{x_j \in C_i} x_j$$

where $c_i$ is the updated center of each cluster, $m_i$ is the total number of samples in the $i$-th cluster, and $\sum_{x_j \in C_i} x_j$ is the dimension-wise sum of all sample vectors in the cluster.

Step 3.7: repeat Steps 3.4 to 3.6 until the squared error function converges or the number of iterations reaches a set limit, where the squared error function is:

$$E = \sum_{i=1}^{k} \sum_{x_j \in C_i} \left\| x_j - c_i \right\|^2$$
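
As a concrete illustration of Steps 3.1 to 3.7, the following is a minimal Python sketch (using NumPy) of the improved K-means training procedure. The farthest-point initialization and the mean-update iteration follow the steps above; the function names, the convergence tolerance `tol`, and the rule that each category threshold is the maximum in-cluster distance to its center (suggested by Figure 2 but not spelled out in the text) are illustrative assumptions, not part of the patent.

```python
import numpy as np

def init_centers_farthest(X, k, rng):
    """Improved initialization (Steps 3.1-3.3): start from one random sample,
    then repeatedly promote the sample farthest from its nearest center."""
    centers = [X[rng.integers(len(X))]]                        # Step 3.1: random first center
    while len(centers) < k:                                    # Step 3.3: until k centers exist
        d = np.linalg.norm(X[:, None, :] - np.asarray(centers)[None, :, :],
                           axis=2).min(axis=1)                 # distance to nearest center
        centers.append(X[int(np.argmax(d))])                   # Step 3.2: farthest sample wins
    return np.asarray(centers)

def train_improved_kmeans(X, k, max_iter=100, tol=1e-6, seed=0):
    """Steps 3.4-3.7 plus threshold extraction: assign samples to the nearest
    center, re-estimate centers as cluster means, and stop once the squared
    error E converges or max_iter is reached."""
    rng = np.random.default_rng(seed)
    centers = init_centers_farthest(X, k, rng)
    prev_err = np.inf
    for _ in range(max_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # Step 3.4
        labels = np.argmin(dists, axis=1)                      # Step 3.5: nearest-center assignment
        centers = np.asarray([X[labels == i].mean(axis=0)      # Step 3.6: cluster-mean update
                              if np.any(labels == i) else centers[i]
                              for i in range(k)])
        err = sum(float(((X[labels == i] - centers[i]) ** 2).sum())
                  for i in range(k))                           # squared error E
        if abs(prev_err - err) < tol:                          # Step 3.7: convergence check
            break
        prev_err = err
    # Assumed threshold rule: the largest in-cluster distance to the final center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    thresholds = np.asarray([dists[labels == i, i].max()
                             if np.any(labels == i) else 0.0
                             for i in range(k)])
    return centers, thresholds, labels
```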

Step 4: compute the Euclidean distances from the currently collected sensor data to the predetermined cluster centers of the different gestures;

Step 5: compare each computed Euclidean distance with the threshold of the corresponding category; if the distance is below the category threshold, assign the sample to that category; otherwise, retrain the model. Retraining proceeds as follows: the sample collected in real time is labeled using prior knowledge and fed into the trained model to update and correct it, i.e. the cluster centers of each category and the corresponding category thresholds are updated. Because the model supports this update and optimization, its generalization is greatly improved. A sketch of this decision rule follows below.
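
Continuing the sketch above, Steps 4 and 5 reduce to a nearest-center test against per-category thresholds; a distance that exceeds every threshold signals that the model should be retrained. `classify_gesture` and `update_model` are assumed names, and appending the newly labeled sample to the training data before retraining is one plausible reading of the update described here.

```python
def classify_gesture(x, centers, thresholds):
    """Steps 4-5: Euclidean distance from a new 6-D sensor sample x to every
    gesture center; accept the nearest category only if its distance is below
    that category's threshold, otherwise report that retraining is needed."""
    d = np.linalg.norm(centers - x, axis=1)   # distance to each gesture center
    i = int(np.argmin(d))
    return i if d[i] < thresholds[i] else None  # None means: outside all thresholds

def update_model(X_train, x_new, k):
    """Step 5 fallback (assumed procedure): the rejected sample, labeled from
    prior knowledge, is appended to the training set, and the centers and
    category thresholds are re-estimated."""
    X_aug = np.vstack([X_train, x_new])
    centers, thresholds, _ = train_improved_kmeans(X_aug, k)
    return X_aug, centers, thresholds
```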

Step 6: the robot performs the corresponding action according to the decision; this completes one full closed loop.
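
Putting the pieces together, the closed loop of Step 6 might look like the following driver sketch. Here `read_sensor()` and `execute_action()` are hypothetical placeholders for the glove's WIFI data stream and the robot's actuation interface, and the choice of `k=5` (one cluster per gesture class) is purely illustrative.

```python
# Hypothetical closed loop (Step 6). X_train holds labeled 6-D glove samples;
# read_sensor() and execute_action() stand in for the hardware interfaces.
centers, thresholds, _ = train_improved_kmeans(X_train, k=5)
while True:
    x = read_sensor()                                   # one 6-D sample via WIFI (Steps 1-2)
    gesture = classify_gesture(x, centers, thresholds)  # Steps 4-5
    if gesture is None:
        # Outside every threshold: label x from prior knowledge, then retrain (Step 5).
        X_train, centers, thresholds = update_model(X_train, x, k=5)
    else:
        execute_action(gesture)                         # Step 6: robot acts on the gesture
```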

Figure 2 is a schematic diagram of the cluster centers and category thresholds of the different categories identified with the improved K-means clustering algorithm of the invention. As the figure shows, the improved K-means clustering algorithm determines the cluster centers and boundaries of the different categories simply and effectively, from which the corresponding category thresholds are then obtained.

The above is only a preferred embodiment of the invention and does not restrict the invention in any other form; any modification or equivalent change made according to the technical essence of the invention still falls within the scope of protection claimed by the invention.

Claims (3)

1. A robot gesture recognition method based on an improved K-means clustering algorithm, comprising the following specific steps:
step 1, collecting hand-motion data by using a glove embedded with micro-nano optical fiber sensors, wherein the data collected by the sensors are six-dimensional;
step 2, uploading the data collected by the micro-nano optical fiber sensors to a robot through a WIFI module on the glove;
step 3, predetermining, by the robot, the cluster centers corresponding to different gestures in combination with an improved K-means clustering algorithm;
step 4, calculating Euclidean distances from the currently collected sensor data to the predetermined cluster centers of the different gestures;
step 5, comparing each calculated Euclidean distance with the threshold of the corresponding category; if a Euclidean distance is lower than its category threshold, assigning the sample to that category, and otherwise retraining the model;
and step 6, performing, by the robot, the corresponding action according to the decision, thereby completing a full closed loop.
2. The robot gesture recognition method based on the improved K-means clustering algorithm of claim 1, characterized in that the specific steps of using the improved K-means clustering algorithm in step 3 to predetermine the cluster centers corresponding to different gestures are as follows:
step 3.1, arbitrarily selecting one sample point among all sample points as the initial cluster center $c_1$ of the first category;
step 3.2, for the whole training sample set $X=\{x_j \mid j=1,2,\dots,n\}$, calculating the distance from each sample $x$ to the existing cluster centers, and taking the position of the sample with the maximum distance as the new cluster center;
step 3.3, repeating step 3.2 until $k$ cluster centers $c_i$ $(1 \le i \le k)$ are determined;
step 3.4, for the whole training sample set $X=\{x_j \mid j=1,2,\dots,n\}$, calculating the Euclidean distance from each sample point $x_j$ to each of the $k$ cluster centers $c_i$ $(1 \le i \le k)$ determined in step 3.3, wherein, for $s$-dimensional samples, the Euclidean distance from sample $x_j$ to the center $c_i$ of category $i$ is:
$$d(x_j, c_i) = \sqrt{\sum_{l=1}^{s} \left( x_{jl} - c_{il} \right)^2}$$
step 3.5, assigning each sample to the category whose center has the nearest Euclidean distance, the construction of the $k$ clusters being complete after the whole sample space has been traversed;
step 3.6, for each cluster, taking the mean vector of all sample points in the cluster as the new cluster center, namely the update criterion of the cluster is:
$$c_i = \frac{1}{m_i} \sum_{x_j \in C_i} x_j$$

wherein $c_i$ is the updated cluster center, $m_i$ denotes the total number of samples in the $i$-th cluster, and $\sum_{x_j \in C_i} x_j$ represents the dimension-wise sum of all sample vectors in the cluster;
step 3.7, repeating steps 3.4 to 3.6 until the squared error function converges or the number of iterations reaches a set limit, wherein the expression of the squared error function is:
$$E = \sum_{i=1}^{k} \sum_{x_j \in C_i} \left\| x_j - c_i \right\|^2$$
3. The robot gesture recognition method based on the improved K-means clustering algorithm of claim 1, characterized in that, if the Euclidean distances from the data collected in real time to the cluster centers in step 5 are greater than every category threshold, the retraining of the model is specified as follows:
labeling the sample collected in real time by means of prior knowledge, and then feeding the data into the trained model to update and correct it: the cluster centers of each category and the corresponding category thresholds are updated. The model supports update and optimization, which greatly improves its generalization.
CN202010157400.9A · 2020-03-09 · 2020-03-09 · Robot gesture recognition method based on improved K-means clustering algorithm · Pending · CN111368762A (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202010157400.9A · 2020-03-09 · 2020-03-09 · Robot gesture recognition method based on improved K-means clustering algorithm

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN202010157400.9A · 2020-03-09 · 2020-03-09 · Robot gesture recognition method based on improved K-means clustering algorithm

Publications (1)

Publication Number · Publication Date
CN111368762A (en) · 2020-07-03

Family

ID=71206751

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN202010157400.9A · Robot gesture recognition method based on improved K-means clustering algorithm (Pending, CN111368762A (en)) · 2020-03-09 · 2020-03-09

Country Status (1)

Country · Link
CN (1) · CN111368762A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN105571045A (en)* · 2014-10-10 · 2016-05-11 · 青岛海尔空调电子有限公司 · Somatosensory identification method, apparatus and air conditioner controller
CN105956604A (en)* · 2016-04-20 · 2016-09-21 · 广东顺德中山大学卡内基梅隆大学国际联合研究院 · Action identification method based on two layers of space-time neighborhood characteristics
CN106845348A (en)* · 2016-12-20 · 2017-06-13 · 南京信息工程大学 · A kind of gesture identification method based on arm surface electromyographic signal
CN107014411A (en)* · 2017-04-05 · 2017-08-04 · 浙江大学 · A kind of flexible micro-nano fiber angle sensor chip and sensor and preparation method
CN109032337A (en)* · 2018-06-28 · 2018-12-18 · 济南大学 · A kind of KEM Gesture Recognition Algorithm based on data glove
CN108983973A (en)* · 2018-07-03 · 2018-12-11 · 东南大学 · A kind of humanoid dexterous myoelectric prosthetic hand control method based on gesture identification
CN109547136A (en)* · 2019-01-28 · 2019-03-29 · 北京邮电大学 · Distributed collaborative frequency spectrum sensing method based on minimax apart from sub-clustering
CN110147754A (en)* · 2019-05-17 · 2019-08-20 · 金陵科技学院 · A kind of dynamic gesture identification method based on VR technology
CN110348323A (en)* · 2019-06-19 · 2019-10-18 · 广东工业大学 · A kind of wearable device gesture identification method based on Neural Network Optimization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王道东: "基于人机交互系统的手势识别方法研究" (Research on a gesture recognition method based on a human-computer interaction system), 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN112016621A (en)* · 2020-08-28 · 2020-12-01 · 上海第一财经数据科技有限公司 · Training method of classification model, color classification method and electronic equipment
CN112035663A (en)* · 2020-08-28 · 2020-12-04 · 京东数字科技控股股份有限公司 · Cluster analysis method, device, equipment and storage medium
CN112016621B (en)* · 2020-08-28 · 2023-11-24 · 上海应帆数字科技有限公司 · Training method of classification model, color classification method and electronic equipment
CN112035663B (en)* · 2020-08-28 · 2024-05-17 · 京东科技控股股份有限公司 · Cluster analysis method, device, equipment and storage medium
CN112446296A (en)* · 2020-10-30 · 2021-03-05 · 杭州易现先进科技有限公司 · Gesture recognition method and device, electronic device and storage medium
CN112446296B (en)* · 2020-10-30 · 2024-10-22 · 杭州易现先进科技有限公司 · Gesture recognition method and device, electronic device and storage medium
CN113608074A (en)* · 2021-06-17 · 2021-11-05 · 国网浙江省电力有限公司营销服务中心 · Automatic online monitoring method and system for multi-epitope voltage withstand test device
CN113608074B (en)* · 2021-06-17 · 2024-05-28 · 国网浙江省电力有限公司营销服务中心 · Automatic online monitoring method and system for multi-epitope withstand voltage test device
CN113516063A (en)* · 2021-06-29 · 2021-10-19 · 北京精密机电控制设备研究所 · Motion mode identification method based on K-Means and gait cycle similarity

Similar Documents

Publication · Title
CN111368762A (en) · Robot gesture recognition method based on improved K-means clustering algorithm
CN109858406B (en) · Key frame extraction method based on joint point information
CN104463100B (en) · Intelligent wheel chair man-machine interactive system and method based on human facial expression recognition pattern
CN110728694B (en) · Long-time visual target tracking method based on continuous learning
CN103268495B (en) · Human body behavior modeling recognition methods based on priori knowledge cluster in computer system
CN105740823B (en) · Dynamic gesture track recognizing method based on depth convolutional neural networks
CN106407958B (en) · Face feature detection method based on double-layer cascade
CN108921107A (en) · Pedestrian's recognition methods again based on sequence loss and Siamese network
Kaâniche et al. · Recognizing gestures by learning local motion signatures of HOG descriptors
CN109522853A (en) · Face datection and searching method towards monitor video
CN109543615B (en) · A dual-learning model target tracking method based on multi-level features
CN109753897B (en) · Behavior recognition method based on memory cell reinforcement-time sequence dynamic learning
CN112084898B (en) · Assembly operation action recognition method based on static and dynamic separation
CN106778501A (en) · Video human face ONLINE RECOGNITION method based on compression tracking with IHDR incremental learnings
CN112883922B (en) · Sign language identification method based on CNN-BiGRU neural network fusion
CN112818175A (en) · Factory worker searching method and training method of worker recognition model
CN109087337B (en) · Long-time target tracking method and system based on hierarchical convolution characteristics
CN112381047B (en) · Enhanced recognition method for facial expression image
CN105976397B (en) · A kind of method for tracking target
CN113129336A (en) · End-to-end multi-vehicle tracking method, system and computer readable medium
CN108229401A (en) · A kind of multi-modal Modulation recognition method based on AFSA-SVM
CN110880010A (en) · Visual SLAM closed loop detection algorithm based on convolutional neural network
CN108898623A (en) · Method for tracking target and equipment
CN101799875A (en) · Target detection method
CN108830222A (en) · A kind of micro- expression recognition method based on informedness and representative Active Learning

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
RJ01 · Rejection of invention patent application after publication (application publication date: 2020-07-03)
