CN109325408A - A gesture judgment method and storage medium - Google Patents

A gesture judgment method and storage medium

Info

Publication number
CN109325408A
CN109325408A (application CN201810921965.2A)
Authority
CN
China
Prior art keywords
information
database
gesture
face image
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810921965.2A
Other languages
Chinese (zh)
Inventor
林昌
吕天德
林力婷
陈庆堂
林金兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Putian University
Original Assignee
Putian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Putian University
Priority to CN201810921965.2A
Publication of CN109325408A
Legal status: Pending


Abstract


The invention discloses a gesture judgment method and a storage medium. The method includes the following steps: receiving an image captured by a camera device and identifying all human skeletons in the image, each skeleton including a hand; judging whether the identified skeleton information matches the skeleton information stored in a database; if so, tracking the hand of the matching skeleton and determining the corresponding gesture information from the position changes of that hand; and executing the operation instruction corresponding to that gesture information according to a preset correspondence between gesture information and operation instructions. In this way, information from the surrounding environment other than the human skeletons is filtered out, reducing environmental interference with gesture recognition; skeletons not stored in the database are likewise filtered out of the collected skeleton information, and only the skeleton stored in the database, and its hand, is retained and tracked. The invention has the advantages of high stability and high robustness.

Description

A gesture judgment method and storage medium
Technical field
The present invention relates to the field of gesture recognition methods, and more particularly to a gesture judgment method and a storage medium.
Background technique
With the continuous development of computer technology, human-computer interaction modes change with each passing day. People complete interactions with computers naturally through sound, gestures, and limb movements, and gesture recognition is an important mode of human-computer interaction. Gesture recognition allows more natural, contactless interaction with a computer, and the fields it touches are very extensive, such as image processing, pattern recognition, computer vision, industrial control, intelligent analysis, and intelligent control. Compared with other recognition methods, gesture recognition is intuitive, natural, and easy to learn; combined with the currently popular deep learning, it can achieve an intelligent human-computer interaction experience.
In traditional gesture recognition, the gesture is recognized only in a simple way and the surrounding environment is not filtered. Once the environment becomes complex, gesture recognition therefore becomes unreliable and robustness is low.
Summary of the invention
Therefore, it is necessary to provide a gesture judgment method and a storage medium to solve the problem that prior-art gesture recognition methods have low robustness in complex environments.
To achieve the above object, the inventors provide a gesture judgment method, comprising the following steps:
Receiving an image captured by a camera device, and identifying all human skeletons in the image; the human skeleton includes a hand;
Judging whether the identified human skeleton information matches the human skeleton information stored in a database;
If so, tracking the hand of the matching human skeleton, and determining the corresponding gesture information according to the position changes of the hand;
According to a preset correspondence between gesture information and operation instructions, executing the operation instruction corresponding to the gesture information.
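The four claimed steps can be sketched as a minimal pipeline. This is an illustrative sketch only: the names and data here (SKELETON_DB, GESTURE_COMMANDS, the skeleton tuples, the matching tolerance) are hypothetical stand-ins, not part of the claimed method.

```python
# Minimal sketch of the claimed pipeline with stand-in data structures.
SKELETON_DB = {"user_1": (0.52, 1.10, 2.30)}      # stored skeleton signature
GESTURE_COMMANDS = {"swipe_right": "next_page",   # preset gesture -> command
                    "swipe_left": "prev_page"}

def matches_database(skeleton, db, tol=0.1):
    """Step 2: compare an identified skeleton against the stored ones."""
    return any(all(abs(a - b) <= tol for a, b in zip(skeleton, ref))
               for ref in db.values())

def judge_gesture(skeletons, hand_track):
    """Steps 2-4: filter unknown skeletons, then map the tracked
    hand motion to its preset operation instruction."""
    for sk in skeletons:                          # step 1 output
        if matches_database(sk, SKELETON_DB):     # step 2
            gesture = hand_track(sk)              # step 3: track the hand
            return GESTURE_COMMANDS.get(gesture)  # step 4
    return None  # no authorised skeleton in the frame
```

Here the hand tracker is stubbed as a callable so that the control flow of the four steps is visible in isolation.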
Further, before judging whether a human skeleton stored in the database is present in front of the device, the method further includes:
Receiving an image captured by the camera device, and identifying the face images of all people in the image;
Judging whether a face image matching the captured face image is stored in the database;
If so, determining the human skeleton information corresponding to the face image according to the correspondence between face images and human skeleton information stored in the database.
Further, judging whether a face image matching the captured face image is stored in the database specifically includes the following steps:
Calculating the similarity between the captured face image and a face image stored in the database;
Judging whether the similarity between the captured face image and the stored face image is greater than 50%.
Further, determining the corresponding gesture information according to the position changes of the hand specifically includes the following steps:
Judging whether the hand's trajectory sweeps through a starting point;
If so, marking the key points the hand's trajectory sweeps through;
Judging whether the hand's trajectory sweeps through an end point;
If so, proceeding to the next step.
Further, executing the operation instruction corresponding to the gesture information according to the preset correspondence between gesture information and operation instructions specifically includes the following steps:
Parsing the graphical information formed by the key points swept through;
Executing the operation instruction corresponding to the gesture information according to a preset correspondence between graphical information and operation instructions.
The inventors additionally provide a storage medium storing a computer program which, when executed by a processor, implements the following steps:
Receiving an image captured by a camera device, and identifying all human skeletons in the image; the human skeleton includes a hand;
Judging whether the identified human skeleton information matches the human skeleton information stored in a database;
If so, tracking the hand of the matching human skeleton, and determining the corresponding gesture information according to the position changes of the hand;
According to a preset correspondence between gesture information and operation instructions, executing the operation instruction corresponding to the gesture information.
Further, before judging whether a human skeleton stored in the database is present in front of the device, the computer program, when executed by the processor, implements the following steps:
Receiving an image captured by the camera device, and identifying the face images of all people in the image;
Judging whether a face image matching the captured face image is stored in the database;
If so, determining the human skeleton information corresponding to the face image according to the correspondence between face images and human skeleton information stored in the database.
Further, for judging whether a face image matching the captured face image is stored in the database, the computer program, when executed by the processor, implements the following steps:
Calculating the similarity between the captured face image and a face image stored in the database;
Judging whether the similarity between the captured face image and the stored face image is greater than 50%.
Further, for determining the corresponding gesture information according to the position changes of the hand, the computer program, when executed by the processor, implements the following steps:
Judging whether the hand's trajectory sweeps through a starting point;
If so, marking the key points the hand's trajectory sweeps through;
Judging whether the hand's trajectory sweeps through an end point;
If so, proceeding to the next step.
Further, for executing the operation instruction corresponding to the gesture information according to the preset correspondence between gesture information and operation instructions, the computer program, when executed by the processor, implements the following steps:
Parsing the graphical information formed by the key points swept through;
Executing the operation instruction corresponding to the gesture information according to a preset correspondence between graphical information and operation instructions.
Different from the prior art, the gesture judgment method described in the above technical solution, and the storage medium carrying the computer program that executes this method, comprise the following steps: receiving an image captured by a camera device and identifying all human skeletons in the image, each skeleton including a hand; judging whether the identified skeleton information matches the skeleton information stored in a database; if so, tracking the hand of the matching skeleton and determining the corresponding gesture information according to the position changes of that hand; and executing the operation instruction corresponding to that gesture information according to the preset correspondence between gesture information and operation instructions. By performing skeleton recognition before gesture recognition, such a method and storage medium can filter out the information of the surrounding environment other than the human skeletons, greatly reducing environmental interference with gesture recognition. By matching the collected skeleton information against the skeleton information in the database, skeletons not stored in the database can also be filtered out of the many skeletons collected, retaining only the skeletons stored in the database. The method can thus identify the skeletons stored in the database, accurately track the hand of the skeleton that has operating rights, and finally determine the corresponding gesture information from the position changes of that hand. Combining identity or permission recognition with gesture recognition improves the stability and robustness of the method and storage medium.
Detailed description of the invention
Fig. 1 is a flow chart of the gesture judgment method according to an embodiment of the invention;
Fig. 2 is a Haar feature structure diagram according to an embodiment of the invention;
Fig. 3 is a cascade process diagram of the YM strong classifier according to an embodiment of the invention;
Fig. 4 is an AdaBoost cascade process diagram according to an embodiment of the invention;
Fig. 5 is a neuron structure diagram according to an embodiment of the invention;
Fig. 6 is a neural network structure diagram with a hidden layer according to an embodiment of the invention;
Fig. 7 is a fully connected neural network diagram according to an embodiment of the invention;
Fig. 8 is a locally connected neural network diagram according to an embodiment of the invention;
Fig. 9 is a single-convolution-kernel diagram according to an embodiment of the invention;
Fig. 10 is a multi-convolution-kernel diagram according to an embodiment of the invention;
Fig. 11 is a convolutional neural network pooling process diagram according to an embodiment of the invention;
Fig. 12 is a gesture recognition diagram according to an embodiment of the invention;
Fig. 13 is a gesture matching result diagram according to an embodiment of the invention.
Specific embodiment
To describe the technical contents, structural features, objects, and effects of the technical solution in detail, an explanation is given below in conjunction with specific embodiments and the accompanying drawings.
Please refer to Fig. 1 to Fig. 13. The present invention provides a gesture judgment method and a storage medium. Referring to Fig. 1, in a specific embodiment, the method comprises the following steps:
Enter step S104: receive an image captured by the camera device, and identify all human skeletons in the image; the human skeleton includes a hand;
Then enter step S105: judge whether the identified human skeleton information matches the human skeleton information stored in the database;
If so, enter step S106: track the hand of the matching human skeleton, and determine the corresponding gesture information according to the position changes of the hand;
Finally enter step S107: according to the preset correspondence between gesture information and operation instructions, execute the operation instruction corresponding to the gesture information.
In the above method, the camera device may be an RGB-D camera somatosensory device, which contains an infrared depth camera, an RGB camera, and a microphone array, and provides numerous functions such as real-time image transmission, voice transmission, and multi-person interaction. Such a device achieves somatosensory recognition and device control through the human body itself, doing away with the former handle and mouse manipulation so that the operator can achieve "control across empty space" without touching a PC or other equipment. The RGB-D camera obtains the color data of the real scene through its central color lens and emits infrared light through a pair of infrared lens groups; because the infrared light is reflected by the objects it reaches, the reflection is received by the infrared receiving lens, the infrared signal is computed internally, and the result is transformed into depth data.
The camera device can be used to analyze the data read by the RGB-D camera, segment the scene, and output human skeleton information. The image acquisition process may therefore be as follows: first, acquire image data using an ASUS Xtion depth camera (RGB-D); next, further synthesize the acquired images using the open-source computer vision library OpenCV to obtain three-dimensional image information; then extract the operator's human skeleton information. In this way the surrounding environment can be filtered out and only the human skeleton information extracted, so non-human information can be rejected even under complex environmental conditions.
The position of the RGB-D camera is taken as the origin of a Cartesian coordinate system, and the device coordinate system is specified as follows: the plane parallel to the plane where the device sits is the x-z plane, the horizontal direction is the x-axis, the depth direction is the z-axis, and the vertical direction is the y-axis. The camera device can therefore collect the coordinates P = (x, y, z) of a point, calculated as:

x = (i − Dx) · Rx · d
y = (j − Dy) · Ry · d
z = d

where Dx, Dy, Rx, and Ry are constants, i is the pixel coordinate on the x-axis, j is the pixel coordinate on the y-axis, and d is the depth value; specifically, Dx = 321, Dy = 241, and Rx = Ry = 0.00173667, these values corresponding to a resolution of 640 × 480. Such a camera device can record the coordinates of the human skeleton, especially the coordinates of the joints of the skeleton's hand. By recording the coordinates of the identified skeleton and matching them against the coordinates of the skeletons stored in the database, it can be judged whether the identified skeleton information matches the stored skeleton information; and the corresponding gesture information can be determined from the coordinate changes of the skeleton's hand joints, filtering out both non-hand and non-human interference.
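A short sketch of this pixel-to-coordinate conversion with the stated constants. The exact sign conventions are a reconstruction under standard depth-camera back-projection assumptions, since the formula body does not survive in this text.

```python
# Pixel (i, j) with depth d -> camera-space point P = (x, y, z), using the
# constants given for 640x480 resolution. Sign conventions are assumed.
DX, DY = 321, 241          # principal point (pixels)
RX = RY = 0.00173667       # per-pixel scale factors

def pixel_to_world(i, j, d):
    x = (i - DX) * RX * d  # horizontal axis
    y = (j - DY) * RY * d  # vertical axis
    z = d                  # depth axis
    return (x, y, z)
```

For instance, a pixel exactly at the principal point maps to (0, 0, d), and a pixel 100 columns to the right of it at depth 1000 maps to x = 100 · 0.00173667 · 1000.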
In a particular embodiment, the user tracker interface provided by NiTE is UserTracker, which provides access to most of NiTE's algorithms. This object provides scene segmentation, skeletons, plane detection, and pose detection. The first purpose of the user tracker algorithm is to find all active users in a given scene. It tracks each discovered person separately and provides a way to separate their silhouettes from the background. Once the scene is segmented, the user tracker is also used to start the skeleton tracking and pose detection algorithms. Each user is given an ID when detected; as long as the user remains in the frame, the user ID stays unchanged. If the user leaves the camera's field of view, tracking of that user is lost, and the user may be assigned a different ID when detected again. By creating a UserTracker, the human skeleton information in the image can be quickly obtained with the UserTracker.readFrame function; this information includes the skeleton's unique ID number and the important human joint coordinates: head, neck, left palm, right palm, left shoulder, right shoulder, left wrist, right wrist, torso, left toe, right toe, left knee, and right knee. After obtaining a user ID, the UserTracker function startSkeletonTracking can be used to choose whether to track the skeleton corresponding to that user ID.
Before performing gesture recognition, the above method identifies the human skeletons in the acquired image, judges whether there is a user (i.e. a person whose information is contained in the database) among the skeletons in the image, and tracks the hand of that skeleton, thereby achieving the purpose of improving stability and robustness. To filter out non-users (people whose information is not stored in the database) more accurately, referring to Fig. 1, in a further embodiment, before judging whether a human skeleton stored in the database is present in front of the device, the method further includes:
First enter step S101: receive an image captured by the camera device, and identify the face images of all people in the image;
Then enter step S102: judge whether a face image matching the captured face image is stored in the database;
If so, enter step S103: determine the human skeleton information corresponding to the face image according to the correspondence between face images and human skeleton information stored in the database.
To recognize the face images of all people in the image, the method of cascading Haar features with AdaBoost to form a strong classifier can be adopted to locate the face key points in the acquired image. Haar features are largely divided into the following classes: linear features, center features, edge features, and diagonal features, which are combined into feature templates. The two different rectangles inside a feature template are white and black respectively, and the template's feature value is pre-defined as the sum of the pixel values of the white rectangle minus the sum of the pixel values of the black rectangle. The Haar feature value mainly reflects the gray-level variation of the image; the main feature structures are shown in Fig. 2.
The number of Haar features is calculated as:

count = X · Y · (W + 1 − w·(X + 1)/2) · (H + 1 − h·(Y + 1)/2), where X = ⌊W/w⌋ and Y = ⌊H/h⌋

Here W is the width of the picture, H is the height of the picture, w is the width of the rectangular feature, h is the height of the rectangular feature, and X and Y denote the maximum factors by which the rectangular feature can be scaled in the horizontal and vertical directions. A single Haar feature contains very little information, so multiple Haar features are cascaded using the AdaBoost algorithm. The AdaBoost algorithm lets the designer continually add new "weak classifiers" so that some predetermined classifier attains a relatively small error rate. In AdaBoost, every training sample carries its own weight, which indicates the probability of it being selected by some component classifier into its training set: if a sample point is correctly classified into its category, its probability of being selected into the next training set is lowered; conversely, if a sample point is misclassified, its probability of being selected next time is raised relative to its previous probability. The cascade process of the strong classifier is shown in Fig. 3.
The classifier Y_M is combined from numerous weak classifiers: the classification result is decided by a vote of the M weak classifiers, and each weak classifier has a different voice weight α. The AdaBoost algorithm proceeds in detail as follows:

(1) Initialize the weights of all training samples, where N is the number of samples:

w_i = 1/N, i = 1, 2, ..., N

(2) For m = 1, 2, 3, ..., M:

a) Train the weak classifier y_m so as to minimize the weighted error function:

ε_m = Σ_i w_i · I(y_m(x_i) ≠ t_i)

b) Next calculate the voice weight α_m of this weak classifier:

α_m = (1/2) · ln((1 − ε_m) / ε_m)

c) Update the weights:

w_i ← (w_i / Z_m) · exp(−α_m · t_i · y_m(x_i))

where Z_m is the normalization factor:

Z_m = Σ_i w_i · exp(−α_m · t_i · y_m(x_i))

(3) Obtain the final classifier:

Y_M(x) = sign( Σ_{m=1}^{M} α_m · y_m(x) )
It can be seen that each earlier classifier changes the weights as the final classifier is formed: if a training sample is misclassified by an earlier classifier, its weight is increased, and the weights of correctly classified samples are correspondingly reduced.
Finally, the multiple weighted weak classifiers are cascaded into the strong classifier given by the formula in step (3).
To improve the speed and precision of face detection and recognition, the final classifier also needs to cascade multiple strong classifiers. In a cascade classification system, every input image passes through each strong classifier in turn. The front strong classifiers are relatively simple, so they contain relatively few weak classifiers, while the subsequent strong classifiers grow progressively more complex; only pictures that pass the detection of the front strong classifiers enter the subsequent ones. The earlier classifiers can thus filter out the overwhelming majority of non-conforming images, and only the picture regions that pass the detection of all strong classifiers finally count as valid face regions, as shown in Fig. 4.
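Steps (1) to (3) of the AdaBoost procedure can be sketched with one-feature threshold stumps on made-up one-dimensional data. The stump form and the toy samples are illustrative assumptions, not taken from the patent.

```python
import math

def train_adaboost(xs, ys, rounds=3):
    """AdaBoost over threshold stumps: labels ys are +1/-1."""
    n = len(xs)
    w = [1.0 / n] * n                      # step (1): uniform weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):        # candidate threshold stumps
            for sign in (1, -1):
                pred = [sign if x >= thr else -sign for x in xs]
                err = sum(wi for wi, p, y in zip(w, pred, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign, pred)
        err, thr, sign, pred = best
        err = max(err, 1e-10)              # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # step (2b): voice weight
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, pred)]
        z = sum(w)                         # Zm, the normalization factor
        w = [wi / z for wi in w]           # step (2c): weight update
        ensemble.append((alpha, thr, sign))
    return ensemble

def predict(ensemble, x):
    """Step (3): sign of the alpha-weighted vote of the weak classifiers."""
    s = sum(a * (sg if x >= t else -sg) for a, t, sg in ensemble)
    return 1 if s >= 0 else -1
```

On linearly separable toy data a single round already finds a zero-error stump; further rounds illustrate the re-weighting without changing the vote.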
Faces can then be recognized by a convolutional neural network to judge whether a face image matching the captured face image is stored in the database, filtering out the skeletons of non-users (people whose information is not stored in the database) and further guaranteeing the correctness of gesture recognition. Numerous neurons combine to form a neural network; each neural unit of the network is shown in Fig. 5.
The corresponding formula is:

h_{W,b}(x) = f(Wᵀ·x + b)

where x is a vector, W is the weight corresponding to the vector x, b is a constant, and f is the activation function. This unit is also called a logistic regression model. When multiple such units are combined into layers, they become a neural network model.
Fig. 6 shows a neural network structure with a hidden layer. Expanding the neurons in Fig. 6 according to the neuron formula above, the corresponding formulas are:

a_1 = f(W_11·x_1 + W_12·x_2 + W_13·x_3 + b_1)
a_2 = f(W_21·x_1 + W_22·x_2 + W_23·x_3 + b_2)
a_3 = f(W_31·x_1 + W_32·x_2 + W_33·x_3 + b_3)
h_{W,b}(x) = f(W'_1·a_1 + W'_2·a_2 + W'_3·a_3 + b')

Similarly, the network can be extended to 2, 3, 4, 5, 6, ... hidden layers. The training method of a neural network is similar to that of logistic regression, but because of its multiple layers the chain rule of differentiation must also be applied to the hidden-layer nodes; this is backpropagation.
A CNN can reduce the number of parameters through local receptive fields. Human understanding of the outside world proceeds from local regions to the global view; the spatial structure of an image likewise means that pixels within a local region are closely related, while the correlation between pixels in distant regions is relatively weak. Each neuron therefore does not need to perceive the global image: it only needs to perceive its local region, and the local information is then integrated at higher layers of the network to obtain global information. The idea of partial connectivity in neural networks is also inspired by the structure of the biological visual system, as shown in the fully connected network diagram of Fig. 7 and the locally connected network diagram of Fig. 8.
In Fig. 7, if each neuron is connected only to a 10 × 10 patch of pixels, the number of weights is 1,000,000 × 100 parameters, which reduces the data to one-thousandth of the original. Those 10 × 10 pixel values, with their corresponding 10 × 10 parameters, are equivalent to performing a convolution operation. But in this case there are still too many parameters, so a second approach is used: weight sharing.
If, as above, there are only 100 shared parameters, only a single 10 × 10 convolution kernel is represented; obviously such feature extraction is insufficient. Multiple convolution kernels can be added; for example, with 32 convolution kernels, 32 different kinds of features can be learned. The case of multiple convolution kernels is shown in Fig. 9 and Fig. 10.
In Fig. 9, a color image is split into three images according to its R, G, and B channels, and the images of the different color channels correspond to different convolution kernels. Each convolution kernel convolves the image into another image.
To describe a large image, aggregate statistics are computed over the features at different locations. These summary statistics not only have much lower dimensionality (compared with using all the extracted features) but also improve the results, making underfitting and overfitting less likely. This aggregation operation is called pooling; the pooling process is shown in Fig. 11. Finally, forward propagation through a fully connected layer matches the corresponding label.
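The convolution and pooling operations discussed above can be sketched on a toy 4 × 4 image; the kernel and data are illustrative, and a real CNN would learn the kernel weights.

```python
# A single 3x3 convolution followed by 2x2 max pooling (the aggregation
# step described as "pooling"), on plain Python lists.
def conv2d(img, kernel):
    """Valid convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[r + i][c + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(ow)] for r in range(oh)]

def max_pool(img, size=2):
    """Non-overlapping max pooling: keep the largest value per block."""
    return [[max(img[r * size + i][c * size + j]
                 for i in range(size) for j in range(size))
             for c in range(len(img[0]) // size)]
            for r in range(len(img) // size)]
```

With an all-ones kernel, each convolution output is simply the sum of the 3 × 3 patch beneath it, which makes the behavior easy to verify by hand.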
In a further embodiment, judging whether a face image matching the captured face image is stored in the database specifically includes the following steps:
Calculating the similarity between the captured face image and a face image stored in the database;
Judging whether the similarity between the captured face image and the stored face image is greater than 50%.
In a further embodiment, determining the corresponding gesture information according to the position changes of the hand specifically includes the following steps:
Judging whether the hand's trajectory sweeps through a starting point;
If so, marking the key points the hand's trajectory sweeps through;
Judging whether the hand's trajectory sweeps through an end point;
If so, proceeding to the next step.
In the above method, gesture recognition uses key points: multiple key points are marked in space, and while the hand sweeps through a key point, that point is marked. After the action is completed, the marked key points are parsed and the operator's intended meaning is judged against the preset gestures. The benefit of this approach is high precision; compared with the traditional DTW (dynamic programming) algorithm it is relatively simple, requires no complex calculation, combines freely, and needs no training samples; and compared with static gesture recognition it can be combined into a wide variety of gesture actions, allowing the operator to adapt to the system quickly. The key points are shown in Fig. 12: the points in the figure are the preset key points, whose color can be preset to blue. After the hand sweeps through a key point, the key point is changed from a blue point to a red point; the gesture is judged from the colors of the key points, finally yielding the gesture action.
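The blue-to-red key-point marking can be sketched as follows. The 2-D coordinates, the strike radius, and the point IDs are illustrative assumptions; only the marking idea (a preset point is marked once when the hand passes near it) comes from the description above.

```python
# Key points start "blue" and turn "red" (are appended to `struck`) when
# the hand path passes within a small radius of them.
KEY_POINTS = {11: (0, 0), 12: (1, 0), 13: (2, 0), 21: (0, 1)}

def mark_keypoints(hand_path, radius=0.3):
    struck = []                                # key points in strike order
    for hx, hy in hand_path:
        for pid, (px, py) in KEY_POINTS.items():
            if pid not in struck and (hx - px) ** 2 + (hy - py) ** 2 <= radius ** 2:
                struck.append(pid)             # "blue" point becomes "red"
    return struck
```

The returned strike order is what the later parsing step matches against the preset gestures.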
In a further embodiment, executing the operation instruction corresponding to the gesture information according to the preset correspondence between gesture information and operation instructions specifically includes the following steps:
Parsing the graphical information formed by the key points swept through;
Executing the operation instruction corresponding to the gesture information according to a preset correspondence between graphical information and operation instructions.
After a user (i.e. a person whose information is contained in the database) is determined, the method starts tracking that human skeleton, obtains the coordinate information of the left and right hands, and transforms the coordinates from the NiTE coordinate system into the coordinate system of the RGB-D camera, preventing coordinate-system confusion when computing coordinates. As shown in Fig. 13, the point numbered 31 serves as the recognition start point and the point numbered 32 as the end point, both controlled by the right hand; the remaining six points are controlled by the left hand, and the gesture is recognized from the number and order of the points the left hand slips over. In Fig. 13, when the hand sweeps through point 11, point 21, point 22, and point 23 in sequence, the recognition result is 7. As also shown in Fig. 13, when the hand sweeps through point 11, point 12, point 13, and point 23 in sequence and the right hand then touches point 32, the recognized information is L.
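Decoding a completed stroke into a symbol can then be a simple table lookup. The two entries below are the point sequences given for "7" and "L" in the description of Fig. 13; the table structure itself is a hypothetical reconstruction.

```python
# Ordered left-hand key-point sequences (between start point 31 and end
# point 32) mapped to recognized symbols.
GESTURE_TABLE = {
    (11, 21, 22, 23): "7",  # sequence described for the result "7"
    (11, 12, 13, 23): "L",  # sequence described for the result "L"
}

def decode_gesture(struck_points):
    return GESTURE_TABLE.get(tuple(struck_points), "unknown")
```

Unrecognized sequences fall through to a sentinel value rather than raising, so a stray stroke simply produces no operation instruction.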
In a particular embodiment, the storage medium stores a computer program which, when executed by a processor, implements the following steps:
Receiving an image captured by the camera device, and identifying all human skeletons in the image; the human skeleton includes a hand;
Judging whether the identified human skeleton information matches the human skeleton information stored in the database;
If so, tracking the hand of the matching human skeleton, and determining the corresponding gesture information according to the position changes of the hand;
According to the preset correspondence between gesture information and operation instructions, executing the operation instruction corresponding to the gesture information.
In a further embodiment, before judging whether a human skeleton stored in the database is present in front of the device, the computer program, when executed by the processor, implements the following steps:
Receiving an image captured by the camera device, and identifying the face images of all people in the image;
Judging whether a face image matching the captured face image is stored in the database;
If so, determining the human skeleton information corresponding to the face image according to the correspondence between face images and human skeleton information stored in the database.
In a further embodiment, for judging whether a face image matching the captured face image is stored in the database, the computer program, when executed by the processor, implements the following steps:
calculating the similarity value between the captured face image and the face images stored in the database;
judging whether the similarity value between the captured face image and a face image stored in the database is greater than 50%.
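A minimal sketch of the 50% threshold check, assuming face images are compared as feature vectors under cosine similarity (the patent does not specify how the similarity value is computed):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_face_match(captured, database):
    """Return the id of the first stored face whose similarity to the
    captured face exceeds 50%, or None when nothing matches."""
    for face_id, stored in database.items():
        if cosine_similarity(captured, stored) > 0.5:
            return face_id
    return None

faces = {"user1": [0.9, 0.1, 0.0]}
print(find_face_match([0.8, 0.2, 0.1], faces))  # user1
print(find_face_match([0.0, 0.0, 1.0], faces))  # None
```

Any face comparison that yields a normalized similarity score could be substituted for the cosine measure without changing the thresholding logic.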
In a further embodiment, for determining the corresponding gesture information according to the position change of the hand, the computer program, when executed by the processor, implements the following steps:
judging whether the position change of the hand passes over the start point;
if so, marking the key points the position change of the hand passes over;
judging whether the position change of the hand passes over the end point;
if so, proceeding to the next step.
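The start/end gating above can be sketched as a small state machine: key points are recorded only after the hand crosses the start point, and recording stops at the end point. The point ids (31 for start, 32 for end) follow the Fig. 13 description; modeling the crossings as an ordered sequence is an assumption for illustration:

```python
START_POINT, END_POINT = 31, 32  # ids from the Fig. 13 description

def collect_key_points(crossings):
    """Return the key points crossed between the start and end points,
    or None when the stroke never reaches the end point."""
    recording, keys = False, []
    for point in crossings:
        if point == START_POINT:
            recording = True       # stroke begins: start marking key points
        elif point == END_POINT:
            if recording:
                return keys        # stroke complete: hand crossed end point
        elif recording:
            keys.append(point)     # mark a key point along the stroke
    return None                    # end point never crossed

print(collect_key_points([31, 11, 12, 13, 23, 32]))  # [11, 12, 13, 23]
print(collect_key_points([11, 12, 13]))              # None
```

Gating on explicit start and end points discards accidental hand motion, which is part of why the method tolerates a cluttered environment.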
In a further embodiment, for executing the operation instruction corresponding to the gesture information according to the preset correspondence between gesture information and operation instructions, the computer program, when executed by the processor, implements the following steps:
parsing out the graphic information formed by the key points passed over;
executing the operation instruction corresponding to the gesture information according to the preset correspondence between the formed graphic information and operation instructions.
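The final dispatch step can be sketched as two lookups: the key-point sequence is parsed into its graphic information, and the preset correspondence selects the operation instruction. The "7" and "L" patterns come from the Fig. 13 description; the command names are hypothetical examples:

```python
# Pattern -> symbol pairs from the Fig. 13 description; commands are examples.
PATTERN_TO_SYMBOL = {
    (11, 21, 22, 23): "7",   # left hand slides over points 11, 21, 22, 23
    (11, 12, 13, 23): "L",   # left hand slides over points 11, 12, 13, 23
}
SYMBOL_TO_COMMAND = {"7": "open_menu", "L": "lock_screen"}

def dispatch(key_points):
    """Parse the graphic information of the stroke and return the preset
    operation instruction, or None for an unrecognized pattern."""
    symbol = PATTERN_TO_SYMBOL.get(tuple(key_points))
    return SYMBOL_TO_COMMAND.get(symbol)

print(dispatch([11, 12, 13, 23]))  # lock_screen
```

Splitting the mapping into pattern-to-symbol and symbol-to-command tables lets new gestures or new commands be added independently.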
It should be noted that although the embodiments above have been described herein, they are not intended to limit the scope of patent protection of the invention. Therefore, any changes and modifications made to the embodiments described herein based on the innovative concept of the invention, or any equivalent structures or equivalent process transformations made using the contents of the description and drawings of the invention, whether the above technical solutions are applied directly or indirectly in other related technical fields, are included within the scope of patent protection of the invention.

Claims (10)

1. A gesture judgment method, characterized by comprising the following steps:
receiving an image captured by a camera device, and identifying all human skeletons in the image, the human skeleton including a hand;
judging whether the identified human skeleton information matches human skeleton information stored in a database;
if so, tracking the hand of the matching human skeleton, and determining corresponding gesture information according to the position change of the hand;
executing the operation instruction corresponding to the gesture information according to a preset correspondence between gesture information and operation instructions.

2. The gesture judgment method according to claim 1, characterized in that, before judging whether a human skeleton stored in the database is present in front of the device, the method further comprises:
receiving an image captured by the camera device, and identifying the face images of all people in the image;
judging whether a face image matching the captured face image is stored in the database;
if so, determining the human skeleton information corresponding to the face image according to the correspondence between face images and human skeleton information stored in the database.

3. The gesture judgment method according to claim 2, characterized in that judging whether a face image matching the captured face image is stored in the database specifically comprises the following steps:
calculating the similarity value between the captured face image and the face images stored in the database;
judging whether the similarity value between the captured face image and a face image stored in the database is greater than 50%.

4. The gesture judgment method according to claim 1, characterized in that determining the corresponding gesture information according to the position change of the hand specifically comprises the following steps:
judging whether the position change of the hand passes over the start point;
if so, marking the key points the position change of the hand passes over;
judging whether the position change of the hand passes over the end point;
if so, proceeding to the next step.

5. The gesture judgment method according to claim 4, characterized in that executing the operation instruction corresponding to the gesture information according to the preset correspondence between gesture information and operation instructions specifically comprises the following steps:
parsing out the graphic information formed by the key points passed over;
executing the operation instruction corresponding to the gesture information according to the preset correspondence between the formed graphic information and operation instructions.

6. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the following steps:
receiving an image captured by a camera device, and identifying all human skeletons in the image, the human skeleton including a hand;
judging whether the identified human skeleton information matches human skeleton information stored in a database;
if so, tracking the hand of the matching human skeleton, and determining corresponding gesture information according to the position change of the hand;
executing the operation instruction corresponding to the gesture information according to a preset correspondence between gesture information and operation instructions.

7. The storage medium according to claim 6, characterized in that, before judging whether a human skeleton stored in the database is present in front of the device, the computer program, when executed by the processor, implements the following steps:
receiving an image captured by the camera device, and identifying the face images of all people in the image;
judging whether a face image matching the captured face image is stored in the database;
if so, determining the human skeleton information corresponding to the face image according to the correspondence between face images and human skeleton information stored in the database.

8. The storage medium according to claim 7, characterized in that, for judging whether a face image matching the captured face image is stored in the database, the computer program, when executed by the processor, implements the following steps:
calculating the similarity value between the captured face image and the face images stored in the database;
judging whether the similarity value between the captured face image and a face image stored in the database is greater than 50%.

9. The storage medium according to claim 6, characterized in that, for determining the corresponding gesture information according to the position change of the hand, the computer program, when executed by the processor, implements the following steps:
judging whether the position change of the hand passes over the start point;
if so, marking the key points the position change of the hand passes over;
judging whether the position change of the hand passes over the end point;
if so, proceeding to the next step.

10. The storage medium according to claim 9, characterized in that, for executing the operation instruction corresponding to the gesture information according to the preset correspondence between gesture information and operation instructions, the computer program, when executed by the processor, implements the following steps:
parsing out the graphic information formed by the key points passed over;
executing the operation instruction corresponding to the gesture information according to the preset correspondence between the formed graphic information and operation instructions.
CN201810921965.2A | Priority 2018-08-14 | Filed 2018-08-14 | A gesture judgment method and storage medium | Pending | CN109325408A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810921965.2A | 2018-08-14 | 2018-08-14 | A gesture judgment method and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810921965.2A | 2018-08-14 | 2018-08-14 | A gesture judgment method and storage medium

Publications (1)

Publication Number | Publication Date
CN109325408A | 2019-02-12

Family

ID=65263476

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810921965.2A (Pending; published as CN109325408A) | A gesture judgment method and storage medium | 2018-08-14 | 2018-08-14

Country Status (1)

Country | Link
CN (1) | CN109325408A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20120309530A1 (en)* | 2011-05-31 | 2012-12-06 | Microsoft Corporation | Rein-controlling gestures
CN103309446A (en)* | 2013-05-30 | 2013-09-18 | 上海交通大学 | Virtual data acquisition and transmission system taking both hands of humankind as carrier
CN105843378A (en)* | 2016-03-17 | 2016-08-10 | 中国农业大学 | Service terminal based on somatosensory interaction control and control method of the service terminal
CN106527674A (en)* | 2015-09-14 | 2017-03-22 | 上海羽视澄蓝信息科技有限公司 | Human-computer interaction method, equipment and system for vehicle-mounted monocular camera
CN106933236A (en)* | 2017-02-25 | 2017-07-07 | 上海瞬动科技有限公司合肥分公司 | The method and device that a kind of skeleton control unmanned plane is let fly away and reclaimed


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112036213A (en)* | 2019-06-03 | 2020-12-04 | 安克创新科技股份有限公司 | A robot gesture positioning method, robot and device
CN111274948A (en)* | 2020-01-19 | 2020-06-12 | 杭州微洱网络科技有限公司 | Detection method for key points of human feet and shoes in e-commerce images
CN111274948B (en)* | 2020-01-19 | 2021-07-30 | 杭州微洱网络科技有限公司 | Detection method for key points of human feet and shoes in e-commerce images
CN114138104A (en)* | 2020-09-04 | 2022-03-04 | 阿里巴巴集团控股有限公司 | Electronic equipment control method and device and electronic equipment
CN114138104B (en)* | 2020-09-04 | 2025-04-01 | 阿里巴巴集团控股有限公司 | Electronic device control method, device and electronic device
CN112270302A (en)* | 2020-11-17 | 2021-01-26 | 支付宝(杭州)信息技术有限公司 | Limb control method and device and electronic equipment
CN113031464A (en)* | 2021-03-22 | 2021-06-25 | 北京市商汤科技开发有限公司 | Device control method, device, electronic device and storage medium
CN113547524A (en)* | 2021-08-16 | 2021-10-26 | 长春工业大学 | A human-computer interaction control method for an upper limb exoskeleton robot
CN113547524B (en)* | 2021-08-16 | 2022-04-22 | 长春工业大学 | A human-computer interaction control method for an upper limb exoskeleton robot
CN113842209A (en)* | 2021-08-24 | 2021-12-28 | 深圳市德力凯医疗设备股份有限公司 | Ultrasound apparatus control method, ultrasound apparatus, and computer-readable storage medium
CN113842209B (en)* | 2021-08-24 | 2024-02-09 | 深圳市德力凯医疗设备股份有限公司 | Ultrasonic device control method, ultrasonic device and computer readable storage medium
WO2025180116A1 (en)* | 2024-02-29 | 2025-09-04 | 万有引力(宁波)电子科技有限公司 | Gesture tracking method and apparatus, device, readable storage medium, and program product

Similar Documents

Publication | Title
Sincan et al. | Using motion history images with 3D convolutional networks in isolated sign language recognition
CN109325408A (en) | A gesture judgment method and storage medium
Sagayam et al. | Hand posture and gesture recognition techniques for virtual reality applications: a survey
CN106951867B (en) | Face identification method, device, system and equipment based on convolutional neural networks
CN106682598B (en) | Multi-pose face feature point detection method based on cascade regression
Patruno et al. | People re-identification using skeleton standard posture and color descriptors from RGB-D data
CN111160269A (en) | A method and device for detecting facial key points
Chandel et al. | Hand gesture recognition for sign language recognition: A review
CN111967363B (en) | Emotion prediction method based on micro-expression recognition and eye movement tracking
CN107767335A (en) | A kind of image interfusion method and system based on face recognition features' point location
WO2020078119A1 (en) | Method, device and system for simulating user wearing clothing and accessories
CN106570491A (en) | Robot intelligent interaction method and intelligent robot
CN105536205A (en) | Upper limb training system based on monocular video human body action sensing
CN104598888B (en) | A kind of recognition methods of face gender
CN107392151A (en) | Face image various dimensions emotion judgement system and method based on neutral net
CN110046544A (en) | Digital gesture identification method based on convolutional neural networks
Meshram et al. | Convolution neural network based hand gesture recognition system
CN117423134A (en) | Human body target detection and analysis multitasking cooperative network and training method thereof
Shah et al. | Survey on vision based hand gesture recognition
CN120220183A (en) | Occluded human posture key point recognition method based on sparse attention feature enhancement
Harini et al. | A novel static and dynamic hand gesture recognition using self organizing map with deep convolutional neural network
CN108108648A (en) | A kind of new gesture recognition system device and method
Zhao et al. | Applying contrast-limited adaptive histogram equalization and integral projection for facial feature enhancement and detection
Xavier et al. | Real-time Hand Gesture Recognition Using MediaPipe and Artificial Neural Networks
Mazumder et al. | Finger gesture detection and application using hue saturation value

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2019-02-12

