Summary of the invention

It is an object of the invention to provide a robot head gesture control method and system that allow a robot to act according to the gestures of a tested person, truly simulating the interactive operation between actual doctors and patients, thereby providing a practice platform for doctors.

To achieve the above object, the invention provides the following scheme:

A robot head gesture control method, the method including:

recognizing a gesture shape of the tested person's hand to obtain a gesture shape recognition result, the gesture shape recognition result including a first gesture shape, a second gesture shape and a third gesture shape;

when the gesture shape recognition result is the first gesture shape, setting a tracking flag bit provided in the robot and triggering the robot to enter a tracking-ready state, in which the robot prepares to start tracking the hand motion of the tested person;

when the gesture shape recognition result is the second gesture shape and the tracking flag bit has been set, tracking, by the robot, the hand motion of the tested person and performing a head rotation motion;

when the gesture shape recognition result is the third gesture shape, clearing the tracking flag bit, stopping the head rotation motion of the robot, and fixing the robot head at its stop position.
Optionally, recognizing the gesture shape of the tested person's hand to obtain the gesture shape recognition result specifically includes:

obtaining a color image and a depth image of the tested person's hand;

obtaining a gesture foreground image according to the color image and the depth image;

identifying the gesture shape of the tested person according to the gesture foreground image to obtain the gesture shape recognition result.
Optionally, obtaining the gesture foreground image according to the color image and the depth image specifically includes:

processing the depth image using a threshold segmentation algorithm, and extracting the image region whose gray values lie within a set range as the foreground region;

obtaining the color image of the foreground region according to the corresponding position of the foreground region in the color image;

establishing a histogram according to skin color features;

transforming the color image of the foreground region into a corresponding color space;

performing back projection in the color space according to the histogram to obtain a probability map;

denoising the probability map using a morphological erosion-dilation algorithm and the threshold segmentation algorithm to obtain the gesture foreground image.
Optionally, identifying the gesture shape of the tested person according to the gesture foreground image to obtain the gesture shape recognition result specifically includes:

calculating a feature vector of the gesture foreground image;

classifying the feature vector using a support vector machine to obtain a gesture classification result;

identifying the gesture shape of the tested person's hand according to the gesture classification result to obtain the gesture shape recognition result.
Optionally, when the gesture shape recognition result is the second gesture shape and the tracking flag bit has been set, the robot tracking the hand motion of the tested person and performing the head rotation motion specifically includes:

determining the rotation direction of the robot head according to the probability map;

calculating the horizontal rotation speed and the vertical rotation speed of the robot head according to the probability map;

controlling the robot head, according to the rotation direction, the horizontal rotation speed and the vertical rotation speed, to rotate horizontally according to the rotation direction and the horizontal rotation speed, and to rotate vertically according to the rotation direction and the vertical rotation speed.
The invention also discloses a robot head gesture control system, the system including:

a gesture shape recognition result acquisition module, configured to recognize the gesture shape of the tested person's hand and obtain the gesture shape recognition result, the gesture shape recognition result including a first gesture shape, a second gesture shape and a third gesture shape;

a first gesture shape control module, configured to, when the gesture shape recognition result is the first gesture shape, set the tracking flag bit provided in the robot and trigger the robot to enter the tracking-ready state, in which the robot prepares to start tracking the hand motion of the tested person;

a second gesture shape control module, configured to, when the gesture shape recognition result is the second gesture shape and the tracking flag bit has been set, control the robot to track the hand motion of the tested person and perform the head rotation motion;

a third gesture shape control module, configured to, when the gesture shape recognition result is the third gesture shape, clear the tracking flag bit, stop the head rotation motion of the robot, and fix the robot head at its stop position.
Optionally, the gesture shape recognition result acquisition module specifically includes:

an image acquisition submodule, configured to obtain the color image and the depth image of the tested person's hand;

a gesture foreground image acquisition submodule, configured to obtain the gesture foreground image according to the color image and the depth image;

a gesture shape recognition result acquisition submodule, configured to identify the gesture shape of the tested person according to the gesture foreground image and obtain the gesture shape recognition result.
Optionally, the gesture foreground image acquisition submodule specifically includes:

a foreground region extraction unit, configured to process the depth image using the threshold segmentation algorithm and extract the image region whose gray values lie within the set range as the foreground region;

a foreground color image acquisition unit, configured to obtain the color image of the foreground region according to the corresponding position of the foreground region in the color image;

a histogram establishment unit, configured to establish the histogram according to the skin color features;

an image transformation unit, configured to transform the color image of the foreground region into the corresponding color space;

a probability map acquisition unit, configured to perform back projection in the color space according to the histogram and obtain the probability map;

a gesture foreground image acquisition unit, configured to denoise the probability map using the morphological erosion-dilation algorithm and the threshold segmentation algorithm to obtain the gesture foreground image.
Optionally, the gesture shape recognition result acquisition submodule specifically includes:

a feature vector calculation unit, configured to calculate the feature vector of the gesture foreground image;

a gesture classification result acquisition unit, configured to classify the feature vector using the support vector machine and obtain the gesture classification result;

a gesture shape recognition result acquisition unit, configured to identify the gesture shape of the tested person's hand according to the gesture classification result and obtain the gesture shape recognition result.
Optionally, the second gesture shape control module specifically includes:

a rotation direction acquisition submodule, configured to determine the rotation direction of the robot head according to the probability map;

a rotation speed calculation submodule, configured to calculate the horizontal rotation speed and the vertical rotation speed of the robot head according to the probability map;

a rotary motion control submodule, configured to control the robot head, according to the rotation direction, the horizontal rotation speed and the vertical rotation speed, to rotate horizontally according to the rotation direction and the horizontal rotation speed, and to rotate vertically according to the rotation direction and the vertical rotation speed.
According to the specific embodiments provided by the present invention, the invention discloses the following technical effects:

The invention provides a robot head gesture control method and system. The method first recognizes the gesture shape of the tested person's hand to obtain a gesture shape recognition result, the gesture shape recognition result including a first gesture shape, a second gesture shape and a third gesture shape. When the gesture shape recognition result is the first gesture shape, a tracking flag bit provided in the robot is set, triggering the robot to enter a tracking-ready state in which it prepares to start tracking the hand motion of the tested person. When the gesture shape recognition result is the second gesture shape and the tracking flag bit has been set, the robot tracks the hand motion of the tested person and performs a head rotation motion. When the gesture shape recognition result is the third gesture shape, the tracking flag bit is cleared, the head rotation motion of the robot stops, and the robot head is fixed at its stop position. Through the different gestures of the tested person, the method and system control the robot to act accordingly, can truly simulate the interactive operation between actual doctors and patients, and provide doctors with an effective practice platform for traditional Chinese medicine rotation-type manipulation techniques.
Embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

It is an object of the invention to provide a robot head gesture control method and system.

To make the above objects, features and advantages of the present invention more comprehensible, the present invention is described in further detail below with reference to the accompanying drawings and specific implementations.
Fig. 1 is a flowchart of the robot head gesture control method of an embodiment of the present invention.

Referring to Fig. 1, a robot head gesture control method includes:
Step 101: recognizing the gesture shape of the tested person's hand to obtain the gesture shape recognition result. The gesture shape recognition result includes a first gesture shape, a second gesture shape and a third gesture shape.

In step 101, recognizing the gesture shape of the tested person's hand to obtain the gesture shape recognition result specifically includes:

Step 1011: obtaining a color image and a depth image of the tested person's hand.

The color image and the depth image are captured by an image sensor fixed on the robot head. In this embodiment, the image sensor is a Microsoft Kinect.
Step 1012: obtaining a gesture foreground image according to the color image and the depth image.

Step 1012 specifically includes:

Step (1): processing the depth image using a threshold segmentation algorithm, and extracting the image region whose gray values lie within a set range as the foreground region.

In the depth image, the brightness of a pixel represents the distance between the object at that pixel and the camera lens. Suppose a cupboard stands 5 meters from the lens and a person with a raised hand stands 3 meters from the lens, with the hand 2.5 meters from the lens. The resulting depth image then shows a dark cupboard-shaped patch, a brighter human-shaped patch, and, on the human patch, an even brighter hand-shaped patch (because the hand is closer to the lens than the rest of the body). By setting thresholds on the brightness (gray value), objects at different distances can therefore be segmented. In this embodiment, the depth image is processed with the threshold segmentation algorithm and the image region whose gray values lie within the set range is extracted as the foreground region precisely in order to segment the hand region out of the image background. The specific method is: traverse the depth image, retain the brightness of pixels whose gray values lie within the set range, and set pixels outside the set range to 0, so that the foreground region is segmented out of the whole depth image.
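The traversal described in step (1) can be sketched in a few lines of NumPy; the threshold bounds `lo` and `hi` below are illustrative placeholders, not values taken from this disclosure:

```python
import numpy as np

def extract_foreground(depth, lo=80, hi=160):
    """Keep pixels whose gray value (depth brightness) lies in [lo, hi];
    zero everything outside the set range, as step (1) describes."""
    mask = (depth >= lo) & (depth <= hi)
    return np.where(mask, depth, 0).astype(depth.dtype)

# Toy example: a "hand" at brightness 120-125 in front of a dark "cupboard" at 40.
depth = np.array([[40, 40, 120, 125, 40]], dtype=np.uint8)
fg = extract_foreground(depth)
print(fg.tolist())  # [[0, 0, 120, 125, 0]]
```

Only the bright hand-range pixels survive; the far background is zeroed out of the whole depth image.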
Step (2): obtaining the color image of the foreground region according to the corresponding position of the foreground region in the color image.

The foreground region is the image region of the tested person's hand. After the image region whose gray values lie within the set range has been extracted as the foreground region, the color image of the foreground region is obtained by taking the same region of the color image according to the position of the foreground region in the depth image. The image is then further segmented by skin color to remove objects that are not skin-colored (such as clothing close to the hand), so that the gesture foreground image can be obtained.
Step (3): establishing a histogram according to skin color features.

The skin color features are the features possessed by human skin and can be obtained from many sources. In the method of this embodiment, pictures of the tested person's hand are selected in advance, and statistics on the skin color of the hand are collected to obtain the skin color features. These are then compared with similar features in the application scene and distinguished from them by certain calculations, yielding the specific skin color features.

The histogram in this embodiment is a Cr-Cb two-dimensional histogram. A 50 × 50 two-dimensional histogram is first established, and the number of pixels falling into each bin of the two-dimensional histogram is counted, establishing the two-dimensional histogram of the skin color features. Similarly, the two-dimensional histogram of the current scene in the color image of the foreground region is counted. By contrasting the scene histogram with the skin color histogram, the more significant features of the skin color histogram are retained and the features easily confused with the background are deleted, yielding the final histogram. The histogram is normalized so that its range falls within 0-255.
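The construction of the 50 × 50 Cr-Cb histogram, ending with the normalization to the 0-255 range described above, can be sketched as follows (the skin samples and their cluster center are invented for illustration; the scene-contrast pruning step is omitted):

```python
import numpy as np

def build_crcb_histogram(cr, cb, bins=50):
    """Count (Cr, Cb) skin samples into a bins x bins 2-D histogram,
    then normalize the counts so the range falls within 0-255."""
    hist, _, _ = np.histogram2d(cr.ravel(), cb.ravel(),
                                bins=bins, range=[[0, 256], [0, 256]])
    if hist.max() > 0:
        hist = hist * (255.0 / hist.max())
    return hist

# Illustrative skin samples clustered around Cr ~ 150, Cb ~ 110.
rng = np.random.default_rng(0)
cr = rng.normal(150, 5, 1000)
cb = rng.normal(110, 5, 1000)
hist = build_crcb_histogram(cr, cb)
print(hist.shape)  # (50, 50)
print(hist.max())  # 255.0
```

The bin holding the densest skin cluster ends up at 255, so brighter histogram entries mean more typical skin colors.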
Step (4): transforming the color image of the foreground region into a corresponding color space.

The skin color features have different forms of expression in different color spaces. The color space used in this embodiment is the YCrCb color space.

Step (5): performing back projection in the color space according to the histogram to obtain a probability map.

In the histogram established above, for a given point, the abscissa is the Cr value and the ordinate is the Cb value, and the value at that point represents the number of pixels with those Cr and Cb values (which, after normalization, may be regarded as a frequency). The whole color image is then traversed again; for each pixel, the frequency corresponding to its Cr and Cb values is looked up in the histogram and used as the brightness of that pixel, yielding the probability map. The brightness of a pixel in the probability map represents the probability that the pixel belongs to the skin of the tested person's hand: the brighter the pixel, the higher the probability.
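The back-projection lookup of step (5) can be sketched as below, assuming the histogram has already been normalized to 0-255 and binned as described (the bin count and test pixel values are illustrative):

```python
import numpy as np

def back_project(cr_img, cb_img, hist, bins=50):
    """For every pixel, look up the normalized frequency of its (Cr, Cb)
    pair in the skin histogram and use it as that pixel's brightness;
    the result is the probability map."""
    bin_w = 256.0 / bins
    ci = np.clip((cr_img / bin_w).astype(int), 0, bins - 1)
    bi = np.clip((cb_img / bin_w).astype(int), 0, bins - 1)
    return hist[ci, bi].astype(np.uint8)

# Histogram with a single bright "skin" bin; the pixel (Cr=150, Cb=110)
# falls into that bin, while (20, 20) does not.
hist = np.zeros((50, 50))
hist[int(150 // 5.12), int(110 // 5.12)] = 255
prob = back_project(np.array([[150, 20]]), np.array([[110, 20]]), hist)
print(prob.tolist())  # [[255, 0]]
```

Skin-colored pixels come out bright (high probability) and everything else dark, exactly the brightness-as-probability reading described above.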
Step (6): denoising the probability map using a morphological erosion-dilation algorithm and a threshold segmentation algorithm to obtain the gesture foreground image.

The probability map is processed with the morphological erosion-dilation algorithm and the threshold segmentation algorithm to remove the influence of noise in the probability map, yielding the gesture foreground image, which is a black-and-white grayscale image.
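The denoising of step (6) can be sketched with hand-rolled binary erosion and dilation (a morphological opening); the 3 × 3 structuring element and the threshold of 128 are illustrative choices, not values from this disclosure:

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element (zero-padded)."""
    p = k // 2
    padded = np.pad(img, p, constant_values=0)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(img, p, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def denoise(prob, thresh=128):
    """Threshold the probability map, then erode and dilate to remove
    isolated noise pixels, yielding a black-and-white foreground image."""
    binary = (prob >= thresh).astype(np.uint8)
    return dilate(erode(binary))

prob = np.zeros((7, 7), dtype=np.uint8)
prob[2:6, 2:6] = 200   # a 4x4 "hand" blob
prob[0, 0] = 200       # an isolated noise pixel
out = denoise(prob)
print(int(out[0, 0]), int(out.sum()))  # 0 16
```

The isolated noise pixel disappears under erosion, while the solid hand blob survives the opening intact.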
Step 1013: identifying the gesture shape of the tested person according to the gesture foreground image to obtain the gesture shape recognition result.

Step 1013 specifically includes:

Step ①: calculating the feature vector of the gesture foreground image.

The geometric invariant moment (Hu moment) features of the gesture foreground image are calculated, the number of fingertips of the hand in the gesture foreground image is calculated, and the perimeter-to-area ratio of the gesture foreground image is calculated.

The Hu moment features, the fingertip count and the perimeter-to-area ratio are spliced into one row vector as the feature vector of the current gesture foreground image. For example, if the calculated Hu features are [0.8, 0.1, 0.01, 0, 0, 0, 0], the fingertip count is 3, and the perimeter-to-area ratio is 0.02, then the spliced feature vector is [0.8, 0.1, 0.01, 0, 0, 0, 0, 3, 0.02].
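The splicing of the three feature groups into one row vector, using the example numbers given above, can be sketched as:

```python
import numpy as np

# Illustrative values matching the example in the text: seven Hu moments,
# a fingertip count of 3, and a perimeter-to-area ratio of 0.02.
hu = [0.8, 0.1, 0.01, 0, 0, 0, 0]
fingertips = 3
perimeter_area_ratio = 0.02

# Splice the three feature groups into a single row vector.
feature = np.hstack([hu, [fingertips, perimeter_area_ratio]])
print(feature.tolist())  # [0.8, 0.1, 0.01, 0.0, 0.0, 0.0, 0.0, 3.0, 0.02]
```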
Step ②: classifying the feature vector using a support vector machine to obtain the gesture classification result.

The feature vector is classified by a trained classifier: for example, a classifier is trained using the support vector machine algorithm, and the feature vector is then classified by that classifier to obtain the gesture classification result.
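A stand-in for the trained classification stage can be sketched as a one-vs-rest linear decision; the weight matrix and bias below are invented for illustration and are not a trained SVM model:

```python
import numpy as np

# Hand-written one-vs-rest linear scores standing in for the trained SVM.
# Each row scores one gesture class; these weights are illustrative only.
W = np.array([[1.0, 0, 0, 0, 0, 0, 0, -0.5, 0],    # gesture 1
              [0.0, 0, 0, 0, 0, 0, 0,  1.0, 0],    # gesture 2
              [0.0, 0, 0, 0, 0, 0, 0, -1.0, 50]])  # gesture 3
b = np.array([0.0, -2.0, 0.0])

def classify(feature):
    """Pick the class whose linear score w.x + b is largest (1-indexed)."""
    return int(np.argmax(W @ feature + b)) + 1

feature = np.array([0.8, 0.1, 0.01, 0, 0, 0, 0, 3, 0.02])
print(classify(feature))  # 2
```

A real SVM would learn `W` and `b` from labeled gesture samples; the decision rule at inference time has this same argmax-over-scores shape.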
Step ③: identifying the gesture shape of the tested person's hand according to the gesture classification result to obtain the gesture shape recognition result.

Fig. 2 is a schematic diagram of the gesture shape recognition results in an embodiment of the present invention. The gesture shape recognition results of the present invention include three gesture shapes: a first gesture shape, a second gesture shape and a third gesture shape. The first gesture shape indicates that the robot is triggered to prepare to start tracking; the second gesture shape indicates that the robot starts tracking the gesture of the tested person and performs the head rotation motion; and the third gesture shape indicates that tracking stops. Referring to Fig. 2, in this embodiment the gesture shape shown in Fig. 2(a) is used as the first gesture shape, the gesture shape shown in Fig. 2(b) as the second gesture shape, and the gesture shape shown in Fig. 2(c) as the third gesture shape. In practical applications, different gesture shapes may be assigned as the first, second and third gesture shapes as required.
Step 102: when the gesture shape recognition result is the first gesture shape, setting the tracking flag bit provided in the robot and triggering the robot to enter the tracking-ready state, in which it prepares to start tracking the hand motion of the tested person.

When the gesture shape recognition result is the first gesture shape shown in Fig. 2(a), the tracking flag bit provided in the robot is set, triggering the robot to enter the tracking-ready state, in which it prepares to start tracking the hand motion of the tested person. The tracking flag bit is a protective setting for the robot's motion: before every rotation, the robot checks whether the tracking flag bit has been set; if it has not, the robot does not execute motion instructions, i.e. it does not follow the hand motion of the tested person.
Step 103: when the gesture shape recognition result is the second gesture shape and the tracking flag bit has been set, the robot tracking the hand motion of the tested person and performing the head rotation motion.

When the gesture shape recognition result is the second gesture shape shown in Fig. 2(b) and the tracking flag bit has been set, the robot starts tracking the hand motion of the tested person and performs the head rotation motion. The coordinates of the current gesture shape in the color image are computed from the probability map, and the motion speed of each joint of the robot is then computed from the image coordinates of the gesture shape.

The robot is a training robot for traditional Chinese medicine rotation-type manipulation training, used to simulate a cervical spondylosis patient and provide doctors with a practice platform. The head and neck of the training robot have two joints: the first joint can rotate horizontally, the second joint can rotate vertically, and a variable-stiffness structure simulates the human cervical spine.
Step 103 specifically includes:

Step 1031: determining the rotation direction of the robot head according to the probability map.

First, let the coordinates of any point in the probability map be (x, y) and the gray value at point (x, y) be p(x, y). The (p+q)-order geometric moment of the probability map is:

Mpq = Σ x^p · y^q · p(x, y)  (1)

Then:
M00 = Σ p(x, y)  (2)

M10 = Σ x · p(x, y)  (3)

M01 = Σ y · p(x, y)  (4)
The center of gravity Pc(xc, yc) of the second gesture shape in the probability map is:

xc = M10 / M00  (5)

yc = M01 / M00  (6)

where xc represents the x coordinate of the center of gravity and yc represents the y coordinate of the center of gravity.
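Equations (2)-(6) can be evaluated directly on the probability map; a minimal NumPy sketch:

```python
import numpy as np

def gesture_centroid(prob):
    """Compute M00, M10, M01 of the probability map and return the center
    of gravity (xc, yc) = (M10/M00, M01/M00), per equations (2)-(6)."""
    ys, xs = np.mgrid[0:prob.shape[0], 0:prob.shape[1]]
    m00 = prob.sum()
    m10 = (xs * prob).sum()
    m01 = (ys * prob).sum()
    return float(m10 / m00), float(m01 / m00)

prob = np.zeros((9, 9))
prob[4, 6] = 200.0   # all probability mass at column x=6, row y=4
print(gesture_centroid(prob))  # (6.0, 4.0)
```

Because the moments weight each coordinate by the skin probability, the centroid lands on the bright hand region of the probability map.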
The current image plane is defined as the state space of the image. The image plane is defined according to the resolution of the image sensor; the image and the image plane have the same resolution as the image sensor. For example, when the image sensor resolution is 1440 × 900, the resolution of the image plane and of the image is also 1440 × 900. The current state is then:

X = (xc, yc)^T  (7)

A steady-state region Ωs is defined within the state space:

Ωs = {(u, v) | β·uw ≤ u ≤ (1−β)·uw, β·vh ≤ v ≤ (1−β)·vh}  (8)

where uw represents the width of the state space, vh represents the height of the state space, and β represents a proportionality coefficient, a positive number less than one half.
The coordinates of the current second gesture shape in the state space are calculated from the probability map. The rotation direction of the robot head is determined from the relative position of these coordinates with respect to the boundary of the steady-state region.

Fig. 3 is a schematic diagram of the coordinate systems of the state space and the steady-state region of the present invention. As shown in Fig. 3, u0, u1, v0 and v1 denote the left, right, upper and lower boundaries of the steady-state region Ωs, respectively. When the horizontal coordinate of the second gesture shape in the state space lies to the left of the left boundary of the steady-state region, i.e. when the value of the horizontal coordinate is less than u0, the robot head is determined to rotate clockwise; when the horizontal coordinate is greater than u1, the robot head is determined to rotate counterclockwise. Alternatively, the robot head may be determined to rotate counterclockwise when the horizontal coordinate is less than u0 and clockwise when it is greater than u1. When the value of the vertical coordinate in the state space is less than v0, the robot head is determined to rotate in the bowing (head-down) direction; when the vertical coordinate is greater than v1, the robot head is determined to rotate in the head-up direction. Alternatively, the robot head may be determined to rotate in the head-up direction when the vertical coordinate is less than v0, and in the bowing direction when the vertical coordinate is greater than v1.
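The boundary comparisons above, under the first of the two conventions described (left of u0 → clockwise, above v0 → bow), can be sketched as:

```python
def rotation_commands(xc, yc, u0, u1, v0, v1):
    """Map the gesture centroid's position relative to the steady-state
    region [u0,u1] x [v0,v1] to head rotation commands. Command names and
    the 'hold' behavior inside the region are illustrative."""
    horiz = vert = "hold"
    if xc < u0:
        horiz = "clockwise"
    elif xc > u1:
        horiz = "counterclockwise"
    if yc < v0:
        vert = "bow"        # pitch down (image y grows downward)
    elif yc > v1:
        vert = "raise"      # pitch up
    return horiz, vert

# Centroid left of and above the steady-state region:
print(rotation_commands(100, 50, 360, 1080, 225, 675))  # ('clockwise', 'bow')
```

The mirrored convention mentioned in the text is obtained simply by swapping the two command strings per axis.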
Step 1032: calculating the horizontal rotation speed and the vertical rotation speed of the robot head according to the probability map.

The position error is calculated from the coordinate position of the current second gesture shape in the state space and the position of the boundary of the steady-state region; calculating the position error amounts to taking the difference between the current state of the second gesture shape and the nearest boundary of the steady-state region.
According to the state space and the steady-state region, the current position error is calculated. The calculation formula of the position error e is as follows:

where R represents the transformation associated with the steady-state boundary, a matrix with entries a, b, c and d whose values are respectively:

where c is a column vector representing the boundary of the steady-state region; u0 denotes the left boundary of the steady-state region Ωs, u1 denotes its right boundary, v0 denotes its upper boundary, and v1 denotes its lower boundary.

where X represents the current state, X = (xc, yc)^T.
The input ut controlling the rotation speed of the robot head is calculated from the position error e:

ut = k·e  (14)

where k represents the scaling coefficient used to scale the position error, k = diag(ηu, ηv), in which ηu and ηv are two constant proportionality coefficients.
To make the rotary motion of the robot smoother, the sign function is applied to the position error, i.e.:

u̇ = ηu·sgn(eu),  v̇ = ηv·sgn(ev)  (15)

where eu represents the component of the position error in the horizontal image direction, ev represents its component in the vertical image direction, u̇ represents the speed in the horizontal image direction, and v̇ represents the speed in the vertical image direction.
The rotation speeds ωx and ωy of the robot end are calculated using the image Jacobian; the calculation formula is as follows:

where ωx represents the rotation speed of the robot head about the horizontal image axis, i.e. the vertical rotation speed of the robot head; ωy represents the rotation speed of the robot head about the vertical image axis, i.e. the horizontal rotation speed of the robot head; (up, vp) is the principal point of the image coordinate system of the image sensor; λ represents the focal length of the image sensor converted into pixel units; u represents the column coordinate of the second gesture shape on the probability map and v represents its row coordinate; and Js represents the image Jacobian.
The robot is a training robot for traditional Chinese medicine rotation-type manipulation training, used to simulate a cervical spondylosis patient and provide doctors with a practice platform. The robot is a two-joint robot: its head and neck have two joints, the first of which can rotate horizontally and the second of which can rotate vertically, and a variable-stiffness structure simulates the human cervical spine.
Step 1033: controlling the robot head, according to the rotation direction, the vertical rotation speed ωx and the horizontal rotation speed ωy, to rotate horizontally according to the rotation direction and the horizontal rotation speed, and to rotate vertically according to the rotation direction and the vertical rotation speed. When the rotation direction indicates clockwise rotation, the first joint is controlled to rotate clockwise at the horizontal rotation speed; when the rotation direction indicates counterclockwise rotation, the first joint is controlled to rotate counterclockwise at the horizontal rotation speed, i.e. the left-right rotation of the robot head is controlled. Likewise, when the rotation direction indicates rotation in the bowing direction, the second joint is controlled to rotate in the bowing direction at the vertical rotation speed; when the rotation direction indicates rotation in the head-up direction, the second joint is controlled to rotate in the head-up direction at the vertical rotation speed, i.e. the up-down rotation of the robot head is controlled.
Step 104: when the gesture shape recognition result is the third gesture shape, clearing the tracking flag bit, stopping the head rotation motion of the robot, and fixing the robot head at its stop position.

When the robot head has rotated to the position required for the exercise, the operator changes the gesture to the third gesture shape shown in Fig. 2(c). The gesture shape recognition result is now the third gesture shape; the tracking flag bit is cleared, the head rotation motion of the robot stops, and the robot head is fixed at its stop position. That is, after tracking the second gesture shape and rotating to the required position, the robot head remains stationary, fixed at the position required for the exercise.
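The three-gesture protocol of steps 102-104, including the protective behavior of the tracking flag bit, can be sketched as a small state machine (class and state names are illustrative):

```python
class HeadGestureController:
    """Minimal sketch of the three-gesture protocol: gesture 1 sets the
    tracking flag bit, gesture 2 tracks only while the flag is set, and
    gesture 3 clears the flag and halts the head at its stop position."""

    def __init__(self):
        self.tracking_flag = False

    def on_gesture(self, gesture_id):
        if gesture_id == 1:           # arm: enter the tracking-ready state
            self.tracking_flag = True
            return "ready"
        if gesture_id == 2:           # track only if the flag has been set
            return "tracking" if self.tracking_flag else "ignored"
        if gesture_id == 3:           # disarm: stop and hold the position
            self.tracking_flag = False
            return "stopped"
        return "ignored"

ctrl = HeadGestureController()
print(ctrl.on_gesture(2))  # ignored  (flag not set -> motion is blocked)
print(ctrl.on_gesture(1))  # ready
print(ctrl.on_gesture(2))  # tracking
print(ctrl.on_gesture(3))  # stopped
```

The first call shows the protective check: a tracking gesture before the flag bit is set produces no motion, matching the behavior described in step 102.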
Fig. 4 is a schematic diagram of controlling the motion of the robot head using the robot head gesture control method of the present invention. As shown in Fig. 4, the tested person's hand 401 is placed in front of the image sensor 402 and makes the three gesture shapes shown in Fig. 2 according to the required rotation position. The robot head has the first joint 403 and the second joint 404.

When the tested person's hand makes the first gesture shape shown in Fig. 2(a), the gesture shape recognition result is the first gesture shape; the tracking flag bit provided in the robot is set, triggering the robot to enter the tracking-ready state, in which it prepares to start tracking the hand motion of the tested person.

Next, when the tested person's hand makes the second gesture shape shown in Fig. 2(b), the gesture shape recognition result is the second gesture shape, and, the tracking flag bit having been set, the robot starts tracking the hand motion of the tested person and performs the head rotation motion. According to the determined rotation direction and the calculated vertical rotation speed ωx and horizontal rotation speed ωy, the first joint 403 of the robot head is controlled to rotate horizontally according to the rotation direction and the horizontal rotation speed: when the rotation direction indicates clockwise rotation, the first joint 403 is controlled to rotate clockwise at the horizontal rotation speed; when the rotation direction indicates counterclockwise rotation, the first joint 403 is controlled to rotate counterclockwise at the horizontal rotation speed, i.e. the left-right rotation of the robot head is controlled. At the same time, the second joint 404 of the robot head is controlled to rotate vertically according to the rotation direction and the vertical rotation speed: when the rotation direction indicates rotation in the bowing direction, the second joint 404 is controlled to rotate in the bowing direction at the vertical rotation speed; when the rotation direction indicates rotation in the head-up direction, the second joint 404 is controlled to rotate in the head-up direction at the vertical rotation speed, i.e. the up-down rotation of the robot head is controlled.

When the robot head has rotated to the position required for the exercise, the tested person (operator) changes the gesture to the third gesture shape shown in Fig. 2(c). The gesture shape recognition result is now the third gesture shape; the tracking flag bit is cleared, the head rotation motion of the robot stops, and the robot head is fixed at its stop position. The robot has thus tracked the movement of the tested person's second gesture shape to the angle and position required by the exercise, can truly simulate the interactive operation between actual doctors and patients, and provides doctors with a practice platform for treatment techniques.
Fig. 5 is a schematic structural diagram of the robot head gesture control system according to an embodiment of the present invention.
As shown in Fig. 5, the robot head gesture control system includes:
a gesture shape recognition result acquisition module 501, configured to recognize the gesture shape of the tested personnel's hand and obtain a gesture shape recognition result, where the gesture shape recognition result includes a first gesture shape, a second gesture shape and a third gesture shape;
a first gesture shape control module 502, configured to, when the gesture shape recognition result is the first gesture shape, set the tracking flag bit provided in the robot and trigger the robot to enter a tracking standby state, preparing to start tracking the hand motion of the tested personnel;
a second gesture shape control module 503, configured to, when the gesture shape recognition result is the second gesture shape and the tracking flag bit has been set, control the robot to track the hand motion of the tested personnel and perform head rotation motion;
a third gesture shape control module 504, configured to, when the gesture shape recognition result is the third gesture shape, reset the tracking flag bit, stop the head rotation motion of the robot, and fix the robot head at the stop position.
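The control flow implemented by modules 502, 503 and 504 can be sketched as a small state machine. This is an illustrative sketch only; the class, the string gesture labels and the attribute names are assumptions, not identifiers from the specification.

```python
class HeadGestureController:
    """Sketch of the three-gesture control flow described above."""

    def __init__(self):
        self.tracking_flag = False    # the "tracking flag bit"
        self.tracking_active = False  # True while the head follows the hand

    def on_gesture(self, gesture):
        if gesture == "first":
            # First gesture: set the flag; robot enters tracking standby.
            self.tracking_flag = True
        elif gesture == "second" and self.tracking_flag:
            # Second gesture with flag set: head tracks the hand motion.
            self.tracking_active = True
        elif gesture == "third":
            # Third gesture: reset the flag, stop rotation, hold position.
            self.tracking_flag = False
            self.tracking_active = False


ctrl = HeadGestureController()
ctrl.on_gesture("second")   # ignored: the flag has not been set yet
ctrl.on_gesture("first")    # arm tracking (standby state)
ctrl.on_gesture("second")   # head now tracks the hand
ctrl.on_gesture("third")    # stop and hold the current position
```

Note that the second gesture only has an effect after the first gesture has set the flag, which is exactly the guard condition stated for module 503.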
The gesture shape recognition result acquisition module 501 specifically includes:
an image acquisition submodule, configured to obtain a color image and a depth image of the tested personnel's hand;
a gesture foreground picture acquisition submodule, configured to obtain a gesture foreground picture according to the color image and the depth image;
a gesture shape recognition result acquisition submodule, configured to identify the gesture shape of the tested personnel according to the gesture foreground picture and obtain the gesture shape recognition result.
The gesture foreground picture acquisition submodule specifically includes:
a foreground area extraction unit, configured to process the depth image using a threshold segmentation algorithm and extract the image region whose gray value lies within a set range as the foreground area;
a foreground color image acquisition unit, configured to obtain the color image of the foreground area according to the corresponding position of the foreground area in the color image;
a histogram establishing unit, configured to establish a histogram according to skin color features;
an image conversion unit, configured to transform the color image of the foreground area into a corresponding color space;
a probability map acquisition unit, configured to perform back projection in the color space according to the histogram to obtain a probability map;
a gesture foreground picture acquisition unit, configured to denoise the probability map using a morphological erosion-dilation algorithm and a threshold segmentation algorithm to obtain the gesture foreground picture.
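The pipeline of these units (depth thresholding, skin-color histogram, back projection, denoising) can be sketched with a numpy-only toy example. The functions, bin count and toy values below are assumptions for illustration; a real implementation would typically use library routines for histogram back projection and morphological erosion/dilation, which the simple final threshold stands in for here.

```python
import numpy as np

def depth_foreground_mask(depth, lo, hi):
    """Threshold segmentation: keep pixels whose gray value is in [lo, hi]."""
    return (depth >= lo) & (depth <= hi)

def backproject(hue, hist, bins=16):
    """Back projection: replace each hue by its skin-histogram probability."""
    idx = np.clip((hue.astype(int) * bins) // 180, 0, bins - 1)
    return hist[idx]

# Toy data: a 4x4 depth image with the hand at depth ~100, background at 200,
# and a hue channel (HSV color space) with a skin-like hue of 10 everywhere.
depth = np.array([[200, 200, 200, 200],
                  [200, 100, 100, 200],
                  [200, 100, 100, 200],
                  [200, 200, 200, 200]])
hue = np.full((4, 4), 10)

mask = depth_foreground_mask(depth, 80, 120)   # foreground area from depth
hist = np.zeros(16)
hist[(10 * 16) // 180] = 1.0                   # skin-color histogram peak
prob = backproject(hue, hist) * mask           # probability map on foreground
gesture_fg = prob > 0.5                        # final threshold denoising
```

In this toy case the gesture foreground picture keeps exactly the four central "hand" pixels, because both the depth mask and the skin probability agree there.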
The gesture shape recognition result acquisition submodule specifically includes:
a feature vector calculation unit, configured to calculate the feature vector of the gesture foreground picture;
a gesture classification result acquisition unit, configured to classify the feature vector using a support vector machine to obtain a gesture classification result;
a gesture shape recognition result acquisition unit, configured to identify the gesture shape of the tested personnel's hand according to the gesture classification result and obtain the gesture shape recognition result.
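The classification step can be illustrated with the decision function of a linear one-vs-rest support vector machine. This sketch assumes the training has already happened: the weight matrix and bias below are placeholder values, and the specification does not say what the feature vector contains (here it is simply a 2-dimensional toy vector).

```python
import numpy as np

# Placeholder per-class linear SVM parameters (one row per gesture class).
# In practice W and b would come from training on labeled gesture features.
W = np.array([[ 1.0,  0.0],   # class 0: first gesture shape
              [ 0.0,  1.0],   # class 1: second gesture shape
              [-1.0, -1.0]])  # class 2: third gesture shape
b = np.zeros(3)

def classify(feature_vec):
    """One-vs-rest decision: the class with the highest score wins."""
    scores = W @ feature_vec + b
    return int(np.argmax(scores))

GESTURES = ["first", "second", "third"]
label = GESTURES[classify(np.array([2.0, 0.1]))]  # classified as "first"
```

The gesture classification result (a class index) is then mapped to one of the three gesture shapes, which is the role of the gesture shape recognition result acquisition unit.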
The second gesture shape control module 503 specifically includes:
a rotation direction acquisition submodule, configured to determine the rotation direction of the robot head according to the probability map;
a rotation speed calculation submodule, configured to calculate the horizontal rotation speed and the vertical rotation speed of the robot head according to the probability map;
a rotary motion control submodule, configured to control, according to the rotation direction, the horizontal rotation speed and the vertical rotation speed, the robot head to rotate horizontally according to the rotation direction and the horizontal rotation speed, and to rotate vertically according to the rotation direction and the vertical rotation speed.
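The specification derives direction and speed "according to the probability map" without giving the formula. One common scheme, assumed here purely for illustration, is to track the weighted centroid of the hand probability map and rotate toward its offset from the image center, with speed proportional to the offset. The function name and the proportional gain are assumptions.

```python
import numpy as np

def rotation_from_prob_map(prob, gain=0.01):
    """Derive (direction, omega_x, omega_y) from a hand probability map.

    direction is (sign of horizontal offset, sign of vertical offset):
    a positive horizontal sign means the hand is right of center (pan right),
    a positive vertical sign means the hand is below center (bow / head-down).
    """
    h, w = prob.shape
    ys, xs = np.nonzero(prob > 0)
    weights = prob[ys, xs]
    cx = np.average(xs, weights=weights)   # probability-weighted centroid
    cy = np.average(ys, weights=weights)
    dx = cx - (w - 1) / 2.0
    dy = cy - (h - 1) / 2.0
    omega_y = gain * abs(dx)               # horizontal rotation speed
    omega_x = gain * abs(dy)               # vertical rotation speed
    return (np.sign(dx), np.sign(dy)), omega_x, omega_y


# Toy map: all probability mass at row 1, column 4 of a 5x5 image,
# i.e. the hand is to the right of and above the image center.
prob = np.zeros((5, 5))
prob[1, 4] = 1.0
direction, omega_x, omega_y = rotation_from_prob_map(prob)
```

With this convention, the farther the hand moves from the image center, the faster the head rotates toward it, and the head stops drifting when the hand is centered.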
The robot head gesture control system of the present invention can control the rotation and stopping of the robot head according to the gesture shape of the tested personnel's hand, so that the robot head moves to the angle and position required by the exercise. It can truly simulate the interactive operation between actual doctors and patients, and provides an exercising platform of treatment skills for doctors.
Specific examples are used herein to set forth the principle and embodiments of the present invention. The description of the above embodiments is only intended to help understand the method of the present invention and its core idea; meanwhile, for those of ordinary skill in the art, there will be changes in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification shall not be construed as a limitation to the present invention.