Embodiment one
This embodiment one provides a human-machine interaction method, which comprises, in sequence:
Step 11) A step of recognizing a target object;
It will be understood by those skilled in the art that the target object refers to the object that has established an interactive relationship with the human-computer interaction device. For example, suppose a first object, a second object and a third object exist in the space, and only the second object has established an interactive relationship with the interaction device; the second object is then the target object.
Step 12) A step of detecting the target object, so as to obtain position relationship information between the target object and the human-computer interaction device;
It will be understood by those skilled in the art that, from the position relationship information, it can be determined whether the target object is stationary or in motion relative to the human-computer interaction device, as well as its direction of motion, speed and acceleration. The scene refers to a digital scene built with two-dimensional or three-dimensional graphics generation techniques from the position relationship information of the human-computer interaction device, the position relationship information of the target object, whether the target object is stationary or in motion, and its direction of motion, speed and acceleration. For a two-dimensional digital scene, distance-measuring elements at two or more different positions simultaneously detect the distances between the target object and the device, from which the position coordinates of the target object relative to the device can be calculated. For a three-dimensional digital scene, distance-measuring elements at three or more different positions simultaneously detect the distances between the target object and the device, from which the position coordinates of the target object relative to the device can be calculated. This realizes the functions of recognizing the object and detecting its position relative to the device (such as a mobile phone), of building the model of the target object, and of constructing the scene.
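The two-dimensional case described above, computing the target's coordinates from the distances reported by two ranging elements at known positions, can be illustrated with a minimal Python sketch (not part of the claimed method; the function name and sensor layout are illustrative assumptions):

```python
import math

def locate_2d(d, r1, r2):
    """Intersect two range circles: sensor A at (0, 0), sensor B at (d, 0).
    r1, r2 are the measured distances from each sensor to the target.
    Returns the (x, y) solution with y >= 0."""
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = math.sqrt(max(r1**2 - x**2, 0.0))
    return x, y

# A target at (3, 4): 5.0 from a sensor at the origin, 5.0 from one at (6, 0).
x, y = locate_2d(6.0, 5.0, 5.0)
```

The two circles generally intersect at two points; taking the half-plane facing the device (y >= 0) resolves the ambiguity, which is why the three-dimensional case needs at least three sensors.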
Step 13) A step in which the human-computer interaction device performs an operation.
It will be understood by those skilled in the art that the target object is first recognized, the position relationship information of the target object relative to the human-computer interaction device is then detected, and the detected position relationship information is used to build the model of the target object and construct the scene. By perpetually iterating over the above steps, it can be determined whether the target object is stationary or in motion relative to the human-computer interaction device, together with its direction of motion, speed and acceleration. A corresponding operation is then performed according to whether the target object in the stereoscopic scene is stationary or in motion and according to its direction of motion, speed and acceleration, thereby realizing the function of contactless human-machine interaction.
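The iteration described above, deriving motion state, speed and acceleration from successive position samples, amounts to finite differencing. A minimal sketch (illustrative only; function name and the three-sample window are assumptions, not from the specification):

```python
def motion_state(positions, dt, eps=1e-6):
    """Finite-difference velocity and acceleration from three successive
    2-D positions sampled dt seconds apart; classify static vs. moving."""
    (x0, y0), (x1, y1), (x2, y2) = positions
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt          # backward difference
    ax, ay = (x2 - 2 * x1 + x0) / dt**2, (y2 - 2 * y1 + y0) / dt**2
    speed = (vx**2 + vy**2) ** 0.5
    state = "static" if speed < eps else "moving"
    return state, (vx, vy), (ax, ay)
```

In practice the sampled positions would be smoothed before differencing, since differentiation amplifies ranging noise.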
Further, the step 12) may also include:
Step 121) A step of detecting position relationship information of the contour of the target object.
It will be understood by those skilled in the art that, in the digital scene, the object is represented by a geometric figure. From the position relationship information of the object's contour, a geometric figure can be constructed with two-dimensional or three-dimensional graphics generation techniques to stand in for the target object in the scene. The geometric figure is the model of the target object. This realizes the function of building the target object model in the scene.
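One simple way to reduce a detected contour to a geometric stand-in, sketched here purely for illustration (the specification does not prescribe a particular figure; a centroid-plus-radius circle model is an assumption):

```python
def bounding_model(contour):
    """Replace a contour (list of (x, y) points) with a circle model:
    the centroid and the radius enclosing all contour points."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    r = max(((x - cx)**2 + (y - cy)**2) ** 0.5 for x, y in contour)
    return (cx, cy), r
```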
Further, the step 13), in which the human-computer interaction device performs an operation, includes:
Step 131) A step of selecting a layer.
Further, the step 131), the step of selecting a layer, includes:
Step 1311) A step of setting a first time length threshold;
Step 1312) Judging, according to the position relationship information, whether the current position of the object corresponds to a non-icon region of an unselected layer; if it does, performing step 1313); if it does not, not performing the related operation of selecting a layer;
Step 1313) Recording, according to the position relationship information, the length of time for which the object stays at the position corresponding to the non-icon region of the unselected layer;
Step 1314) Comparing the recorded time value with the set first time length threshold, and performing the operation of selecting the layer when the time value exceeds the first time length threshold.
It will be understood by those skilled in the art that the operating habits of different users vary, so the optimal first time length threshold usually needs to be set to a different value for each user. By making the first time length threshold configurable, the human-computer interaction device adapts to the operating habits of different users. The operation screen of a human-computer interaction device is usually divided into an icon region and a non-icon region, and the user can execute the function corresponding to an icon by selecting the icon. The operation screen usually also provides more than one layer, and the user can switch between layers by selecting one of them. By mapping coordinates between the constructed scene and the operation screen of the device, the position of the target object on the operation screen can be obtained. If the position corresponding to the target object on the operation screen falls in the non-icon region of an unselected layer, the device starts recording the length of time the object stays in that region. If the dwell time exceeds the set first time length threshold, the operation of selecting the layer is performed. This realizes the step of selecting a layer.
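The dwell-time logic of steps 1311) to 1314) can be sketched as follows (an illustrative Python sketch, not part of the claimed method; class and method names are assumptions):

```python
class DwellSelector:
    """Selects a layer once the object has hovered over a non-icon region
    of an unselected layer for longer than a per-user threshold."""

    def __init__(self, threshold_s):
        self.threshold_s = threshold_s   # first time length threshold
        self.dwell = 0.0                 # accumulated hover time, seconds

    def update(self, over_region, dt):
        """Call once per detection cycle; dt is the elapsed time since the
        previous cycle.  Returns True when the select operation fires."""
        if over_region:
            self.dwell += dt
            if self.dwell > self.threshold_s:
                self.dwell = 0.0         # reset after firing
                return True
        else:
            self.dwell = 0.0             # leaving the region resets the timer
        return False
```

The same structure serves steps 1331) to 1334) for icon selection, with the second time length threshold substituted.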
Further, the step 13), in which the human-computer interaction device performs an operation, may also include, after step 131):
Step 132) A step of moving the layer.
It will be understood by those skilled in the art that, by mapping coordinates between the constructed scene and the operation screen of the device, the motion track of the target object can be converted into a motion track on the operation screen. When a layer is selected, the selected layer can be moved along that motion track on the operation screen. This realizes the function of moving a layer.
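The coordinate mapping between the constructed scene and the operation screen can be as simple as proportional scaling; a minimal sketch (illustrative; the specification does not fix a particular mapping, and the downward-growing screen y-axis is an assumption):

```python
def scene_to_screen(p, scene_size, screen_size):
    """Map a scene coordinate p = (x, y) onto the device's operation screen
    by proportional scaling; screen y grows downward."""
    sx = p[0] / scene_size[0] * screen_size[0]
    sy = (1 - p[1] / scene_size[1]) * screen_size[1]
    return sx, sy
```

Applying this mapping to each sampled position of the target object converts its motion track in the scene into the track along which a selected layer or icon is dragged.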
Further, the step 13), in which the human-computer interaction device performs an operation, may also include: Step 133) A step of selecting an icon.
Further, the step 133), the step of selecting an icon, includes:
Step 1331) A step of setting a second time length threshold;
Step 1332) Judging, according to the position relationship information, whether the current position of the object corresponds to the icon region; if it does, performing step 1333); if it does not, not performing the related operation of selecting an icon;
Step 1333) Recording, according to the position relationship information, the length of time for which the object stays at the position corresponding to the icon region;
Step 1334) Comparing the recorded time value with the set second time length threshold, and performing the operation of selecting the icon when the time value exceeds the second time length threshold.
It will be understood by those skilled in the art that the operating habits of different users vary, so the optimal second time length threshold usually needs to be set to a different value for each user. By making the second time length threshold configurable, the human-computer interaction device adapts to the operating habits of different users. The operation screen of a human-computer interaction device is usually divided into an icon region and a non-icon region, and the user can execute the function corresponding to an icon by selecting the icon. By mapping coordinates between the constructed scene and the operation screen of the device, the position of the target object on the operation screen can be obtained. If the position corresponding to the target object on the operation screen falls in the icon region, the device starts recording the length of time the object stays in that region. If the dwell time exceeds the set second time length threshold, the operation of selecting the icon is performed. This realizes the step of selecting an icon.
Further, the step 13), in which the human-computer interaction device performs an operation, may also include, after step 133):
Step 134) A step of moving the icon.
It will be understood by those skilled in the art that, by mapping coordinates between the constructed scene and the operation screen of the device, the motion track of the target object can be converted into a motion track on the operation screen. When an icon is selected, the selected icon can be moved along that motion track on the operation screen. This realizes the function of moving an icon.
It will be understood by those skilled in the art that the user need not touch the human-computer interaction device when selecting and moving layers or selecting and moving icons. This contactless interaction not only improves the user experience but also avoids the stiffness, pain, numbness, spasms and the like of the wrist that arise from prolonged contact with keyboards, mice or screens in terminals using contact-based human-machine interaction.
Further, the step 13), in which the human-computer interaction device performs an operation, may also include:
Step 135) A step of performing control in the scene.
Further, the step 135) includes, in sequence:
Step 1351) A step of constructing the scene;
Step 1352) A step of constructing the target object model;
Step 1353) A step of establishing the spatial position relationship between the target object model and the scene.
Further, the scene is a stereoscopic scene.
It will be understood by those skilled in the art that, as 3D technology matures, the digitization of scenes is finding ever wider application. Such stereoscopic scenes include 3D game scenes, 3D video conference rooms and 3D design studios. Through the steps of constructing the scene, constructing the target object model, and on that basis establishing the spatial position relationship between the target object model and the scene, the stationary state or motion track of the target object can be converted into a stationary state or motion track within the constructed stereoscopic scene, thereby realizing control of the stereoscopic scene.
Further, the step 13), in which the human-computer interaction device performs an operation, may also include:
Step 136) A step of performing a shortcut.
Further, the step 136), the step of performing a shortcut, includes:
Step 1361) A step of recognizing the stationary state or motion track of the target object in the constructed scene.
Further, the step 136), the step of performing a shortcut, includes:
Step 1362) A step of recognizing the model of the target object in the constructed scene and its changes.
It will be understood by those skilled in the art that the device monitors, in the stereoscopic scene, whether the target object is stationary or in motion, its direction of motion, speed and acceleration, and changes in the model of the target object. When this information matches a shortcut setting, the command bound to the shortcut is executed, thereby realizing the function of performing shortcuts. For example, the motion of drawing a cross with the hand is bound to the device control command for shutting down; when the user again draws a cross with the hand, the device automatically recognizes the command and shuts down. As another example, the action of closing the palm into a fist is bound to the device shutdown command; when the device recognizes that the hand model in the scene has deformed from an open palm into a fist, it automatically recognizes the command and shuts down.
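Matching an observed motion track against recorded shortcut settings can be sketched with a crude template comparison (illustrative only; a real recognizer would resample and normalize tracks, and all names and the tolerance value here are assumptions):

```python
def matches_shortcut(track, template, tol):
    """Crude template match: mean point-to-point distance between the
    observed track and a recorded shortcut trajectory."""
    if len(track) != len(template):
        return False
    err = sum(((ax - bx)**2 + (ay - by)**2) ** 0.5
              for (ax, ay), (bx, by) in zip(track, template)) / len(track)
    return err < tol

# One recorded shortcut: an "X" (cross) stroke bound to the shutdown command.
SHORTCUTS = {"shutdown": [(0, 0), (1, 1), (2, 2), (2, 0), (1, 1), (0, 2)]}

def dispatch(track, tol=0.3):
    """Return the command whose recorded trajectory matches, else None."""
    for command, template in SHORTCUTS.items():
        if matches_shortcut(track, template, tol):
            return command
    return None
```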
Further, the method for described man-machine interaction, in addition to:
Step 15)The step of shortcut is set.
This realizes the setting of shortcuts. The device records whether the target object in the stereoscopic scene is stationary or in motion, its direction of motion, speed and acceleration, or changes in the model of the target object, and binds this information to a certain device control command. When the target object again exhibits the same or a similar stationary or motion state, or the same model change, and this is recognized by the device, the device automatically executes the corresponding device control command. This realizes the setting of human-machine interaction shortcuts. For example, after setting, the motion of drawing a cross with the hand is bound to the device shutdown command; when the user again draws a cross with the hand, the device automatically recognizes the command and shuts down. As another example, after setting, the action of closing the palm into a fist is bound to the device shutdown command; when the device recognizes that the hand model in the scene has deformed from an open palm into a fist, it automatically recognizes the command and shuts down.
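The binding of a recorded model change (such as palm-to-fist) to a device control command reduces to a lookup table; a minimal sketch (illustrative; names and the palm/fist labels are assumptions standing in for whatever model states the device recognizes):

```python
# Shortcut settings: map an observed model change to a device control command.
BINDINGS = {}

def set_shortcut(model_change, command):
    """Step 15): record a shortcut by binding a (before, after) model
    change to a device control command."""
    BINDINGS[model_change] = command

def on_model_change(before, after):
    """Return the bound command when a recorded model change recurs."""
    return BINDINGS.get((before, after))

set_shortcut(("palm", "fist"), "shutdown")
```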
Embodiment three
This embodiment three provides a human-computer interaction device implementing the human-machine interaction method described in embodiment one and embodiment two, including:
An identification module 101, configured to recognize a target object and send an instruction when the target object is recognized;
A detecting module 102, configured to, upon receiving the instruction sent by the identification module 101, detect the position relationship information between the target object and the human-computer interaction device and transmit it;
An interactive information processing module 103, configured to process the interactive information and send a control instruction according to the processing result;
A performing module 104, configured to perform an operation according to the control instruction sent by the interactive information processing module 103.
It will be understood by those skilled in the art that this enables the user to control the human-computer interaction device to perform operations by moving an object in front of the interaction device, such as the operations of selecting a layer, selecting an icon, moving a layer and moving an icon described in embodiment one and embodiment two. The user inputs control instructions without a keyboard or touch screen and makes no contact with the controlled human-computer interaction device, which reduces the mechanical wear of the controlled device.
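The flow through modules 101 to 104 can be sketched as a simple pipeline (illustrative only; the class, the callables and their return values are assumptions used to show the data flow, not the claimed implementation):

```python
class InteractionDevice:
    """Pipeline sketch: recognize -> detect -> process -> execute,
    mirroring modules 101, 102, 103 and 104."""

    def __init__(self, recognize, detect, process, execute):
        self.recognize, self.detect = recognize, detect
        self.process, self.execute = process, execute

    def step(self):
        target = self.recognize()             # identification module 101
        if target is None:
            return None                       # no target: detection not triggered
        relation = self.detect(target)        # detecting module 102
        instruction = self.process(relation)  # interactive info processing 103
        return self.execute(instruction)      # performing module 104
```

A usage example with stub callables: `InteractionDevice(lambda: "hand", lambda t: {"distance": 2.0}, lambda r: "select_layer", lambda i: "performed " + i).step()` walks the whole chain once.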
Further, the performing module 104 includes:
A modeling unit 1041, configured to construct the scene according to the position relationship information;
A display unit 1042, configured to display the scene.
In this way, the scene in which the target object and the human-computer interaction device are located, and their position relationship, can be simulated and displayed, allowing the user to observe the target object's control of the human-computer interaction device more intuitively, with better ease of use.
Further, the detecting module 102 includes multiple distance-measuring elements, the number of which is at least three. In this way, the three-dimensional position relationship between the target object and the human-computer interaction device can be detected from different directions. The modeling unit 1041 can then construct a three-dimensional scene, which is displayed by the display unit 1042.
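With three non-collinear ranging elements, the target's three-dimensional coordinates follow from intersecting three range spheres; a minimal sketch (illustrative; the sensor layout, with the third sensor in the z = 0 plane, and the function name are assumptions):

```python
import math

def locate_3d(d, i, j, r1, r2, r3):
    """Intersect three range spheres: sensors at A=(0,0,0), B=(d,0,0),
    C=(i,j,0); r1, r2, r3 are the measured distances from each sensor.
    Returns the (x, y, z) solution with z >= 0 (the side facing the device)."""
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    return x, y, z
```

Three spheres still leave a mirror ambiguity about the sensor plane, resolved here by the z >= 0 convention; a fourth element would remove it outright.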
Further, the performing module 104 includes an alarm unit, configured to perform a reminder operation according to the control instruction sent by the interactive information processing module 103.
It will be understood by those skilled in the art that the interactive information processing module 103 can send an instruction controlling the reminder operation when the position relationship information falls below a set threshold. For example, when the distance between the user's eyes and the human-computer interaction device is less than the set threshold, a reminder operation is performed so as to prompt the user. The threshold can be set according to the user's needs.
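The reminder logic is a single threshold comparison; a minimal sketch (the function name, the 0.30 m default and the string return value are illustrative assumptions):

```python
def check_proximity(distance_m, threshold_m=0.30):
    """Issue a reminder when the measured eye-to-screen distance drops
    below the user-configured threshold; otherwise do nothing."""
    return "remind" if distance_m < threshold_m else None
```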
The present device can also realize device control with shortcuts recorded in advance. The interactive information processing module 103 records whether the target object in the stereoscopic scene is stationary or in motion, its direction of motion, speed and acceleration, or changes in the model of the target object, and binds this information to a certain device control command. When the target object again exhibits the same or a similar stationary or motion state, or the same model change, and this is recognized through the distance-measuring elements, the device automatically executes the corresponding device control command. This realizes both the setting of human-machine interaction shortcuts and device control with shortcuts recorded in advance. For example, after setting, the motion of drawing a cross with the hand is bound to the device shutdown command; when the user again draws a cross with the hand, the device automatically recognizes the command and shuts down. As another example, after setting, the action of closing the palm into a fist is bound to the device shutdown command; when the device recognizes that the hand model in the scene has deformed from an open palm into a fist, it automatically recognizes the command and shuts down.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.