CN103376884B - Man-machine interaction method and its device - Google Patents

Man-machine interaction method and its device
Download PDF

Info

Publication number
CN103376884B
Authority
CN
China
Prior art keywords
target object
man
icon
human
position relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210117974.9A
Other languages
Chinese (zh)
Other versions
CN103376884A (en)
Inventor
武寿昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI KUYU COMMUNICATION TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI KUYU COMMUNICATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI KUYU COMMUNICATION TECHNOLOGY Co Ltd
Priority to CN201210117974.9A
Publication of CN103376884A
Application granted
Publication of CN103376884B
Expired - Fee Related (current legal status)
Anticipated expiration

Abstract

The invention discloses a man-machine interaction method and a device therefor. The method comprises, in order: step 11), a step of recognizing a target object; step 12), a step of detecting the target object so as to obtain position relationship information between the target object and the human-computer interaction device; step 13), a step of building a scene according to the position relationship information; step 14), a step in which the human-computer interaction device performs an operation. The human-computer interaction device provided by the invention comprises an identification module, a detecting module, an interactive information processing module and a performing module. The method can realize contactless man-machine control and avoids symptoms such as wrist stiffness, pain, numbness and spasm that arise from prolonged touching of a keyboard, mouse or screen when using terminals with contact-based human-computer interaction.

Description

Man-machine interaction method and its device
Technical field
The present invention relates to the field of telecommunications, and more particularly to the field of human-computer interaction technology.
Background technology
With the development of science and technology, devices such as mobile phones and PCs are widely used. Their human-computer interaction input methods have evolved from tapping a keyboard to clicking a mouse, and on that basis to touch screens. On 27 February 2008 the China Intellectual Property Office disclosed a touch-screen-based human-computer interaction scheme, publication No. CN101133385A, which discloses a handheld device with multiple touch-sensing devices. However, that scheme, like other prior-art schemes, whether based on keyboard, mouse or touch screen, remains confined to controlling a two-dimensional computer screen image. With prolonged touching of a keyboard, mouse or screen, the wrist must be kept at a fixed height and dorsiflexed at a certain angle and cannot stretch naturally; over time this causes stiffness, pain and numbness in the index and middle fingers and weakness in the thumb muscles, and in serious cases symptoms such as paralysis, swelling, pain or spasm of the wrist muscles or joints.
The content of the invention
It is an object of the present invention to provide a man-machine interaction method and a device therefor, with which contactless man-machine control can be achieved.
The man-machine interaction method provided by the present invention comprises, in order:
Step 11) a step of recognizing a target object;
Step 12) a step of detecting the target object so as to obtain position relationship information between the target object and the human-computer interaction device;
Step 13) a step in which the human-computer interaction device performs an operation.
The human-computer interaction device provided by the present invention comprises:
an identification module 101, for recognizing a target object and sending an instruction when the target object is recognized;
a detecting module 102, for detecting the position relationship information between the target object and the human-computer interaction device upon receiving the instruction sent by the identification module 101, and transmitting it;
an interactive information processing module 103, for processing the position relationship information and sending a control instruction according to the processing result;
a performing module 104, for performing an operation according to the control instruction sent by the interactive information processing module 103.
The man-machine interaction method and device provided by the present invention achieve contactless human-computer interaction, which not only improves the user experience but also avoids the wrist stiffness, pain, numbness, spasm and similar symptoms that arise from prolonged touching of a keyboard, mouse or screen when using terminals with contact-based human-computer interaction.
Brief description of the drawings
Fig. 1 is a flow chart of selecting a layer using the man-machine interaction method of embodiment one of the present invention;
Fig. 2 is a flow chart of selecting an icon using the man-machine interaction method of embodiment one;
Fig. 3 is a flow chart of controlling a scene using the man-machine interaction method of embodiment one;
Fig. 4 is a flow chart of setting and executing a shortcut using the man-machine interaction method of embodiment one;
Fig. 5 is a flow chart of reminding the user about eye distance using the man-machine interaction method of embodiment one;
Fig. 6 is a structural diagram of the man-machine interaction device described in embodiment three.
Embodiment
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the technical schemes in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by persons of ordinary skill in the art on the basis of the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Embodiment one
Embodiment one provides a method of man-machine interaction comprising, in order:
Step 11) a step of recognizing a target object;
Those skilled in the art will understand that the target object refers to the object that has established an interactive relationship with the human-computer interaction device. For example, if a first object, a second object and a third object exist in space and only the second object has established an interactive relationship with the device, then the second object is the target object.
Step 12) a step of detecting the target object so as to obtain position relationship information between the target object and the human-computer interaction device;
Those skilled in the art will understand that from the position relationship information it can be determined whether, relative to the human-computer interaction device, the target object is stationary or moving, and if moving, its direction, speed and acceleration. The scene refers to a digitized scene, built with two-dimensional or three-dimensional graphics generation techniques, from the position relationship information between the target object and the human-computer interaction device together with the information on whether the target object is stationary or moving and on its direction, speed and acceleration of motion. For a two-dimensional digitized scene, distance-measuring elements at two or more different positions simultaneously detect the distance between the target object and the device, from which the position coordinates of the target object relative to the device can be calculated. For a three-dimensional digitized scene, distance-measuring elements at three or more different positions simultaneously detect the distances, from which the three-dimensional position coordinates of the target object relative to the device can be calculated. This achieves the functions of recognizing the object, detecting its position relative to the device, forming a model of the target object and building the scene.
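As an illustration of the two-dimensional case, the target's position can be recovered from two distance measurements by intersecting two circles. The sketch below is a hypothetical reconstruction, not code from the patent; the function name, the sensor layout and the choice of which mirror intersection to return are all assumptions.

```python
import math

def locate_2d(p1, p2, d1, d2):
    """Estimate the target position from two distance-measuring elements.

    p1, p2: (x, y) positions of the two sensors on the device.
    d1, d2: distances they measured to the target object.
    Returns one of the two mirror-image circle intersections; real
    hardware would disambiguate with an extra sensor or a prior.
    """
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    base = math.hypot(dx, dy)              # distance between the sensors
    if base == 0 or base > d1 + d2:
        raise ValueError("measurement circles do not intersect")
    # Distance from sensor 1, along the sensor baseline, to the chord midpoint
    a = (d1 ** 2 - d2 ** 2 + base ** 2) / (2 * base)
    h = math.sqrt(max(d1 ** 2 - a ** 2, 0.0))  # half-length of the chord
    mx, my = x1 + a * dx / base, y1 + a * dy / base
    return (mx - h * dy / base, my + h * dx / base)
```

For example, sensors at (0, 0) and (6, 0) both reporting a distance of 5 yield the intersection (3.0, 4.0) on the positive-y side.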
Step 13) a step in which the human-computer interaction device performs an operation.
Those skilled in the art will understand that the target object is first recognized, then its position relative to the human-computer interaction device is detected, and from the detected position relationship information a model of the target object is established and the scene is built. By iterating over these steps continuously, it can be determined whether, relative to the device, the target object is stationary or moving, and if moving, its direction, speed and acceleration. The device then performs the corresponding operation according to whether the target object in the scene is stationary or moving and according to its direction, speed and acceleration of motion, thereby achieving the function of contactless human-computer interaction.
Further, step 12) may also include:
Step 121) a step of detecting the position relationship information of the target object's contour.
Those skilled in the art will understand that in the digitized scene the target object is represented by a geometric figure. From the position relationship information of the object's contour, a geometric figure can be built with two-dimensional or three-dimensional graphics generation techniques to stand in for the target object in the scene. This geometric figure is the model of the target object. This achieves the function of building the target-object model in the scene.
Further, step 13), in which the human-computer interaction device performs an operation, includes:
Step 131) a step of selecting a layer.
Further, step 131) includes:
Step 1311) a step of setting a first time-length threshold;
Step 1312) judging, according to the position relationship information, whether the object's current position corresponds to a non-icon region of an unselected layer; if so, performing step 1313); if not, performing no layer-selection operation;
Step 1313) recording, according to the position relationship information, the length of time the object stays at the position corresponding to the non-icon region of the unselected layer;
Step 1314) comparing that time value with the set first time-length threshold; when the time value exceeds the threshold, performing the operation of selecting the layer.
Those skilled in the art will understand that different users have different operating habits, so the optimal first time-length threshold usually needs to be set to a different value for each user; setting this threshold lets the human-computer interaction device adapt to the user's operating habits. The operation screen of the device is generally divided into icon regions and non-icon regions, and the user can execute the function corresponding to an icon by selecting the icon. The operation screen usually also has more than one layer, and the user can switch between layers by selecting one of them. By mapping coordinates between the built scene and the operation screen of the device, the position of the target object on the operation screen can be obtained. If that position lies in a non-icon region of an unselected layer, the device starts recording how long the object stays in that region; if the dwell time exceeds the set first time-length threshold, the operation of selecting the layer is performed. This achieves the step of selecting a layer.
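The dwell-time selection of steps 1311) to 1314) can be sketched as follows. This is an illustrative sketch only; the class name, the default threshold of 1.0 s and the injectable clock are assumptions, not part of the patent.

```python
import time

class DwellSelector:
    """Select a region once the target hovers over it longer than a
    user-configurable time-length threshold."""

    def __init__(self, threshold_s=1.0, clock=time.monotonic):
        self.threshold_s = threshold_s   # tunable per user's operating habits
        self.clock = clock               # injectable clock, eases testing
        self._region = None              # region currently hovered over
        self._since = None               # when the current hover started

    def update(self, region):
        """Feed the region under the target each frame; return the region
        once the dwell time exceeds the threshold, otherwise None."""
        now = self.clock()
        if region != self._region:
            self._region, self._since = region, now   # hover target changed
            return None
        if region is not None and now - self._since >= self.threshold_s:
            return region                # dwelled long enough: select it
        return None
```

Each frame, the position relationship information is mapped to a screen region and fed to `update`; the same class serves both the layer-selection threshold of step 1311) and the icon-selection threshold of step 1331).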
Further, step 13) may also include, after step 131):
Step 132) a step of moving a layer.
Those skilled in the art will understand that, through the coordinate mapping between the built scene and the operation screen of the device, the motion track of the target object can be converted into a motion track on the operation screen. When a layer is selected, the selected layer can be moved along that track on the operation screen. This achieves the function of moving a layer.
Further, step 13) may also include:
Step 133) a step of selecting an icon.
Further, step 133) includes:
Step 1331) a step of setting a second time-length threshold;
Step 1332) judging, according to the position relationship information, whether the object's current position corresponds to an icon region; if so, performing step 1333); if not, performing no icon-selection operation;
Step 1333) recording, according to the position relationship information, the length of time the object stays at the position corresponding to the icon region;
Step 1334) comparing that time value with the set second time-length threshold; when the time value exceeds the threshold, performing the operation of selecting the icon.
Those skilled in the art will understand that different users have different operating habits, so the optimal second time-length threshold usually needs to be set to a different value for each user; setting this threshold lets the human-computer interaction device adapt to the user's operating habits. The operation screen of the device is generally divided into icon regions and non-icon regions, and the user can execute the function corresponding to an icon by selecting the icon. By mapping coordinates between the built scene and the operation screen of the device, the position of the target object on the operation screen can be obtained. If that position lies in an icon region, the device starts recording how long the object stays in that region; if the dwell time exceeds the set second time-length threshold, the operation of selecting the icon is performed. This achieves the step of selecting an icon.
Further, step 13) may also include, after step 133):
Step 134) a step of moving an icon.
Those skilled in the art will understand that, through the coordinate mapping between the built scene and the operation screen of the device, the motion track of the target object can be converted into a motion track on the operation screen. When an icon is selected, the selected icon can be moved along that track on the operation screen. This achieves the function of moving an icon.
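The coordinate mapping between the built scene and the operation screen can be sketched as a linear rescaling. The function and parameter names below are illustrative assumptions, not the patent's own implementation.

```python
def scene_to_screen(pt, scene_rect, screen_size):
    """Map a point from the tracked scene region onto the operation screen.

    pt:          (x, y) of the target object in scene coordinates.
    scene_rect:  (xmin, ymin, xmax, ymax) of the tracked scene region.
    screen_size: (width, height) of the operation screen in pixels.
    """
    x, y = pt
    xmin, ymin, xmax, ymax = scene_rect
    w, h = screen_size
    sx = (x - xmin) / (xmax - xmin) * w
    sy = (y - ymin) / (ymax - ymin) * h
    # Clamp so the mapped track never leaves the screen
    return (min(max(sx, 0.0), w), min(max(sy, 0.0), h))
```

Applied to each sample of the target object's motion track, this converts a track in the scene into a track on the screen, along which a selected layer or icon can be dragged.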
Those skilled in the art will understand that the user does not need to touch the human-computer interaction device when selecting or moving layers and icons. This contactless interaction not only improves the user experience but also avoids the wrist stiffness, pain, numbness, spasm and similar symptoms that arise from prolonged touching of a keyboard, mouse or screen when using terminals with contact-based human-computer interaction.
Further, step 13) may also include:
Step 135) a step of performing control in the scene.
Further, step 135) comprises, in order:
Step 1351) a step of building a scene;
Step 1352) a step of building a target-object model;
Step 1353) a step of establishing the spatial position relationship between the target-object model and the scene.
Further, the scene is a stereoscopic (3D) scene.
Those skilled in the art will understand that, as 3D technology matures, digitized scenes are finding ever wider application; examples of such stereoscopic scenes include 3D game scenes, 3D video conference rooms and 3D design studios. By building a scene, building a target-object model and, on that basis, establishing the spatial position relationship between the model and the scene, the stationary position or motion track of the target object in the built scene can be converted into a stationary position or motion track in the stereoscopic scene, thereby achieving control of the stereoscopic scene.
Further, step 13) may also include:
Step 136) a step of executing a shortcut.
Further, step 136) includes:
Step 1361) a step of recognizing the stationary position or motion track of the target object in the built scene.
Further, step 136) includes:
Step 1362) a step of recognizing the model of the target object in the built scene and the changes of that model.
Those skilled in the art will understand that the device monitors whether the target object in the stereoscopic scene is stationary or moving, together with its direction, speed and acceleration of motion and with changes in the target object's model; when this information matches a shortcut's setting, the command bound to the shortcut is executed. This achieves the function of executing shortcuts. For example, if the motion of drawing a cross with the hand is bound to the device's shutdown command, then when the user again makes the cross-drawing motion, the device automatically recognizes the command and shuts down. As another example, if the action of the hand closing from an open palm into a fist is bound to the shutdown command, then when the device recognizes that the hand model in the scene has deformed from a palm into a fist, it automatically recognizes the command and shuts down.
Further, the method for described man-machine interaction, in addition to:
Step 15)The step of shortcut is set.
So it is achieved that the setting of shortcut.It is inactive state by recording target object in the stereo sceneOr motion state, and motion direction, the information of the model change of the information of speed and acceleration or target object, and thisInformation is mutually bound with a certain device control command, when target object is again with the state or mesh of same or like static or motionWhen the model change of mark object is recognized by device, device is just automatic to perform corresponding device control command, is so achieved thatThe setting of man-machine interaction shortcut.For example, by setting, the device control command of the motion and device shutdown of the picture fork of handleMutually bind, when user makes the action of picture fork with hand again, device meeting automatic identification order simultaneously performs shutdown.In another example,By setting, the device control command that the action that handle pinches fist by palm is shut down with device is mutually bound, in one's hands on the scene when recognizingWhen model in scape turns to fist by palm deformation, device meeting automatic identification order simultaneously performs shutdown.
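One simple way to realize such a shortcut table is to store a template track per command and match incoming tracks against the templates after normalizing away position and size. The sketch below is an assumed implementation: the names, the tolerance value and the equal-length-track requirement are illustrative simplifications (a production system would resample tracks and use a recognizer such as dynamic time warping).

```python
import math

def normalize(track):
    """Scale and translate a 2-D track into the unit box, so the same
    gesture drawn at a different size or position still matches."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / span, (y - min(ys)) / span) for x, y in track]

def track_distance(a, b):
    """Mean point-wise distance between two equal-length normalized tracks."""
    a, b = normalize(a), normalize(b)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

class ShortcutTable:
    """Bind recorded gesture tracks to device control commands."""

    def __init__(self, tolerance=0.2):
        self.tolerance = tolerance
        self.bindings = []           # list of (template_track, command)

    def bind(self, track, command):
        self.bindings.append((track, command))

    def match(self, track):
        """Return the command bound to the closest matching template, or None."""
        for tmpl, cmd in self.bindings:
            if len(tmpl) == len(track) and track_distance(tmpl, track) < self.tolerance:
                return cmd
        return None
```

A diagonal stroke recorded as `[(0, 0), (1, 1), (2, 2)]` and bound to `"shutdown"` will, after normalization, also match the same stroke drawn twice as large.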
Embodiment two
Embodiment two provides a method of man-machine interaction comprising, in order:
Step 21) a step of recognizing the eyes;
Step 22) a step of detecting the current distance L between the eyes and the device;
Step 23) a step of judging whether the distance L is less than a set threshold L0; when L is less than L0, performing step 24);
Step 24) a step of prompting the user.
Those skilled in the art will understand that the human eyes are first recognized and their position relative to the human-computer interaction device is detected. The current distance L is compared with the set threshold L0; if L is less than L0, the user is reminded to keep a proper distance between the eyes and the device. This achieves a contactless function of reminding the user to keep a distance from the device and avoids the eye fatigue, or even visual impairment, caused by overuse of the eyes.
Further, the method for described man-machine interaction, in addition to:
Step 25)The step of for setting threshold value L0 numerical value.
It will be understood by those skilled in the art that because different people height situation is different, eyesight status, ring during use deviceBorder is also different, and the optimal values of prompting distance in different situations between eyes and device are different.By settingL0 threshold values, realize adaptation of the human-computer interaction device to the operating habit of different user.
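The reminder logic of embodiment two reduces to a threshold comparison. The function below is a minimal sketch; the name, the centimetre units and the default threshold of 30 cm are assumptions for illustration.

```python
def check_eye_distance(distance_cm, threshold_cm=30.0):
    """Return a reminder message when the detected eye-to-screen distance L
    falls below the user-set threshold L0, otherwise None."""
    if distance_cm < threshold_cm:
        return (f"Too close to the screen ({distance_cm:.0f} cm); "
                f"please keep at least {threshold_cm:.0f} cm.")
    return None
```

The threshold corresponds to the user-settable L0 of step 25); callers would invoke this on every distance measurement and surface any returned message to the user.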
Embodiment three
Embodiment three provides a human-computer interaction device implementing the man-machine interaction methods described in embodiments one and two, comprising:
an identification module 101, for recognizing a target object and sending an instruction when the target object is recognized;
a detecting module 102, for detecting the position relationship information between the target object and the human-computer interaction device upon receiving the instruction sent by the identification module 101, and transmitting it;
an interactive information processing module 103, for processing the interactive information and sending a control instruction according to the processing result;
a performing module 104, for performing an operation according to the control instruction sent by the interactive information processing module 103.
Those skilled in the art will understand that this allows the user to control the human-computer interaction device to perform operations by moving an object in front of it, such as the layer-selecting, icon-selecting, layer-moving and icon-moving operations described in embodiments one and two. The user inputs control instructions without a keyboard or touch screen and makes no contact with the controlled human-computer interaction device, which reduces the mechanical wear of the controlled device.
Further, the performing module 104 includes:
a modeling unit 1041, for building a scene according to the position relationship information;
a display unit 1042, for displaying the scene.
In this way, the scene in which the target object and the human-computer interaction device are located, and the position relationship between them, can be simulated and displayed, so that the user can observe the target object's control of the device more intuitively and the device is easier to use.
Further, the detecting module 102 includes multiple distance-measuring elements, at least three in number. In this way, the three-dimensional position relationship between the target object and the human-computer interaction device can be detected from different directions, and the modeling unit 1041 can build a three-dimensional scene to be displayed by the display unit 1042.
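With three distance-measuring elements, the three-dimensional position follows from intersecting three spheres (trilateration). The sketch below assumes a convenient sensor layout (one element at the origin, one on the x-axis, one in the xy-plane) and returns the non-negative-z solution; a fourth element would resolve the mirror ambiguity. It is illustrative, not the patent's implementation.

```python
import math

def trilaterate_3d(p1, p2, p3, r1, r2, r3):
    """Estimate the 3-D target position from three ranging elements.

    Assumes p1 = (0, 0, 0), p2 on the x-axis and p3 in the xy-plane,
    with the three sensors not collinear. r1..r3 are measured distances.
    """
    d = p2[0]                      # sensor 1 -> sensor 2 spacing along x
    i, j = p3[0], p3[1]            # sensor 3 coordinates in the xy-plane
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    y = (r1 ** 2 - r3 ** 2 + i ** 2 + j ** 2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1 ** 2 - x ** 2 - y ** 2, 0.0))  # non-negative branch
    return (x, y, z)
```

An arbitrary sensor layout can always be rotated and translated into this frame first, which is why the simplified coordinates lose no generality.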
Further, the performing module 104 includes an alarm unit for performing a reminder operation according to the control instruction sent by the interactive information processing module 103.
Those skilled in the art will understand that the interactive information processing module 103 can send an instruction controlling the reminder operation when the position relationship information falls below a set threshold. For example, when the distance between the user's eyes and the human-computer interaction device is less than the set threshold, the reminder operation is performed so as to alert the user. The threshold can be set according to the user's needs.
The present device can also realize device control with shortcuts recorded in advance. The interactive information processing module 103 records whether the target object in the stereoscopic scene is stationary or moving, together with its direction, speed and acceleration of motion, or records the changes of the target object's model, and binds this information to a given device control command. When the target object is later recognized by the distance-measuring elements in the same or a similar stationary or moving state, or with the same model change, the device automatically executes the bound control command. This achieves both the setting of human-computer interaction shortcuts and the function of device control via pre-recorded shortcuts. For example, the cross-drawing motion of the hand can be bound to the device's shutdown command, so that when the user makes the motion again, the device automatically recognizes the command and shuts down; similarly, the palm-to-fist action can be bound to the shutdown command, so that the device shuts down when it recognizes the hand model in the scene deforming from a palm into a fist.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (4)

CN201210117974.9A | 2012-04-22 | 2012-04-22 | Man-machine interaction method and its device | Expired - Fee Related | CN103376884B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201210117974.9A | 2012-04-22 | 2012-04-22 | Man-machine interaction method and its device | CN103376884B (en)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201210117974.9A | 2012-04-22 | 2012-04-22 | Man-machine interaction method and its device | CN103376884B (en)

Publications (2)

Publication Number | Publication Date
CN103376884A (en) | 2013-10-30
CN103376884B (en) | 2017-08-29

Family

ID=49462108

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201210117974.9A | Man-machine interaction method and its device (Expired - Fee Related, CN103376884B (en)) | 2012-04-22 | 2012-04-22

Country Status (1)

Country | Link
CN | CN103376884B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103577025B (en) * | 2013-10-31 | 2016-05-11 | 中国电子科技集团公司第四十一研究所 | A unified processing method for instrument human-machine interaction
CN105353871B (en) * | 2015-10-29 | 2018-12-25 | 上海乐相科技有限公司 | Method and device for controlling a target object in a virtual reality scenario
CN108932062B (en) * | 2017-05-28 | 2021-09-21 | 姚震 | Electronic device and input-device control method
CN113129340B (en) * | 2021-06-15 | 2021-09-28 | 萱闱(北京)生物科技有限公司 | Motion-trajectory analysis method and device for operating equipment, medium and computing equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101893934A (en) * | 2010-06-25 | 2010-11-24 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for intelligently adjusting screen display
CN101918908A (en) * | 2007-09-28 | 2010-12-15 | Alcatel-Lucent (阿尔卡特朗讯) | Method for determining user reaction to specific content of a displayed page

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101918908A (en) * | 2007-09-28 | 2010-12-15 | Alcatel-Lucent (阿尔卡特朗讯) | Method for determining user reaction to specific content of a displayed page
CN101893934A (en) * | 2010-06-25 | 2010-11-24 | 宇龙计算机通信科技(深圳)有限公司 | Method and device for intelligently adjusting screen display

Also Published As

Publication number | Publication date
CN103376884A (en) | 2013-10-30

Similar Documents

Publication | Title
US8866781B2 (en) | Contactless gesture-based control method and apparatus
US9244544B2 (en) | User interface device with touch pad enabling original image to be displayed in reduction within touch-input screen, and input-action processing method and program
KR101919169B1 (en) | Using movement of a computing device to enhance interpretation of input events produced when interacting with the computing device
KR101861395B1 (en) | Detecting gestures involving intentional movement of a computing device
US9696882B2 (en) | Operation processing method, operation processing device, and control method
US10180714B1 (en) | Two-handed multi-stroke marking menus for multi-touch devices
KR20160003031A (en) | Simulation of tangible user interface interactions and gestures using array of haptic cells
WO2012112277A1 (en) | Breath-sensitive digital interface
CN104866097B (en) | Hand-held signal output apparatus and method for outputting a signal from a hand-held device
CN103376884B (en) | Man-machine interaction method and its device
Rekimoto | Organic interaction technologies: from stone to skin
WO2018042923A1 (en) | Information processing system, information processing method, and program
TWI471792B | Method for detecting multi-object behavior of a proximity-touch detection device
Watanabe et al. | Generic method for crafting deformable interfaces to physically augment smartphones
JP2015053034A (en) | Input device
CN103885696A (en) | Information processing method and electronic device
CN104951211A (en) | Information processing method and electronic equipment
CN204740560U (en) | Handheld signal output device
CN108008819A (en) | A page mapping method and terminal device convenient for one-handed operation by the user
CN102760031A (en) | Display method and display device
TWI483162B | Method for detecting multi-object behavior of a proximity-touch detection device
Lee et al. | Finger controller: natural user interaction using finger gestures
Lu et al. | Realizing multi-touch-like gestures in 3D space
CN120215710A (en) | Interaction method, device, equipment, medium and product
CN103648039A (en) | Page switching method and apparatus

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date:20170829

Termination date:20200422

