CN101344816B - Human-machine interaction method and device based on sight tracing and gesture discriminating - Google Patents

Human-machine interaction method and device based on sight tracing and gesture discriminating

Info

Publication number
CN101344816B
CN101344816B, CN2008100301944A, CN200810030194A
Authority
CN
China
Prior art keywords
image
screen
finger
people
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008100301944A
Other languages
Chinese (zh)
Other versions
CN101344816A (en)
Inventor
秦华标
肖志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2008-08-15
Publication date: 2010-08-11
Application filed by South China University of Technology SCUT
Priority to CN2008100301944A
Publication of CN101344816A
Application granted
Publication of CN101344816B
Status: Expired - Fee Related
Anticipated expiration

Abstract

The invention discloses a human-computer interaction method and device based on eye tracking and gesture recognition. The method comprises the steps of face region detection, hand region detection, eye location, fingertip location, screen location, and gesture recognition. The eye and the fingertip determine a straight line; the position where this line intersects the screen is converted into the logical coordinates of the mouse on the screen, and mouse clicks are simulated by detecting the pressing action of the finger. The device comprises an image acquisition module, an image processing module, and a wireless transmission module. A camera first collects the user's image in real time; image processing algorithms then analyze it and convert the position the user points to on the screen, together with the changes of gesture, into logical mouse coordinates and control commands for the computer; finally the results are transmitted to the computer through the wireless transmission module. The invention provides a natural, intuitive, and simple human-computer interaction method that enables remote operation of a computer.

Description

Human-machine interaction method and device based on eye tracking and gesture recognition
Technical field
The present invention relates to human-computer interaction technology in virtual reality systems, and specifically to a human-machine interaction method and device based on eye tracking and gesture recognition.
Background technology
With the rapid development of computer technology, interaction between people and computers has gradually become an important part of daily life. Traditional human-computer interaction devices such as the mouse and keyboard are limited in naturalness and friendliness, so human-computer interaction technology that follows the habits of interpersonal communication has become a current trend of development.
Interaction based on eye tracking and interaction based on gesture recognition are both natural, direct, and concise. Eye-tracking interaction obtains the position the user is gazing at from the rotation of the eyeball and uses it to control the computer. It falls into two classes, contact and non-contact. Contact methods require the user to wear special equipment to detect eyeball information, which greatly disturbs the user. Non-contact devices have limited resolution, and the user must stay close to the camera and cannot deflect the head too far. Gesture-recognition interaction identifies changes in the user's gestures to determine the operation the user intends. It falls into data-glove-based and computer-vision-based classes. Data-glove methods are cumbersome to wear and restrict movement. Computer-vision methods cannot locate a target directly from the pointing direction of a finger; they must derive a relative position from specific gesture changes, and they also require the user to be close to the camera. Thus the traditional eye-tracking and gesture-recognition techniques all have defects and cannot handle long-range human-computer interaction well, for example when a speaker needs to operate a computer remotely while presenting on a large screen.
Summary of the invention
The object of the invention is to overcome the above defects of the prior art by providing a human-machine interaction method and device based on eye tracking and gesture recognition that enables remote operation of a computer. The user need not carry any equipment: natural finger pointing and clicking actions suffice to control the computer. The invention is achieved through the following technical solutions.
The human-machine interaction method based on eye tracking and gesture recognition comprises the steps of: face region detection, hand region detection, eye location, fingertip location, screen location, and gesture recognition.
The screen location step comprises: when the user stretches out a finger and points at the screen, the system calculates the distances from the eye and the fingertip to the screen from the areas of the face and hand regions in the collected image; the image coordinates of the eye and the fingertip are converted into coordinates in a three-dimensional coordinate system whose origin is the image acquisition device; the eye and the fingertip determine a straight line, and the point where this line intersects the screen is the position the user points to, whose coordinates in the three-dimensional system are calculated from the proportional relationship between the eye and fingertip coordinates; finally, according to the size of the screen, this position is converted into the logical coordinates of the mouse on the screen.
The gesture recognition step simulates mouse clicks by judging the clicking action of the finger. When the user stretches out a right-hand finger, points at the screen, and moves it, this is treated as mouse movement. When the right-hand finger closes for the first time, the left mouse button is considered pressed; if the finger is then stretched out, pointed at the screen, and moved, this is treated as moving the mouse with the left button held; when the right-hand finger closes again, the left button is considered released. When the user stretches out a left-hand finger pointing at the screen and then closes it, the right mouse button is considered pressed; when the left-hand finger is stretched out again, the right button is considered released.
In the above method, the face region detection step comprises: judging whether a face exists in the image with an Adaboost face detection algorithm based on rectangular (Haar-like) features. The integral image is computed first and the rectangular features are extracted; then, using a trained classifier feature bank, the face region is searched in the image with a cascade of classifiers. Training the classifier feature bank comprises: computing the integral images of the sample images and extracting their rectangular features; screening effective features with the Adaboost algorithm to form weak classifiers; combining several weak classifiers into a strong classifier; and cascading several strong classifiers to form the classifier feature bank for face detection.
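As a concrete illustration of the integral image and the rectangular features it accelerates, here is a minimal numpy sketch; the function names and the particular two-rectangle feature are illustrative, not taken from the patent.

```python
import numpy as np

def integral_image(gray):
    """Summed-area table: ii[y, x] = sum of gray[:y, :x]. A leading row
    and column of zeros keeps rectangle sums free of edge cases."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = gray.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle at top-left (x, y): four lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    """One Haar-like feature: left half minus right half. Adaboost
    training pairs such responses with thresholds to form weak
    classifiers."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```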
In the above method, the hand region detection step comprises: using the skin color features of the face, searching the image for matching regions with a skin color matching method; after a preliminary hand segmentation, removing the interference of the face and neck according to the position of the detected face region, and removing background interference according to the areas of the connected components, thereby detecting the hand region.
In the above method, the eye location step comprises: on the basis of the detected face region, computing the horizontal gray projection of the face image; then, using the facial features, searching the projection curve for local minima and judging which of them corresponds to the eye region; after the eye region is detected, taking its midpoint as the eye coordinates.
In the above method, the fingertip location step comprises: first performing edge detection and grid sampling on the hand region image; then, taking each sampled pixel on the hand contour as a center, choosing 4 adjacent pixels in the counterclockwise and clockwise directions to form four pixel pairs; computing the distance variance over the pairs for each sampled pixel, the pixel whose mean distance variance is minimal and below a threshold being the fingertip region; after the fingertip region is detected, taking its midpoint as the fingertip coordinates.
A device realizing the above method comprises an image acquisition module, an image processing module, and a wireless transmission module. The camera of the image acquisition module is placed at the center of the upper edge of the screen and collects the user's image for input to the image processing module. The image processing module controls the other two modules and runs the image processing algorithms that analyze the collected images, converting the position the user points to on the screen and the changes of gesture into logical mouse coordinates and control commands for the computer. The wireless transmission module comprises a sending module and a receiving module: the sending module, connected to the image processing module, transmits the processing results to the receiving module by radio; the receiving module, connected to the computer, converts the results into mouse control signals and inputs them into the computer.
The image acquisition module comprises a camera; the image processing module comprises an embedded processor and peripheral components; the receiving and sending modules each comprise a radio frequency chip and a single-chip microcontroller.
Compared with the prior art, the invention has the following advantages and effects. It is a natural and intuitive interaction mode: the user carries no equipment and memorizes no complicated operations; natural finger pointing and clicking suffice to control the computer. By combining eye tracking and gesture recognition, it removes the defects of the traditional techniques, which require wearing special equipment and restrict the user's freedom, and provides a simple, unconstrained mode of operation usable for controlling a computer at a distance. The device is compact and easy to use: the camera is simply placed at the center of the upper edge of the screen and the wireless communication module is connected to the computer, after which the system is ready for immediate use.
Description of drawings
Fig. 1 is a schematic diagram of the hardware configuration in the embodiment of the invention.
Fig. 2 is a schematic diagram of the usage state in the embodiment of the invention.
Fig. 3 is a schematic diagram of the workflow in the embodiment of the invention.
Fig. 4a and Fig. 4b show the positional relationships of the eye and the screen in the X-Z and Y-Z planes, respectively, of the three-dimensional coordinate system whose origin is the camera.
Fig. 5 is the positioning model that determines the screen position from the eye and fingertip coordinates.
Embodiment
The embodiment of the invention is described further below with reference to the accompanying drawings.
As shown in Fig. 1, the human-machine interaction system based on eye tracking and gesture recognition consists of three parts: an image acquisition module, an image processing module, and a wireless communication module. The image acquisition module comprises a camera that collects the user's image in real time and transfers it to the image processing module. The image processing module, built from a high-performance embedded processor and peripheral components, controls the other two modules and runs the image processing algorithms that analyze the collected images, converting the position the user points to on the screen and the changes of gesture into logical mouse coordinates and control commands. The wireless communication module is divided into a sending part and a receiving part, each consisting of a single-chip microcontroller and a radio frequency chip; it transmits the processing results from the image processing module, converts them into mouse control signals, and inputs them into the computer.
As shown in Fig. 2, camera 1, placed at the center of the upper edge of large screen 2, collects the user's image in real time. When the user wants to move the mouse to a certain position, pointing right-hand finger 3 at that position on the screen suffices: eye 4 and the tip of finger 3 determine a straight line, and the point where this line intersects the screen is the position the user points to. A click action of a right-hand finger performs a left mouse button operation; a click action of a left-hand finger performs a right mouse button operation.
The workflow of the embodiment is shown in Fig. 3. The camera first collects the user's image, and the face detection algorithm analyzes it. Whether a user is currently using the system is judged by whether a face is present in the image; subsequent processing runs only after a face is detected. On the basis of the detected face, the eye location algorithm searches the face image for the eye region and obtains the eye coordinates. The hand detection algorithm then analyzes the image and detects the hand region, and on that basis the fingertip location algorithm obtains the fingertip coordinates. If the system detects the hand region and locates a fingertip, it considers that the user is pointing at the screen and computes the mouse coordinates on the screen from the coordinate conversion model and the positioning model. If the system detects the hand region but cannot locate a fingertip, the user's finger has closed and a click action has occurred; the corresponding computer command is obtained by recognizing the click. After the frame is processed, the result is transmitted by radio to the wireless receiving module of the computer, which converts it into mouse control signals and inputs them into the computer.
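The per-frame decision logic above can be sketched as glue code. All step functions here are hypothetical callables standing in for the algorithms of this embodiment, and the event tuples are illustrative.

```python
def process_frame(frame, detect_face, locate_eye, detect_hand,
                  locate_fingertip, to_screen, send, state):
    """One pass of the Fig. 3 workflow; send forwards results to the
    wireless sending module."""
    face = detect_face(frame)
    if face is None:
        return                        # nobody is using the system
    eye = locate_eye(frame, face)
    hand = detect_hand(frame, face)
    if hand is None:
        return
    tip = locate_fingertip(frame, hand)
    if tip is not None:
        # fingertip found: the user is pointing, so move the cursor to
        # where the eye-fingertip line meets the screen
        send(("move", to_screen(eye, tip)))
        state["finger_was_extended"] = True
    elif state.get("finger_was_extended"):
        # hand present but no fingertip: the finger closed, i.e. a click
        send(("click",))
        state["finger_was_extended"] = False
```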
In this embodiment, the face detection algorithm analyzes the image with the Adaboost algorithm based on rectangular features. The system first computes the integral image and extracts the rectangular features, then searches the image for the face region with the trained classifier feature bank using a classifier cascade. The feature bank used here consists of 22 stages of strong classifiers, each composed of several weak classifiers. The system first extracts all 80 × 80 subwindows of the whole image and passes each through the cascade, which eliminates non-face subwindows stage by stage. If exactly one subwindow passes all 22 stages, it is taken as the face window; if several subwindows pass, adjacent candidate windows are merged and the best face window is selected. If no qualifying subwindow is found, the subwindow size is increased by a factor of 1.1 and the cascade detection is run again.
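The patent's own 22-stage classifier bank is not published, but the same Viola-Jones scheme (integral image, Adaboost, cascade) ships with OpenCV. A minimal usage sketch with parameters mirroring this embodiment:

```python
import cv2

# OpenCV's pretrained frontal-face cascade follows the same
# integral-image / Adaboost / cascade scheme described above.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # camera at the top center of the screen
ok, frame = cap.read()
if not ok:
    raise RuntimeError("camera read failed")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,    # grow the search window by 1.1x per pass, as here
    minNeighbors=3,     # merge adjacent candidate windows
    minSize=(80, 80))   # the embodiment starts from 80 x 80 subwindows
```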
In this embodiment, the classifier feature bank is trained offline, which requires a large number of face and non-face samples. Since the face region is generally square, a square pixel region of each sample image is chosen first, and its integral image and rectangular features are computed. Features are then screened with the AdaBoost algorithm: an effective feature and a threshold form a weak classifier, and several weak classifiers are combined into a strong classifier. Following the cascade method, the two most distinctive facial features form the first strong classifier, strong classifiers built from more features serve as further detection stages, and the cascaded strong classifiers together constitute the face detection classifier.
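A brute-force sketch of one round of the feature screening step, under the usual Adaboost formulation; the patent does not spell out its exact screening procedure, so this is an assumption:

```python
import numpy as np

def best_weak_classifier(feature_values, labels, weights):
    """One round of feature screening (assumed standard Adaboost).

    feature_values: (n_features, n_samples) responses of the candidate
    rectangular features on the training set; labels in {+1, -1} mark
    face / non-face samples; weights is the current sample distribution.
    Returns (feature index, threshold, polarity, weighted error)."""
    best = (None, None, None, np.inf)
    for j, values in enumerate(feature_values):
        for thresh in np.unique(values):
            for polarity in (+1, -1):
                pred = np.where(polarity * (values - thresh) >= 0, 1, -1)
                err = weights[pred != labels].sum()
                if err < best[3]:
                    best = (j, thresh, polarity, err)
    return best
```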
In this embodiment, the eye location algorithm locates the eyes from facial features. On the basis of the detected face, the system first smooths the face image and then computes its horizontal gray projection. Owing to the facial features, the projection curve has several local minima, corresponding to the eyebrows, eyes, nostrils, and mouth. The system therefore first takes the second local minimum of the curve as the assumed eye position, extends the region by 1/20 of the image height above and below this point as the candidate eye region, and confirms it with the following checks: (1) from the position of the eyes, this local minimum must lie in the upper half of the face; (2) since the eyebrow-to-eye distance is smaller than the eye-to-nostril distance, the distance from the second local minimum to the previous local minimum must be smaller than its distance to the next one; (3) the projection of the candidate eye region must show two troughs with one crest between them. Only a region satisfying all three conditions is confirmed as the eye region; otherwise the candidate region is adjusted. Once the eye region is confirmed, its midpoint is taken as the eye coordinates.
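A minimal sketch of the horizontal gray projection and the choice of the second local minimum; the three confirmation checks above are not reproduced:

```python
import numpy as np

def horizontal_gray_projection(face_gray):
    """Mean intensity of each row of the (already smoothed) face image;
    eyebrows, eyes, nostrils and mouth appear as local minima."""
    return face_gray.mean(axis=1)

def local_minima(curve):
    return [i for i in range(1, len(curve) - 1)
            if curve[i] < curve[i - 1] and curve[i] < curve[i + 1]]

def candidate_eye_row(face_gray):
    """Per the embodiment, the second local minimum is the tentative eye
    row; the geometric checks in the text then confirm or reject it."""
    minima = local_minima(horizontal_gray_projection(face_gray))
    return minima[1] if len(minima) >= 2 else None
```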
In this embodiment, the hand detection algorithm performs a preliminary hand segmentation by skin color matching. On the basis of the detected face, a 20 × 20 rectangle below the eyes is taken as the skin color sample of the face, and the mean Y, Cb, and Cr values of its 400 pixels are computed. Each pixel of the image is then matched against upper and lower thresholds set at these means plus and minus 10; pixels satisfying this skin color model are judged to be skin pixels, and after matching, a preliminary hand segmentation is obtained. The face interference is removed with the detected face region. Using the geometric relationship between face and neck, a rectangle directly under the face, twice the face's width and of the same height, is taken as the neck region, so that the neck interference can be removed even when the face is deflected. Since the area of the hand is much larger than that of background interference regions, connected components whose areas fall below a threshold are treated as background interference and removed, which yields the hand region.
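A compact OpenCV sketch of the skin matching and connected-component filtering; min_area is an illustrative threshold, and the face and neck boxes described above would additionally be zeroed out of the mask:

```python
import cv2
import numpy as np

def hand_mask(frame_bgr, skin_patch_bgr, min_area=2000):
    """Threshold each channel at the sample mean +/- 10, then drop small
    connected components as background. Note OpenCV orders channels
    Y/Cr/Cb rather than the text's Y/Cb/Cr; the matching is unaffected
    because the sample mean is taken in the same space."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    sample = cv2.cvtColor(skin_patch_bgr, cv2.COLOR_BGR2YCrCb)
    mean = sample.reshape(-1, 3).mean(axis=0)    # 400 pixels for a 20x20 patch
    lo = np.clip(mean - 10, 0, 255).astype(np.uint8)
    hi = np.clip(mean + 10, 0, 255).astype(np.uint8)
    mask = cv2.inRange(ycrcb, lo, hi)
    n, cc, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            mask[cc == i] = 0                    # remove background noise
    return mask
```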
In this embodiment, the fingertip location algorithm locates the fingertip region from the features of the hand contour. The system first applies a gradient operator to the hand region image for edge detection, obtaining the hand contour, then processes the contour by grid sampling, representing each 10 × 10 region of the original image by a single pixel. Taking each sampled pixel on the contour as a center, 4 adjacent pixels are chosen in the counterclockwise and clockwise directions, forming four pixel pairs symmetric about the sampled pixel. The distance variance of the pairs is computed for each pixel, and the sampled pixel whose mean distance variance is minimal and below a threshold is the fingertip region. Its midpoint is taken as the fingertip coordinates.
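A sketch of the fingertip search under one reading of this pair-distance criterion (pairing the k-th counterclockwise and clockwise neighbours and scoring the spread of their distances); the variance threshold is illustrative:

```python
import numpy as np

def fingertip_index(contour, var_thresh=4.0):
    """Search a grid-sampled, ordered hand contour for the fingertip.

    At a fingertip the contour folds back on itself, so the distances
    between opposite neighbours stay small and nearly equal; along a
    straight stretch they grow with k instead."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    best_i, best_var = None, np.inf
    for i in range(n):
        dists = [np.linalg.norm(pts[(i - k) % n] - pts[(i + k) % n])
                 for k in range(1, 5)]
        var = np.var(dists)
        if var < var_thresh and var < best_var:
            best_i, best_var = i, var
    return best_i   # None if no sampled pixel passes the threshold
```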
In this embodiment, the position the user points to on the screen is determined from the image coordinates of the eye and the fingertip as follows:
(1) A three-dimensional coordinate system is set up with the camera at the origin.
(2) After the face region and the hand region are detected in the image, the distances d_E and d_F from the eye and the fingertip to the screen are calculated from the areas of these regions.
(3) Eye location and fingertip location give the coordinates of the eye and the fingertip in the image, (x_E_image, y_E_image) and (x_F_image, y_F_image) respectively. To determine the position the user points to through the eye-fingertip line, these image coordinates must be converted into coordinates in the camera-centered three-dimensional system. Fig. 4a and Fig. 4b show the positional relationships of the eye and the screen in the X-Z and Y-Z planes of this system. Since the image coordinates of the eye are proportional to its coordinates in the three-dimensional system, the eye's coordinates are:
x_E = g(d_E) × (L/2 − x_E_image)
y_E = d_E × tan(θ) − (y_E_image − W/2) × sin(θ)
where g(d_E) is a coefficient that depends on the distance, L is the width of the captured image, θ is the tilt angle of the camera, and W is the height of the captured image. The fingertip coordinates x_F and y_F are obtained by the same calculation.
(4) Once the coordinates of the eye and the fingertip in this three-dimensional system are known, the system computes the position the user points to on the screen from the straight line determined by these two points. Fig. 5 shows the positioning model; from its proportional relationships, the coordinates of the screen position the fingertip points to are:
x = x_E + (x_F − x_E) × d_E / (d_E − d_F)
y = y_E + (y_F − y_E) × d_E / (d_E − d_F)
Once the position the user points to on the screen is determined, the logical coordinates of the mouse on the screen are computed from the screen size.
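The two transformations above condense into a few lines; this sketch follows the patent's formulas directly, with g supplied as a precomputed value of g(d_E):

```python
import math

def to_camera_coords(x_img, y_img, d, g, L, W, theta):
    """Image coordinates -> camera-centred coordinates, per the formulas
    above; g is the value of the distance coefficient g(d)."""
    x = g * (L / 2 - x_img)
    y = d * math.tan(theta) - (y_img - W / 2) * math.sin(theta)
    return x, y

def pointed_screen_position(eye, tip, d_e, d_f):
    """Extend the eye-fingertip line to the screen plane: the similar-
    triangle ratio d_e / (d_e - d_f) carries the eye-to-fingertip offset
    out to the screen."""
    t = d_e / (d_e - d_f)
    return (eye[0] + (tip[0] - eye[0]) * t,
            eye[1] + (tip[1] - eye[1]) * t)
```

Mapping the resulting position to logical mouse coordinates is then a linear rescale by the known physical and pixel dimensions of the screen.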
In this embodiment, gesture recognition simulates mouse clicks by detecting the clicking action of the user's finger. When the user stretches out a right-hand finger, points at the screen, and moves it, this is treated as mouse movement. When the right-hand finger closes for the first time, the left mouse button is considered pressed; if the finger is then stretched out and moved, this is treated as moving the mouse with the left button held; when the right-hand finger closes again, the left button is considered released. When the user stretches out a left-hand finger pointing at the screen and then closes it, the right mouse button is considered pressed; when the left-hand finger is stretched out again, the right button is considered released.
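A small state machine capturing these rules; the event tuples are illustrative, not the device's wire format:

```python
class MouseGestures:
    """Per-frame mouse simulation; `send` forwards event tuples to the
    wireless sending module."""

    def __init__(self, send):
        self.send = send
        self.prev_right = False      # right-hand finger extended last frame?
        self.prev_left = False       # left-hand finger extended last frame?
        self.left_btn_down = False
        self.right_btn_down = False

    def update(self, right_extended, left_extended, cursor=None):
        if right_extended and cursor is not None:
            self.send(("move", cursor))   # pointing moves (or drags) the cursor
        if self.prev_right and not right_extended:
            # right finger closed: the first close presses the left
            # button, the next close releases it
            self.left_btn_down = not self.left_btn_down
            self.send(("left_down",) if self.left_btn_down else ("left_up",))
        if self.prev_left and not left_extended:
            self.send(("right_down",))    # left finger closed
            self.right_btn_down = True
        elif left_extended and self.right_btn_down and not self.prev_left:
            self.send(("right_up",))      # left finger re-extended
            self.right_btn_down = False
        self.prev_right, self.prev_left = right_extended, left_extended
```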

Claims (8)

(5) screen location: when the user stretches out a finger and points at the screen, the system calculates the distances from the eye and the fingertip to the screen from the areas of the face and hand regions in the image collected by the image acquisition device; the image coordinates of the eye and the fingertip are converted into coordinates in a three-dimensional coordinate system whose origin is the image acquisition device; the eye and the fingertip determine a straight line, and the point where this line intersects the screen is the position the user points to, whose coordinates in said three-dimensional system are calculated from the proportional relationship between the eye and fingertip coordinates; according to the size of the screen, the coordinates of said position are converted into the logical coordinates of the mouse on the screen;
(6) gesture recognition: mouse clicks are simulated by judging the clicking action of the finger; when the user stretches out a right-hand finger, points at the screen, and moves it, this is treated as mouse movement; when the right-hand finger closes for the first time, the left mouse button is considered pressed; if the finger is then stretched out, pointed at the screen, and moved, this is treated as moving the mouse with the left button held; when the right-hand finger closes again, the left button is considered released; when the user stretches out a left-hand finger pointing at the screen and then closes it, the right mouse button is considered pressed; when the left-hand finger is stretched out again, the right button is considered released.
7. A device realizing the method of any of claims 1 to 5, characterized by comprising an image acquisition device, an image processing module, and a wireless transmission module; the camera of the image acquisition device is placed at the center of the upper edge of the screen and collects the user's image for input to the image processing module; the image processing module controls the other two modules and runs the image processing algorithms that analyze the collected images, converting the position the user points to on the screen and the changes of gesture into logical mouse coordinates and control commands for the computer; the wireless transmission module comprises a sending module and a receiving module, the sending module being connected to the image processing module and transmitting the processing results to the receiving module by radio, and the receiving module being connected to the computer and converting the results into mouse control signals input into the computer.
CN2008100301944A | priority and filing date 2008-08-15 | Human-machine interaction method and device based on sight tracing and gesture discriminating | Expired - Fee Related | CN101344816B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN2008100301944A | 2008-08-15 | 2008-08-15 | CN101344816B (en): Human-machine interaction method and device based on sight tracing and gesture discriminating

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN2008100301944A | 2008-08-15 | 2008-08-15 | CN101344816B (en): Human-machine interaction method and device based on sight tracing and gesture discriminating

Publications (2)

Publication Number | Publication Date
CN101344816A (en) | 2009-01-14
CN101344816B (en) | 2010-08-11

Family

Family ID: 40246828

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN2008100301944A | CN101344816B (en): Human-machine interaction method and device based on sight tracing and gesture discriminating (Expired - Fee Related) | 2008-08-15 | 2008-08-15

Country Status (1)

Country | Link
CN (1) | CN101344816B (en)

Cited By (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN102520794A (en)* | 2011-12-06 | 2012-06-27 | 华映视讯(吴江)有限公司 | Gesture recognition system and method
CN102520794B (en)* | 2011-12-06 | 2014-11-19 | 华映视讯(吴江)有限公司 | Gesture recognition system and method
WO2016070827A1 (en)* | 2014-11-07 | 2016-05-12 | 中兴通讯股份有限公司 | Method and apparatus for issuing and transmitting recognition information, and information recognition system



Also Published As

Publication number | Publication date
CN101344816A (en) | 2009-01-14


Legal Events

Code | Title | Description
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
C14 | Grant of patent or utility model |
GR01 | Patent grant |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2010-08-11; Termination date: 2018-08-15

