
Eye-tracking human-computer interaction method and device based on SOC (System On Chip)

Info

Publication number
CN101813976A
CN101813976A (application CN 201010123009)
Authority
CN
China
Prior art keywords
subwindow, human eye, eye, human, right eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010123009
Other languages
Chinese (zh)
Inventor
秦华标
陈荣华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN 201010123009
Publication of CN101813976A
Legal status: Pending

Abstract

Translated from Chinese


The invention discloses an SOC-based eye-tracking human-computer interaction method and device. In the method, a camera inputs captured digital images to an SOC platform; a hardware logic module implements an Adaboost detection algorithm based on Haar features to detect the human-eye region in each image; based on the detected eye region, a gaze-direction discrimination algorithm determines the user's gaze direction, which is then converted into mouse control signals and transmitted to a computer over USB, realizing human-computer interaction. The device comprises an SOC platform, a camera for capturing eye images, a computer, and four LEDs mounted at the four corners of the computer display and arranged in a rectangle; the SOC platform comprises a human-eye-region detection hardware logic module, a processor, and memory. By implementing eye-region detection and gaze-direction discrimination in hardware, the invention realizes human-computer interaction that is convenient to use and highly accurate.


Description

Eye tracking human-machine interaction method and device based on SOC
Technical field
The present invention relates to SOC (system-on-chip) design techniques; the gaze-tracking algorithm involved belongs to the fields of image processing and pattern recognition. Specifically, the invention is an SOC-based eye-tracking human-computer interaction method and device.
Background technology
A person's gaze plays an important role in human-computer interaction: it is direct, natural, and bidirectional. At present, eye-tracking technology has not reached a practical stage; successful practical projects are few, costs are high, and hardware requirements are demanding. Eye-tracking techniques fall broadly into two classes, contact and non-contact. Contact methods offer high precision, but the user must wear special apparatus, which causes considerable discomfort. Non-contact methods generally process video images, determining gaze direction by analyzing images of the eye region; they do not interfere with the user and are convenient to use.
Current research on eye-tracking human-computer interaction devices mainly targets computer platforms or high-performance embedded processors running pure software. Because the algorithms are computationally complex, such implementations consume substantial system resources, leaving little capacity for the user to perform other demanding tasks on the same computer. Given these limitations of pure-software eye tracking, the parallelism and pipelining of hardware logic can be exploited to implement the bulk of the computation in hardware, greatly improving algorithm efficiency. A search of the prior art literature found no reported SOC-based eye-tracking human-computer interaction method or device.
Summary of the invention
The present invention overcomes the deficiencies of existing eye-tracking technology by providing an SOC-based eye-tracking human-computer interaction method and device. Through a rational hardware/software partition on the SOC platform, the most computationally complex part, the human-eye-region detection, is implemented in hardware, greatly improving algorithm efficiency. The invention is achieved through the following technical solutions:
An SOC-based eye-tracking human-computer interaction method, comprising the steps of:
(1) A camera inputs the captured digital image to the SOC platform, and a hardware logic module implements an Adaboost detection algorithm based on Haar features to detect the human-eye region in the digital image;
(2) Based on the detected eye region, a gaze-direction discrimination algorithm determines the user's gaze; the gaze direction is then converted into mouse control signals and transmitted to the computer via USB, realizing human-computer interaction.
In the above method, the hardware logic module comprises the following modules:
an integration module, which computes the integral image and squared integral image of the digital image and stores the results in memory; a subwindow scanning module, which traverses the abscissa and ordinate of subwindows over the whole image frame at a set step size and outputs the coordinates and dimensions of each subwindow to be tested;
a subwindow processing module, which judges whether a subwindow under test is a human-eye subwindow;
a subwindow fusion module, which fuses all subwindows judged to contain an eye, i.e., merges windows whose positions are close, then readjusts the eye-window position to determine the eye region.
In the above method, the subwindow processing module uses the right-eye classifier trained by Modesto Castrillón and implements subwindow processing with the cascade method. The concrete steps comprise: first extracting the Haar feature parameters of the right-eye classifier; instantiating the Haar feature parameters, i.e., matching them to the size of the scanned subwindow; reading the integral data computed by the integration module according to the positions of the rectangular regions in the instantiated Haar feature parameters; and finally applying the cascade method to determine the eye subwindow.
In the above method, determining the eye subwindow with the cascade method means that several weighted right-eye weak classifiers form a right-eye strong classifier, and 20 stages of right-eye strong classifiers in series complete the determination. The concrete steps comprise: first reading the integral data from the integration module according to the Haar feature parameters of each right-eye weak classifier and computing the Haar feature value of the actual subwindow; comparing it with the threshold of the current weak classifier to determine that weak classifier's weight; finally accumulating the weights of these weak classifiers and comparing the sum with the threshold of the right-eye strong classifier. If the sum is greater than this threshold, the subwindow passes the verification of this strong-classifier stage and enters the discrimination of the next stage; if it is less than this threshold, the subwindow is judged not to be an eye region. A subwindow that passes the verification of all 20 strong-classifier stages is determined to be a human-eye subwindow.
In the above method, the gaze-direction discrimination algorithm computes the gaze direction from the geometric relationship between the pupil center and the four reflected glints, i.e., the Purkinje spots, formed on the cornea by four LED infrared light sources located at the four corners of the computer screen.
In the above method, the concrete steps of the gaze-direction discrimination algorithm comprise: first locating the pupil center by the gray-projection method; then, centered on the pupil center, searching for the Purkinje spots within a region extending 30 pixels up, down, left, and right; and computing the relationship between the pupil center and the four glints to determine the gaze direction.
An SOC-based human-computer interaction device realizing the above method comprises an SOC platform, a camera for capturing eye images, a computer, and four LEDs mounted at the four corners of the computer display and arranged in a rectangle; the SOC platform comprises a human-eye-region detection hardware logic module, a processor, and memory. The camera inputs the captured digital images to the memory on the SOC platform; the computer is connected to the SOC platform via USB; the hardware logic module performs eye-region detection; and the processor, based on the detected eye region, identifies the user's gaze direction with the gaze-direction discrimination algorithm, converts the control signal corresponding to the user's gaze direction into mouse control signals, and transmits them to the computer via USB.
In the above device, the computer screen is divided into four regions by its two diagonals. According to the region the user's eyes fixate, the processor of the SOC platform classifies the user's gaze into one of four directions (up, down, left, right) and simulates mouse movement as the user's control input; a blink action confirms the gaze control information, simulating a mouse click to input the user's confirmation.
In the above device, the blink action is a blink held for 1 to 3 seconds, used to issue a confirmation command.
The invention applies SOC-based eye-tracking technology to a human-computer interaction device, filling a domestic gap in this field. A camera connected to the SOC platform tracks the user's fixation. The two diagonals of the screen divide it into four regions, and fixation on each of the four regions issues one of four user control signals; an eye closure held for 1 to 3 seconds issues a confirmation command. Connecting the platform to a computer realizes the basic operating functions of a mouse.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. For the computationally complex eye-tracking algorithm, a rational hardware/software partition on the SOC platform fully exploits the parallelism and pipelining of hardware logic: the most computation-intensive part of the overall algorithm is implemented as a hardware logic module (the eye-region detection IP core), greatly improving algorithm efficiency and overcoming the high resource consumption and low efficiency of pure-software implementations.
2. The eye-tracking technology of the invention realizes a human-computer interaction device with low resource usage and high real-time performance. Its concrete functions include:
basic computer operations performed by gaze alone, such as opening a web page or paging up and down in an e-book;
serving as a human-computer interface for virtual reality: in a virtual environment, the scene information corresponding to the user's current fixation direction is presented, so the user sees different scenes in different directions, achieving an immersive effect; interaction between people and computers thus converges with interaction in the real world, becoming simpler, more natural, and more efficient.
Description of drawings
Fig. 1 is a block diagram of the SOC-based eye-tracking human-computer interaction device in an embodiment of the invention.
Fig. 2 is a layout diagram of the display screen, infrared light sources, and camera in an embodiment of the invention.
Fig. 3 is a flowchart of the gaze-tracking method in an embodiment of the invention.
Fig. 4 is a flowchart of the eye-region detection IP core in an embodiment of the invention.
Fig. 5 is a diagram of the subwindow-processing pipeline in an embodiment of the invention.
Fig. 6a to Fig. 6c are diagrams of the three kinds of Haar features in an embodiment of the invention.
Fig. 7 is a diagram of the Haar feature computation method in an embodiment of the invention.
Fig. 8 is a diagram of the diagonal intersection of the rectangle formed by the Purkinje spots in an embodiment of the invention.
Embodiments
The specific embodiments of the invention are further described below with reference to the accompanying drawings.
As shown in Fig. 1 and Fig. 2, the SOC-based eye-tracking human-computer interaction device comprises an SOC platform, a camera for capturing eye images, a computer, and four LEDs (infrared light sources) mounted at the four corners of the computer display and arranged in a rectangle. The SOC platform comprises a human-eye-region detection hardware logic module (the eye-region detection IP core), a processor, and memory. The camera inputs the captured digital images to the memory on the SOC platform; the computer is connected to the SOC platform via USB; the hardware logic module performs eye-region detection; and the processor, based on the detected eye region, identifies the user's gaze direction with the gaze-direction discrimination algorithm and converts the corresponding control signal into mouse control signals transmitted to the computer via USB.
In this embodiment, the infrared light sources are four LED lamps mounted at the four corners of the display, the camera is located below the center of the screen, and the digital images captured by the camera feed the eye-tracking module. The infrared sources form reflected glints, the Purkinje spots, on the corneal surface, and these spots serve as reference points for computing the gaze direction. The positions of the infrared sources and camera are shown in Fig. 2: four infrared LED lamps are installed at the four corners of the screen, and the camera is placed below the screen center. The camera is an ordinary 640 x 480 pixel camera; to increase its sensitivity to the infrared sources, its lens is replaced with one more sensitive to infrared, and a filter is added in front of the lens to exclude ambient light. Each LED on the screen corresponds to a reflected glint in the image, and the gaze corresponds to the pupil-center position. The four glints form a rectangle whose two diagonals divide it into four regions; the region containing the pupil center indicates the direction of the person's gaze.
In one embodiment, as shown in Fig. 3, the camera first captures an image of the user; the eye-region detection IP core then checks whether a human eye is present in the image, i.e., whether a user is currently using the system, and subsequent processing proceeds only after an eye is detected. On the basis of eye detection, a blink-state discrimination algorithm is applied: if the eyes are judged closed, a mouse-click signal is sent to the computer over the USB line; if the eyes are judged open, gaze-direction discrimination is performed and the gaze-direction information is sent to the computer over the USB line.
In this embodiment, the internal structure of the eye-region detection IP core is shown in Fig. 4. Whether a human eye is present in the image is judged by the Adaboost eye-detection algorithm based on Haar features; the concrete implementation steps are as follows:
Step 1: image integration. The image integration module computes the integral image and squared integral image of the digital image and stores the results in SRAM.
Step 2: subwindow scanning. The subwindow scanning module traverses the subwindows of the whole image frame.
Step 3: subwindow processing. The subwindow processing module judges whether each subwindow is an eye window.
Step 4: subwindow fusion. The subwindow fusion module merges all detected subwindows, removing adjacent eye subwindows, to yield the eye position.
The concrete implementation of step 1 is: the image pixel data is stored in memory on the SOC platform (off-chip SRAM), and a register holds the accumulated gray value of the current row. To speed up computation, a RAM is instantiated inside the SOC chip to hold the integral data of the previous row, reducing accesses to the off-chip SRAM. Each time the integral data of a coordinate point is computed, it is written to the external SRAM, and the corresponding entry of the on-chip RAM is overwritten for the next computation.
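As an illustration only (the patent implements this step in hardware logic, with a one-row buffer on-chip and results written to SRAM), a minimal software sketch of the integral-image computation is given below; the function name and the use of NumPy arrays are assumptions, not part of the patent:

    import numpy as np

    def integral_images(img):
        # Integral image: ii[y, x] is the sum of all pixels above and to
        # the left of (x, y), inclusive; sq is the same for squared pixel
        # values (commonly used for variance normalization in Adaboost
        # detectors). The patent streams these values row by row.
        img = img.astype(np.int64)
        ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
        sq = np.cumsum(np.cumsum(img * img, axis=0), axis=1)
        return ii, sq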
The concrete implementation of step 2 is: a state machine traverses the subwindows. The abscissa and ordinate of the subwindows are first traversed over the whole image frame at a set step size; the subwindow is then multiplied by a scale factor and the horizontal and vertical traversal is repeated, yielding the coordinates and dimensions of each subwindow to be tested.
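A software analogue of this scanning state machine, with illustrative values assumed for the base window size, step, and scale factor (the patent does not fix them here):

    def scan_subwindows(img_w, img_h, base_w=24, base_h=24, step=2, scale=1.25):
        # Raster-scan the frame at a fixed step, then enlarge the window
        # by the scale factor and scan again, as the state machine does.
        w, h = base_w, base_h
        while w <= img_w and h <= img_h:
            for y in range(0, img_h - h + 1, step):
                for x in range(0, img_w - w + 1, step):
                    yield x, y, w, h  # one subwindow to be tested
            w, h = int(w * scale), int(h * scale)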
The concrete implementation of step 3 is: using the right-eye classifier features trained by Modesto Castrillón, subwindow processing is realized with the cascade method. The concrete steps comprise: first extracting the Haar feature parameters of the right-eye classifier; instantiating the Haar feature parameters, i.e., matching them to the size of the scanned subwindow; reading the integral data computed by the integration module according to the positions of the rectangular regions in the Haar feature parameters; and finally applying the cascade method to determine the eye subwindow. To speed up processing, the right-eye classifier data is kept in a ROM instantiated inside the SOC chip: the classifier data is read from the published .xml file of OpenCV 1.0 and saved in .mif format to initialize the ROM. As shown in Fig. 5, the discrimination performed by each classifier is designed as a pipeline, exploiting the parallel computing capability of hardware logic.
Determining the eye subwindow with the cascade method in step 3 means that several weighted right-eye weak classifiers form a right-eye strong classifier, and 20 stages of right-eye strong classifiers in series complete the detection of the eye region. In the invention, the successive strong-classifier stages consist of 10, 10, 16, 20, 16, 20, 24, 30, 34, 38, 38, 42, 44, 48, 48, 56, 52, 58, 68, and 64 right-eye weak classifiers, respectively. The concrete steps comprise: first reading the integral data from the integration module according to the Haar feature parameters of each weak classifier and computing the Haar feature value of the actual subwindow; comparing it with the threshold of the current right-eye weak classifier to determine that weak classifier's weight; finally accumulating the weights of these weak classifiers and comparing the sum with the threshold of the right-eye strong classifier. If the sum exceeds the threshold, the subwindow passes this strong-classifier stage and enters the discrimination of the next stage; otherwise the subwindow is judged not to be an eye region. A subwindow that passes all 20 strong-classifier stages is determined to be a human-eye subwindow.
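A software sketch of this cascade evaluation follows. The stage and weak-classifier data layout is an assumption modeled on OpenCV 1.0 cascade files, not a structure given in the patent, and rect_sum is the integral-image helper sketched after the Haar-feature discussion below:

    def eval_cascade(ii, win, stages):
        # stages: list of (stage_threshold, weak_list); each weak classifier
        # is (feature, weak_threshold, pass_weight, fail_weight), where
        # feature is a list of (rect, rect_weight) pairs.
        for stage_threshold, weaks in stages:
            total = 0.0
            for feature, weak_threshold, pass_w, fail_w in weaks:
                value = sum(w * rect_sum(ii, win, r) for r, w in feature)
                total += pass_w if value >= weak_threshold else fail_w
            if total < stage_threshold:
                return False   # rejected at this stage: not an eye
        return True            # passed all 20 stages: eye subwindow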
The Haar features, also called rectangle features, are sensitive to simple image structures such as edges and line segments, and can describe structure in particular orientations (horizontal, vertical, center), as shown in Fig. 6. These features characterize local Haar features of the image: the two-rectangle features of Fig. 6a characterize top-bottom and left-right boundary features respectively, the rectangle feature of Fig. 6b characterizes a thin-line feature, and the rectangle feature of Fig. 6c characterizes a diagonal feature. Some characteristics of the eye can be described simply by rectangle features: for example, the eyebrow is darker than the eyelid, presenting a top-bottom boundary feature, and the eye margin is darker than the eyelid, presenting a left-right boundary feature.
As shown in Fig. 6, the Haar feature value is the sum of all pixels in the white rectangular region minus the sum of all pixels in the gray rectangular region. The concrete computation is: according to the exact position of each rectangular region in the Haar feature parameters, extract the integral data at the four corner points of the region; the sum of all pixels in the region can then be computed. As illustrated in Fig. 7, the outer rectangle is the image to be processed, and A, B, C, D are rectangular regions within it. The integral values are computed as follows: the integral at point 1 is the sum of all pixels in rectangle A; the integral at point 2 is A + C; the integral at point 3 is A + B; and the integral at point 4 is A + B + C + D. The sum of all pixels in D can therefore be computed as 4 + 1 - (2 + 3). Thus the pixel sum of any rectangle in the image can be computed from the integral data of four points. Finally, for the rectangular regions indicated by the Haar features of the right-eye classifier, the pixel sums are subtracted to give the Haar feature value.
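The four-point lookup can be sketched as follows; the (x, y, w, h) rectangle convention and the argument names are assumptions (the patent's hardware reads the same four integral values from SRAM):

    def rect_sum(ii, win, rect):
        # Pixel sum inside rect (given relative to subwindow win) from four
        # integral-image lookups: the 4 + 1 - (2 + 3) rule of Fig. 7.
        wx, wy, _, _ = win
        x, y, w, h = rect
        x, y = wx + x, wy + y
        a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0   # point 1
        b = ii[y - 1, x + w - 1] if y > 0 else 0         # point 3
        c = ii[y + h - 1, x - 1] if x > 0 else 0         # point 2
        d = ii[y + h - 1, x + w - 1]                     # point 4
        return d + a - b - c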
The concrete implementation of step 4 is: all subwindows determined to contain an eye are fused to locate the eye position. Because step 3 yields more than one eye subwindow, and the subwindows may intersect or contain one another, they are merged according to their positions and sizes to reduce duplicate windows: windows with close positions are integrated, and the eye-window position is then readjusted.
In this embodiment, blink-state discrimination counts the black pixels of the binarized eye region in each frame and compares the count with that of the previous frame; the inter-frame relationship between black-pixel counts indicates whether the eye has changed from open to closed, i.e., a blink action. The detailed process is as follows (a software sketch follows the list):
Let the i-th frame be Fi, with eye region Di.
1. In Fi, count the pixels of region Di with gray value less than 150, giving Ci;
2. In Fi+1, count the pixels of region Di with gray value less than 150, giving Ci';
3. If Ci/Ci' > 0.9, a possible eye-closing event is considered to have occurred;
4. If, following a possible eye-closing event, no eye is detected for several consecutive frames, the state is determined to be eyes closed.
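A minimal sketch of this counting rule, assuming grayscale NumPy frames and a detector-supplied eye box (the helper name is illustrative; the thresholds 150 and 0.9 are from the patent):

    import numpy as np

    def possible_eye_closings(frames, eye_box, dark_thresh=150, ratio=0.9):
        # Count dark pixels of the eye region in each frame and flag frame
        # pairs where C_i / C_i' > 0.9 (a possible eye-closing event).
        x, y, w, h = eye_box
        counts = [int((f[y:y + h, x:x + w] < dark_thresh).sum()) for f in frames]
        return [c_next > 0 and c / c_next > ratio
                for c, c_next in zip(counts, counts[1:])]

A flagged pair is only a candidate; per step 4, the closed-eye state is confirmed only when the eye detector then fails for several consecutive frames.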
In this embodiment, the gaze direction is judged by detecting the pupil center and the positions of the Purkinje spots, then computing the positional relationship between them geometrically. The concrete steps are as follows:
Step 1: locate the pupil center, then use the relationship between the pupil center and the Purkinje spots, together with the geometric properties of the spots, to search for the Purkinje spots.
Step 2: compute the positional relationship between the pupil center and the Purkinje spots geometrically, thereby discriminating the gaze direction.
The concrete implementation of step 1 is:
Within the eye region, the Purkinje spots have the following geometric properties:
1. They lie around the pupil, within 30 pixels of the pupil center;
2. Their size ranges from 5 to 20 pixels, and their gray values exceed 100;
3. Within the eye region, the gray value at each bright spot is a local maximum; under ideal conditions, the gray-level discontinuity is greatest at the four Purkinje spots;
4. The distances between the four Purkinje spots lie in the range of 8 to 18 pixels, and the spots approximately form a rectangle.
The Purkinje spots (i.e., the bright spots) are therefore sought by the following steps (a software sketch follows the list):
1. Locate the pupil center by the horizontal and vertical gray-projection method, and take the range of 30 pixels in each direction around this center as the search region;
2. Within the search region, find the gray-level extreme points, i.e., the point set G whose gray values satisfy:
g(x0, y0) >= max{g(x, y)},  g(x0, y0) > 100
where g(x0, y0) is the gray value at point (x0, y0).
3. Convolve each point of G with its neighborhood using the Laplacian operator below, obtaining the differential f at each point g:
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1
-1 -1 24 -1 -1
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1
Because the Laplacian is an isotropic differential operator, its effect is to emphasize regions of abrupt gray-level change in the image: the larger the value of the Laplacian convolution, the sharper the gray-level discontinuity at that point. A 5 x 5 Laplacian operator is adopted to further suppress noise;
4. Sort the point set G by its differential value f and select the four points P0~P3 with the largest f as candidate points;
5. Check whether P0~P3 can form a rectangle; if so, P0~P3 are determined to be the four Purkinje spots, otherwise the current frame image is discarded.
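Steps 1 to 4 might be sketched as follows (the rectangle check of step 5 is left to the caller); the use of SciPy's convolve and the helper name are assumptions, while the 5 x 5 kernel and the thresholds 30 and 100 come from the patent:

    import numpy as np
    from scipy.ndimage import convolve

    # 5 x 5 Laplacian kernel: all entries -1, with 24 at the center.
    LAPLACIAN = -np.ones((5, 5))
    LAPLACIAN[2, 2] = 24.0

    def purkinje_candidates(eye_img, pupil_xy, search=30, gray_min=100):
        px, py = pupil_xy
        f = convolve(eye_img.astype(float), LAPLACIAN, mode="nearest")
        ys, xs = np.mgrid[0:eye_img.shape[0], 0:eye_img.shape[1]]
        region = (np.abs(xs - px) <= search) & (np.abs(ys - py) <= search)
        bright = region & (eye_img > gray_min)
        # Rank bright in-region points by Laplacian response, keep the top 4.
        idx = np.argsort(f[bright])[-4:]
        return np.column_stack((xs[bright][idx], ys[bright][idx]))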
The concrete implementation of step 2 is:
By physical and geometric reasoning, a one-to-one correspondence between the screen and the captured image can be established: the LEDs at the four corners of the screen correspond to the four bright spots in the eye image, and the gaze direction corresponds to the pupil center. From this correspondence, the gaze direction can be judged from the captured eye image. As shown in Fig. 8, let P0, P1, P2, P3 be the four detected Purkinje spots and Q the pupil center. The coordinates of the diagonal intersection O of P0~P3 are obtained with the section formula; connecting OQ, OP0, OP1, OP2, and OP3, the segments OP0~OP3 divide the rectangle formed by P0~P3 into four regions, and computing which region OQ falls in gives the gaze direction. The concrete method is as follows:
1. With reference to Fig. 8, find the diagonal intersection O of P0~P3 by computational geometry.
From the triangle area formula and the definition of the cross product:
|P1O| / |P3O| = S(P0P1P2) / S(P0P3P2) = |P0P1 x P0P2| / |P0P3 x P0P2|
where S(P0P1P2) is the area of triangle P0P1P2, S(P0P3P2) is the area of triangle P0P3P2, and P0P1, P0P2, P0P3 denote the vectors from P0 to the respective points.
From the section formula (O divides P1P3 in the ratio |P1O| : |OP3| = S(P0P1P2) : S(P0P3P2)), the x coordinate of O is obtained as:
xO = (S(P0P3P2) * xP1 + S(P0P1P2) * xP3) / (S(P0P1P2) + S(P0P3P2))
The y coordinate of O is obtained in the same way.
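A sketch of this computation, with points as (x, y) tuples (the function is illustrative, not the patent's hardware code):

    def diagonal_intersection(p0, p1, p2, p3):
        # 2-D scalar cross product of vectors OA and OB.
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        s1 = abs(cross(p0, p1, p2)) / 2.0   # area of triangle P0 P1 P2
        s2 = abs(cross(p0, p3, p2)) / 2.0   # area of triangle P0 P3 P2
        # Section formula: O divides P1P3 with |P1O| : |OP3| = s1 : s2.
        ox = (s2 * p1[0] + s1 * p3[0]) / (s1 + s2)
        oy = (s2 * p1[1] + s1 * p3[1]) / (s1 + s2)
        return ox, oy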
2. Connect OQ, OP0, OP1, OP2, and OP3; the region containing the pupil center Q is then obtained from the following relationships (a sketch of this classification follows the list):
Region 0: OQ falls between OP0 and OP1; the corresponding gaze direction is "up";
Region 1: OQ falls between OP1 and OP2; the corresponding gaze direction is "right";
Region 2: OQ falls between OP2 and OP3; the corresponding gaze direction is "down";
Region 3: OQ falls between OP3 and OP0; the corresponding gaze direction is "left".
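One way to implement "OQ falls between OPi and OPi+1" is to compare polar angles around O; this sketch assumes the spots are ordered around the rectangle (e.g., top-left, top-right, bottom-right, bottom-left in image coordinates), which the patent does not state explicitly:

    import math

    def gaze_direction(o, q, spots):
        # spots = [P0, P1, P2, P3]; returns the patent's region labels.
        ang = lambda p: math.atan2(p[1] - o[1], p[0] - o[0])
        aq = ang(q)
        labels = ["up", "right", "down", "left"]
        for i in range(4):
            a, b = ang(spots[i]), ang(spots[(i + 1) % 4])
            span = (b - a) % (2 * math.pi)
            if (aq - a) % (2 * math.pi) <= span:
                return labels[i]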
In this embodiment, after the user's gaze direction is determined, it is converted into USB mouse control signals and transmitted to the computer over the USB line, realizing human-computer interaction.

Claims (9)

1. An SOC-based eye-tracking human-computer interaction method, characterized in that the method comprises the steps of:
(1) a camera inputs the captured digital image to the SOC platform, and a hardware logic module implements an Adaboost detection algorithm based on Haar features to detect the human-eye region in the digital image;
(2) based on the detected eye region, a gaze-direction discrimination algorithm determines the user's gaze; the gaze direction is then converted into mouse control signals and transmitted to the computer via USB, realizing human-computer interaction.
2. The method according to claim 1, characterized in that the hardware logic module comprises the following modules: an integration module, which computes the integral image and squared integral image of the digital image and stores the results in memory; a subwindow scanning module, which traverses the abscissa and ordinate of subwindows over the whole image frame at a set step size and outputs the coordinates and dimensions of each subwindow to be tested;
a subwindow processing module, which judges whether a subwindow under test is a human-eye subwindow;
a subwindow fusion module, which fuses all subwindows judged to contain an eye, i.e., merges windows whose positions are close, then readjusts the eye-window position to determine the eye region.
3. The method according to claim 2, characterized in that the subwindow processing module uses the right-eye classifier trained by Modesto Castrillón and implements subwindow processing with the cascade method, the concrete steps comprising: first extracting the Haar feature parameters of the right-eye classifier; instantiating the Haar feature parameters, i.e., matching them to the size of the scanned subwindow; reading the integral data computed by the integration module according to the positions of the rectangular regions in the instantiated Haar feature parameters; and finally applying the cascade method to determine the eye subwindow.
4. The method according to claim 3, characterized in that determining the eye subwindow with the cascade method means that several weighted right-eye weak classifiers form a right-eye strong classifier and that 20 stages of right-eye strong classifiers in series complete the determination, the concrete steps comprising: first reading the integral data from the integration module according to the Haar feature parameters of each right-eye weak classifier and computing the Haar feature value of the actual subwindow; comparing it with the threshold of the current right-eye weak classifier to determine that weak classifier's weight; finally accumulating the weights of these weak classifiers and comparing the sum with the threshold of the right-eye strong classifier; if the sum is greater than this threshold, the subwindow passes the verification of this strong-classifier stage and enters the discrimination of the next stage; if it is less than this threshold, the subwindow is judged not to be an eye region; a subwindow that passes the verification of all 20 strong-classifier stages is determined to be a human-eye subwindow.
5. The method according to claim 1, characterized in that the gaze-direction discrimination algorithm computes the gaze direction from the geometric relationship between the pupil center and the four reflected glints, i.e., the Purkinje spots, formed on the cornea by four LED infrared light sources located at the four corners of the computer screen.
6. The method according to claim 4, characterized in that the concrete steps of the gaze-direction discrimination algorithm comprise: first locating the pupil center by the gray-projection method; then, centered on the pupil center, searching for the Purkinje spots within a region extending 30 pixels up, down, left, and right; and computing the relationship between the pupil center and the four glints to determine the gaze direction.
7. An SOC-based human-computer interaction device realizing the method of any one of claims 1 to 6, characterized by comprising an SOC platform, a camera for capturing eye images, a computer, and four LEDs mounted at the four corners of the computer display and arranged in a rectangle, the SOC platform comprising a human-eye-region detection hardware logic module, a processor, and memory; the camera inputs the captured digital images to the memory on the SOC platform; the computer is connected to the SOC platform via USB; the hardware logic module performs eye-region detection; and the processor, based on the detected eye region, identifies the user's gaze direction with the gaze-direction discrimination algorithm, converts the control signal corresponding to the user's gaze direction into mouse control signals, and transmits them to the computer via USB.
8. The device according to claim 7, characterized in that the computer screen is divided into four regions by its two diagonals; according to the region the user's eyes fixate, the processor of the SOC platform classifies the user's gaze into one of four directions (up, down, left, right) and simulates mouse movement as the user's control input; a blink action confirms the gaze control information, simulating a mouse click to input the user's confirmation.
9. The device according to claim 8, characterized in that the blink action is a blink held for 1 to 3 seconds, used to issue a confirmation command.
CN 201010123009 | Filed 2010-03-09 | Priority 2010-03-09 | Sighting tracking man-computer interaction method and device based on SOC (System On Chip) | Pending | CN101813976A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN 201010123009 | 2010-03-09 | 2010-03-09 | CN101813976A (en) Sighting tracking man-computer interaction method and device based on SOC (System On Chip)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN 201010123009 | 2010-03-09 | 2010-03-09 | CN101813976A (en) Sighting tracking man-computer interaction method and device based on SOC (System On Chip)

Publications (1)

Publication Number | Publication Date
CN101813976A | 2010-08-25

Family

ID=42621247

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN 201010123009 (Pending) | CN101813976A (en) Sighting tracking man-computer interaction method and device based on SOC (System On Chip) | 2010-03-09 | 2010-03-09

Country Status (1)

Country | Link
CN (1) | CN101813976A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103247019A (en) * | 2013-04-17 | 2013-08-14 | 清华大学 | Reconfigurable Device Based on AdaBoost Algorithm for Object Detection
CN103475893A (en) * | 2013-09-13 | 2013-12-25 | 北京智谷睿拓技术服务有限公司 | Device and method for picking object in three-dimensional display
CN103559809A (en) * | 2013-11-06 | 2014-02-05 | 常州文武信息科技有限公司 | Computer-based on-site interaction demonstration system
WO2014029245A1 (en) * | 2012-08-22 | 2014-02-27 | 中国移动通信集团公司 | Terminal input control method and apparatus
CN103777351A (en) * | 2012-10-26 | 2014-05-07 | 鸿富锦精密工业(深圳)有限公司 | Multimedia glasses
WO2014075418A1 (en) * | 2012-11-13 | 2014-05-22 | 华为技术有限公司 | Man-machine interaction method and device
CN104253944A (en) * | 2014-09-11 | 2014-12-31 | 陈飞 | Sight connection-based voice command issuing device and method
CN104968270A (en) * | 2012-12-11 | 2015-10-07 | 阿米·克林 | Systems and methods for detecting blink inhibition as a marker of engagement and perceived stimulus salience
US20160026242A1 | 2014-07-25 | 2016-01-28 | Aaron Burns | Gaze-based object placement within a virtual reality environment
CN105373766A (en) * | 2014-08-14 | 2016-03-02 | 由田新技股份有限公司 | Pupil positioning method and device
WO2016176959A1 (en) * | 2015-05-04 | 2016-11-10 | 惠州Tcl移动通信有限公司 | Multi-screen control method and system for display screen based on eyeball tracing technology
CN106528468A (en) * | 2016-10-11 | 2017-03-22 | 深圳市紫光同创电子有限公司 | USB data monitoring apparatus, method and system
CN106990839A (en) * | 2017-03-21 | 2017-07-28 | 张文庆 | A kind of eyeball identification multimedia player and its implementation
CN109788219A (en) * | 2019-01-18 | 2019-05-21 | 天津大学 | A high-speed CMOS image sensor readout scheme for human eye gaze tracking
US10311638B2 | 2014-07-25 | 2019-06-04 | Microsoft Technology Licensing, LLC | Anti-trip when immersed in a virtual reality environment
CN109960405A (en) * | 2019-02-22 | 2019-07-02 | 百度在线网络技术(北京)有限公司 | Mouse operation method, device and storage medium
CN110072040A (en) * | 2019-04-22 | 2019-07-30 | 东华大学 | A kind of image collecting device based on raspberry pie
US10451875B2 | 2014-07-25 | 2019-10-22 | Microsoft Technology Licensing, LLC | Smart transparency for virtual objects
US10649212B2 | 2014-07-25 | 2020-05-12 | Microsoft Technology Licensing LLC | Ground plane adjustment in a virtual reality environment
CN114092985A (en) * | 2020-07-31 | 2022-02-25 | 中移(苏州)软件技术有限公司 | A terminal control method, device, terminal and storage medium
WO2024041488A1 (en) * | 2022-08-22 | 2024-02-29 | 北京七鑫易维信息技术有限公司 | Electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1731418A (en) * | 2005-08-19 | 2006-02-08 | 清华大学 | Robust method for precise eye localization in complex background images
CN101344816A (en) * | 2008-08-15 | 2009-01-14 | 华南理工大学 | Human-computer interaction method and device based on gaze tracking and gesture recognition
CN101344919A (en) * | 2008-08-05 | 2009-01-14 | 华南理工大学 | Eye-tracking method and assistive system for the disabled using the method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1731418A (en) * | 2005-08-19 | 2006-02-08 | 清华大学 | Robust method for precise eye localization in complex background images
CN101344919A (en) * | 2008-08-05 | 2009-01-14 | 华南理工大学 | Eye-tracking method and assistive system for the disabled using the method
CN101344816A (en) * | 2008-08-15 | 2009-01-14 | 华南理工大学 | Human-computer interaction method and device based on gaze tracking and gesture recognition

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103631365A (en) * | 2012-08-22 | 2014-03-12 | 中国移动通信集团公司 | Terminal input control method and device
CN103631365B (en) * | 2012-08-22 | 2016-12-21 | 中国移动通信集团公司 | Terminal input control method and device
WO2014029245A1 (en) * | 2012-08-22 | 2014-02-27 | 中国移动通信集团公司 | Terminal input control method and apparatus
CN103777351A (en) * | 2012-10-26 | 2014-05-07 | 鸿富锦精密工业(深圳)有限公司 | Multimedia glasses
WO2014075418A1 (en) * | 2012-11-13 | 2014-05-22 | 华为技术有限公司 | Man-machine interaction method and device
US9740281B2 | 2012-11-13 | 2017-08-22 | Huawei Technologies Co., Ltd. | Human-machine interaction method and apparatus
US12239444B2 | 2012-12-11 | 2025-03-04 | Children's Healthcare Of Atlanta, Inc. | Systems and methods for detecting blink inhibition as a marker of engagement and perceived stimulus salience
CN104968270A (en) * | 2012-12-11 | 2015-10-07 | 阿米·克林 | Systems and methods for detecting blink inhibition as a marker of engagement and perceived stimulus salience
CN103247019B (en) * | 2013-04-17 | 2016-02-24 | 清华大学 | Reconfigurable device based on AdaBoost algorithm for object detection
CN103247019A (en) * | 2013-04-17 | 2013-08-14 | 清华大学 | Reconfigurable Device Based on AdaBoost Algorithm for Object Detection
CN103475893A (en) * | 2013-09-13 | 2013-12-25 | 北京智谷睿拓技术服务有限公司 | Device and method for picking object in three-dimensional display
CN103559809A (en) * | 2013-11-06 | 2014-02-05 | 常州文武信息科技有限公司 | Computer-based on-site interaction demonstration system
CN103559809B (en) * | 2013-11-06 | 2017-02-08 | 常州文武信息科技有限公司 | Computer-based on-site interaction demonstration system
CN106575153A (en) * | 2014-07-25 | 2017-04-19 | 微软技术许可有限责任公司 | Gaze-based object placement within a virtual reality environment
CN106575153B (en) * | 2014-07-25 | 2020-03-27 | 微软技术许可有限责任公司 | Gaze-based object placement within a virtual reality environment
US10649212B2 | 2014-07-25 | 2020-05-12 | Microsoft Technology Licensing LLC | Ground plane adjustment in a virtual reality environment
US10451875B2 | 2014-07-25 | 2019-10-22 | Microsoft Technology Licensing, LLC | Smart transparency for virtual objects
US10416760B2 | 2014-07-25 | 2019-09-17 | Microsoft Technology Licensing, LLC | Gaze-based object placement within a virtual reality environment
US20160026242A1 | 2014-07-25 | 2016-01-28 | Aaron Burns | Gaze-based object placement within a virtual reality environment
US10311638B2 | 2014-07-25 | 2019-06-04 | Microsoft Technology Licensing, LLC | Anti-trip when immersed in a virtual reality environment
CN105373766B (en) * | 2014-08-14 | 2019-04-23 | 由田新技股份有限公司 | Pupil positioning method and device
CN105373766A (en) * | 2014-08-14 | 2016-03-02 | 由田新技股份有限公司 | Pupil positioning method and device
CN104253944A (en) * | 2014-09-11 | 2014-12-31 | 陈飞 | Sight connection-based voice command issuing device and method
US10802581B2 | 2015-05-04 | 2020-10-13 | Huizhou Tcl Mobile Communication Co., Ltd. | Eye-tracking-based methods and systems of managing multi-screen view on a single display screen
WO2016176959A1 (en) * | 2015-05-04 | 2016-11-10 | 惠州Tcl移动通信有限公司 | Multi-screen control method and system for display screen based on eyeball tracing technology
CN106528468A (en) * | 2016-10-11 | 2017-03-22 | 深圳市紫光同创电子有限公司 | USB data monitoring apparatus, method and system
CN106990839A (en) * | 2017-03-21 | 2017-07-28 | 张文庆 | A kind of eyeball identification multimedia player and its implementation
CN109788219B (en) * | 2019-01-18 | 2021-01-15 | 天津大学 | High-speed CMOS image sensor reading method for human eye sight tracking
CN109788219A (en) * | 2019-01-18 | 2019-05-21 | 天津大学 | A high-speed CMOS image sensor readout scheme for human eye gaze tracking
CN109960405A (en) * | 2019-02-22 | 2019-07-02 | 百度在线网络技术(北京)有限公司 | Mouse operation method, device and storage medium
CN110072040A (en) * | 2019-04-22 | 2019-07-30 | 东华大学 | A kind of image collecting device based on raspberry pie
CN114092985A (en) * | 2020-07-31 | 2022-02-25 | 中移(苏州)软件技术有限公司 | A terminal control method, device, terminal and storage medium
WO2024041488A1 (en) * | 2022-08-22 | 2024-02-29 | 北京七鑫易维信息技术有限公司 | Electronic device

Similar Documents

Publication | Title
CN101813976A (en) | Sighting tracking man-computer interaction method and device based on SOC (System On Chip)
CN101344919A (en) | Eye-tracking method and assistive system for the disabled using the method
CN101950355B (en) | Method for detecting fatigue state of driver based on digital video
CN102402680B (en) | Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN103677270B (en) | A kind of man-machine interaction method based on eye-tracking
CN101322589B (en) | Non-Contact Anthropometric Method for Apparel Design
CN101477631B (en) | Method, device and human-computer interaction system for extracting target from image
CN108108684A (en) | A kind of attention detection method for merging line-of-sight detection
CN101281646A (en) | Real-time detection method of driver fatigue based on vision
CN101375796A (en) | Real-time detection system of fatigue driving
CN102324166A (en) | Fatigue driving detection method and device
CN105955465A (en) | Desktop portable sight line tracking method and apparatus
CN111783702A (en) | Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning
CN116977727A (en) | Image processing-based gas combustion flame real-time monitoring method and system
CN108681403A (en) | A car control method using eye-tracking
CN111091069A (en) | A power grid target detection method and system guided by blind image quality assessment
CN105404866B (en) | A kind of implementation method of multi-mode automatic implementation body state perception
CN116977270A (en) | Online visual intelligent detection method for defects on whole surface of high-precision bearing
Jian-Nan et al. | Key techniques of eye gaze tracking based on pupil corneal reflection
CN104331705A (en) | Automatic detection method for gait cycle through fusion of spatiotemporal information
CN118230240A (en) | Remote field personnel security violation detection method and terminal equipment
CN104715234A (en) | Side view detecting method and system
CN111513671A (en) | An evaluation method of glasses comfort based on eye images
JPH07311833A (en) | Person face detection device
CN117523612A (en) | A dense pedestrian detection method based on Yolov5 network

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
SE01 | Entry into force of request for substantive examination
C12 | Rejection of a patent application after its publication
RJ01 | Rejection of invention patent application after publication

Application publication date: 2010-08-25

