Technical Field
The present invention relates to the field of medical devices, and more particularly to a reliable self-testing method for glaucoma patients.
Background Art
Glaucoma is a disease caused by persistently elevated intraocular pressure. When the pressure is too high, it causes irreversible damage to the internal tissues of the eye and leads to blindness in later stages. Treatment must therefore begin early to reduce this risk. In China, annual eye examinations are not yet common practice, and some glaucoma patients have good vision in the early stage, so that by the time the disease is discovered it is already advanced.
At present there is a free, public-interest glaucoma screening application on the market called visualFieldseasy, which users can run for self-testing at any time. It raises the chance of detecting glaucoma at an early stage and reduces the likelihood of blindness in glaucoma patients, a significant contribution to society. visualFieldseasy takes "glaucoma narrows the visual field" as its theoretical basis for detection. While it runs, the user is asked to cover one eye and fix the other on a corner of the screen. The software dynamically generates flashing stimulus points elsewhere on the screen and, by testing whether the user reacts to them, determines the user's field of vision and judges whether the user may have glaucoma.
visualFieldseasy has a serious problem: some test data may be invalid. Because a test takes a certain amount of time, the user is more or less affected by outside stimuli, and the gaze may drift from the red fixation circle as required; the result for any white stimulus circle shown at that moment is invalid. Invalid test data distorts the judgment of whether the user has glaucoma, and may even tell a glaucoma patient, wrongly, that they do not have the disease, causing the patient to miss the window for treatment.
Summary of the Invention
The present invention overcomes the problem of invalid data in the use of visualFieldseasy by providing a reliable self-testing method for glaucoma patients, so as to improve the accuracy of judging whether the tester has glaucoma.
To solve the above technical problem, the technical solution of the present invention is as follows:
A reliable self-testing method for glaucoma patients comprises the following steps:
S1: Face localization: capture a face image, identify the face region by skin-color segmentation, determine the face boundaries, and extract the face;
S2: Eye detection: after the face is extracted, identify the eye region;
S3: Extract the pupil-center-to-eye-corner position vector, judge whether the gaze direction has moved, and estimate the actual gaze direction in real time from the value of this vector with the help of principal directions calibrated in advance, thereby performing glaucoma detection.
In a preferred embodiment, in step S1, the face region is identified by skin-color segmentation as follows: in YCbCr space, each pixel is set to black or to white according to the values of the three channels, using the following formula:
pow = m² + n²
where y, cb, and cr are the values of a single image pixel in the Y, Cb, and Cr channels, and value is the binarization result for that pixel: a value of 255 means the pixel is a skin-color point; otherwise it is not.
In a preferred embodiment, in step S1, after binarization an erosion operation is applied to the binary image to remove part of the background noise, with little effect on the face region.
In a preferred embodiment, in step S1, determining the face boundaries and extracting the face comprises the following steps:
1) Project the skin-color segmentation result vertically and extract the "plateau" section of the projection, thereby determining the left and right boundaries of the face;
2) Crop the skin-color segmentation result according to the left and right face boundaries;
3) Project the cropped result horizontally as a grayscale projection and extract the "plateau" section, thereby determining the upper and lower boundaries of the face.
In a preferred embodiment, in step S2, the eye region is identified as follows:
1) Run a region-detection algorithm on the face skin-color segmentation result to detect the black regions;
2) Expand the areas of the detected regions appropriately to obtain a set of candidate eye regions;
3) Repeatedly crop a portion of each candidate eye region and feed it to an AdaBoost classifier until an eye is detected or the whole candidate region is found to contain no eye;
4) When an eye is detected in a candidate region, check whether the left and right boundaries of the detection result intersect a black block in the skin-color segmentation result; if so, expand the area accordingly.
In a preferred embodiment, in step S3, extracting the pupil-center-to-eye-corner position vector comprises the following steps:
S3.1: Pupil-center localization: after the eye image is binarized, project the counts of black pixels vertically and horizontally. The vertical black-pixel projection yields the left and right boundaries of the iris; the iris component is then cropped horizontally from the binary eye image, and a new horizontal black-pixel projection on the crop determines the upper and lower boundaries of the iris. The x-coordinate of the pupil center is the midpoint of the left and right iris boundaries; the y-coordinate is the midpoint of the upper and lower iris boundaries;
S3.2: Adaptive eye binarization: the center value of the edge-enhancement template is adjusted adaptively with the iris area as the basic criterion, yielding an adaptive binarization result. The procedure is as follows:
1) Count the pixels of the black iris blob, num1;
2) Filter the eye image with an edge-enhancement template whose center value is 10.8, convert it to grayscale, binarize it with OTSU self-balancing binarization, and count the black pixels of the whole binary image, num2;
3) If num2 > num1 * 1.4, go to step 4); otherwise go to step 5);
4) Filter the eye image with an edge-enhancement template whose center value is 10, convert it to grayscale, binarize it with OTSU self-balancing binarization, and apply a dilation operation to remove noise;
5) Using the upper, lower, left, and right iris boundaries, set the corresponding region of the binary image entirely to black, so as to remove the effect of bright ambient light reflected by the iris. The result is the desired binary image;
S3.3: Position-vector normalization: after each pupil-center localization, normalize with respect to the distance between the two eye-corner points;
Assume the detected left eye-corner coordinates are (Lx, Ly), the right eye-corner coordinates are (Rx, Ry), and the pupil-center coordinates are (Cx, Cy), and take the position vector of the pupil center relative to the left eye corner as the gaze criterion. The normalized position vector (Δx, Δy) is then obtained as:
Δx = (Cx - Lx) / d, Δy = (Cy - Ly) / d, where d = √((Rx - Lx)² + (Ry - Ly)²) is the distance between the two eye corners.
Whether the gaze direction has moved is then judged from whether (Δx, Δy) changes, and the actual gaze direction is estimated in real time from the value of (Δx, Δy) with the help of principal directions calibrated in advance.
Compared with the prior art, the beneficial effects of the technical solution of the present invention are as follows. The present invention provides a reliable self-testing method for glaucoma patients: face localization captures a face image, identifies the face region by skin-color segmentation, determines the face boundaries, and extracts the face; eye detection identifies the eye region after the face is extracted; the pupil-center-to-eye-corner position vector is then extracted to judge whether the gaze direction has moved, and the actual gaze direction is estimated in real time from the value of this vector with the help of principal directions calibrated in advance, thereby performing glaucoma detection. The invention supplies visualFieldseasy with a validity check on its test data, improving the accuracy of judging whether the tester has glaucoma.
Brief Description of the Drawings
Figure 1 is a flowchart of the reliable self-testing method for glaucoma patients.
Figure 2 is a schematic diagram of the AdaBoost cascade classifier algorithm.
Figure 3 is a block diagram of the pupil-center-to-eye-corner-vector gaze detection subsystem.
Figure 4 shows the detection results and the "cluster circles".
Detailed Description of the Embodiments
The technical solution of the present invention is further described below with reference to the accompanying drawings and embodiments.
Embodiment 1
As shown in Figure 1, a reliable self-testing method for glaucoma patients comprises the following steps:
S1: Face localization: capture a face image, identify the face region by skin-color segmentation, determine the face boundaries, and extract the face;
Skin color is one of the features of the face: within any ethnic group, facial skin color is concentrated and highly similar. The background is usually unlikely to resemble skin color, so the face can be separated from the background by skin color. According to existing research, after conversion to the YCbCr color space, the skin colors of the world's different ethnic groups remain essentially consistent in the Cb-Cr plane and exhibit clustering. A skin-color-based segmentation method can therefore segment out the face.
The face region is identified by skin-color segmentation as follows: in YCbCr space, each pixel is set to black or to white according to the values of the three channels, using the following formula:
pow = m² + n²
where y, cb, and cr are the values of a single image pixel in the Y, Cb, and Cr channels, and value is the binarization result for that pixel: a value of 255 means the pixel is a skin-color point; otherwise it is not.
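The binarization step above can be sketched as follows. Since the patent's exact pow/value formula is not fully preserved in this text (the variables m and n are never defined in what survives), the sketch substitutes the Cb/Cr skin ranges commonly cited in the skin-segmentation literature as an illustrative criterion:

```python
import numpy as np

def skin_mask_ycbcr(ycbcr):
    """Binarize an image already in YCbCr space: 255 = skin-color point, 0 = not.

    Stand-in criterion: the widely used Cb in [77, 127], Cr in [133, 173]
    skin cluster; the patent's own formula is not reproduced here."""
    cb = ycbcr[..., 1].astype(int)
    cr = ycbcr[..., 2].astype(int)
    skin = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
    # White (255) marks skin pixels, black (0) marks everything else.
    return np.where(skin, 255, 0).astype(np.uint8)
```

An erosion pass over this mask, as described next, then removes small background speckles before projection.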
In a specific implementation, after binarization an erosion operation is applied to the binary image to remove part of the background noise, with little effect on the face region.
Determining the face boundaries and extracting the face comprises the following steps:
1) Project the skin-color segmentation result vertically and extract the "plateau" section of the projection, thereby determining the left and right boundaries of the face;
2) Crop the skin-color segmentation result according to the left and right face boundaries;
3) Project the cropped result horizontally as a grayscale projection and extract the "plateau" section, thereby determining the upper and lower boundaries of the face.
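The "plateau" extraction used in steps 1) and 3) can be sketched as follows; the plateau threshold (here, half the projection's maximum) is an illustrative assumption, not a value taken from the patent:

```python
import numpy as np

def plateau_bounds(binary, axis=0, frac=0.5):
    """Project white-pixel counts along `axis` and return the first and last
    index where the projection exceeds `frac` of its maximum, i.e. the
    extent of the 'plateau' and hence the face boundary along that
    direction (axis=0: column projection -> left/right bounds)."""
    proj = (binary == 255).sum(axis=axis)
    thresh = frac * proj.max()
    idx = np.flatnonzero(proj >= thresh)
    return int(idx[0]), int(idx[-1])
```

Calling it twice, first on the columns of the skin mask and then on the rows of the cropped mask, reproduces the two-pass boundary search described above.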
S2: Eye detection: after the face is extracted, the eye region is identified as follows:
1) Run a region-detection algorithm on the face skin-color segmentation result to detect the black regions;
2) Expand the areas of the detected regions appropriately to obtain a set of candidate eye regions. A preliminary candidate region may miss some features of the eye; area expansion compensates for this loss and improves the success rate of eye recognition.
3) As shown in Figure 2, repeatedly crop a portion of each candidate eye region and feed it to the AdaBoost classifier until an eye is detected or the whole candidate region is found to contain no eye;
In AdaBoost, the base classifiers are trained sequentially, and each base classifier is trained on a weighted data set in which the weight of each data point is determined by the performance of the previous classifiers. If a data point was misclassified by the previous classifier, its weight in the current round is increased; if it was classified correctly, its weight is decreased.
4) When an eye is detected in a candidate region, check whether the left and right boundaries of the detection result intersect a black block in the skin-color segmentation result; if so, expand the area accordingly. This mainly addresses the imprecise localization of AdaBoost: the intersections with the black blocks are used for position calibration.
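The AdaBoost reweighting described above can be illustrated with a minimal sketch of one boosting round. This is the classic exponential update; the patent does not spell out which exact variant its classifier uses:

```python
import math

def adaboost_reweight(weights, correct):
    """One AdaBoost weight update.

    `correct` lists, per data point, whether the previous base classifier
    got it right. Misclassified points have their weight multiplied by
    exp(alpha), correctly classified points by exp(-alpha), and the
    weights are then renormalized to sum to 1."""
    err = sum(w for w, c in zip(weights, correct) if not c)
    err = min(max(err, 1e-10), 1 - 1e-10)          # guard degenerate rounds
    alpha = 0.5 * math.log((1 - err) / err)        # classifier vote weight
    new = [w * math.exp(alpha if not c else -alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new], alpha
```

After the update, the misclassified points carry exactly half the total weight, which is what forces the next base classifier to focus on them.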
S3: Extract the pupil-center-to-eye-corner position vector, comprising the following steps:
S3.1: Pupil-center localization: after the eye image is binarized, project the counts of black pixels vertically and horizontally. The vertical black-pixel projection yields the left and right boundaries of the iris; the iris component is then cropped horizontally from the binary eye image, and a new horizontal black-pixel projection on the crop determines the upper and lower boundaries of the iris. The x-coordinate of the pupil center is the midpoint of the left and right iris boundaries; the y-coordinate is the midpoint of the upper and lower iris boundaries;
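Step S3.1 can be sketched as follows, assuming a binary eye image in which iris pixels are 0 and everything else is 255. The boundary criterion used here (any black pixel in a column or row) is a simplification of the patent's projection thresholding:

```python
import numpy as np

def pupil_center(binary_eye):
    """Locate the pupil center by black-pixel projections.

    Vertical projection -> left/right iris bounds; horizontal projection
    of the cropped strip -> top/bottom bounds; the pupil center is the
    midpoint of each pair of bounds."""
    black = binary_eye == 0
    cols = np.flatnonzero(black.any(axis=0))       # columns containing iris
    left, right = cols[0], cols[-1]
    rows = np.flatnonzero(black[:, left:right + 1].any(axis=1))
    top, bottom = rows[0], rows[-1]
    return ((left + right) / 2.0, (top + bottom) / 2.0)
```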
S3.2: Adaptive eye binarization: the center value of the edge-enhancement template is adjusted adaptively with the iris area as the basic criterion, yielding an adaptive binarization result. The procedure is as follows:
1) Count the pixels of the black iris blob, num1;
2) Filter the eye image with an edge-enhancement template whose center value is 10.8, convert it to grayscale, binarize it with OTSU self-balancing binarization, and count the black pixels of the whole binary image, num2;
3) If num2 > num1 * 1.4, go to step 4); otherwise go to step 5);
4) Filter the eye image with an edge-enhancement template whose center value is 10, convert it to grayscale, binarize it with OTSU self-balancing binarization, and apply a dilation operation to remove noise;
5) Using the upper, lower, left, and right iris boundaries, set the corresponding region of the binary image entirely to black, so as to remove the effect of bright ambient light reflected by the iris. The result is the desired binary image;
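The control flow of S3.2 can be sketched as follows. The edge-enhancement filtering itself is elided (the input stands for the already-filtered grayscale image), and the OTSU step is a minimal exhaustive-search version rather than a library call:

```python
import numpy as np

def otsu_threshold(gray):
    """Minimal OTSU: pick the threshold maximizing between-class variance
    (weighted squared difference of the two class means)."""
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        lo, hi = gray[gray < t], gray[gray >= t]
        if lo.size == 0 or hi.size == 0:
            continue
        var = lo.size * hi.size * (lo.mean() - hi.mean()) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def adaptive_binarize(gray, num1):
    """Steps 2)-3) of S3.2 as a sketch: binarize with OTSU, then report
    whether the fallback template (center value 10) is needed because too
    many black pixels survived (num2 > 1.4 * num1)."""
    t = otsu_threshold(gray)
    binary = np.where(gray < t, 0, 255).astype(np.uint8)
    num2 = int((binary == 0).sum())
    fallback_needed = num2 > num1 * 1.4
    return binary, fallback_needed
```

In the full method the fallback branch refilters with the second template and dilates, and step 5) finally blanks the iris rectangle to suppress specular reflections.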
S3.3: Position-vector normalization: after each pupil-center localization, normalize with respect to the distance between the two eye-corner points;
Assume the detected left eye-corner coordinates are (Lx, Ly), the right eye-corner coordinates are (Rx, Ry), and the pupil-center coordinates are (Cx, Cy), and take the position vector of the pupil center relative to the left eye corner as the gaze criterion. The normalized position vector (Δx, Δy) is then obtained as:
Δx = (Cx - Lx) / d, Δy = (Cy - Ly) / d, where d = √((Rx - Lx)² + (Ry - Ly)²) is the distance between the two eye corners.
Whether the gaze direction has moved is then judged from whether (Δx, Δy) changes, and the actual gaze direction is estimated in real time from the value of (Δx, Δy) with the help of principal directions calibrated in advance.
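The normalization in S3.3 reduces to a few lines. This sketch takes the offset of the pupil center from the left eye corner divided by the inter-corner distance, which is the natural reading of the normalization described above:

```python
import math

def normalized_gaze_vector(left_corner, right_corner, pupil_center):
    """Normalized pupil-center-to-eye-corner position vector (S3.3).

    Dividing by the inter-corner distance makes the vector comparable
    across frames even as the face moves toward or away from the camera."""
    lx, ly = left_corner
    rx, ry = right_corner
    cx, cy = pupil_center
    d = math.hypot(rx - lx, ry - ly)
    return (cx - lx) / d, (cy - ly) / d
```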
Embodiment 2
Figure 3 is a block diagram of the pupil-center-to-eye-corner-vector gaze detection subsystem. As the figure shows, the system processes an input image in four main modules: face localization, eye localization, position-vector extraction, and gaze-change judgment.
The face localization module extracts the face from the input color image, mainly by the skin-color segmentation method based on YCbCr space.
The eye localization module delineates the eye within the extracted face, mainly with the AdaBoost classifier.
The pupil-center-to-eye-corner position-vector extraction module obtains the normalized pupil-center-to-eye-corner position vector from the extracted eye image. The pupil center is obtained mainly by the interference-type hybrid projection proposed herein, and the two eye corners are located by a method combining adaptive binarization and ray probing.
The gaze-change judgment module compares the position vector extracted here with previously accumulated position vectors to judge whether there is an obvious change.
When an obvious change in gaze is judged to have occurred, it can be concluded that at that moment the eye was not looking at the position specified by the software, so the glaucoma test data point at the corresponding time is untrustworthy and must be discarded, or the visualFieldseasy main system must retest that point.
A protocol is needed between the visualFieldseasy main system and this system, so that this system can better assist glaucoma detection and provide a reliability criterion for visualFieldseasy's test data.
Protocol: whenever the visualFieldseasy main system obtains a data point, it sends the face image of the user captured at that moment to this system; after processing the image, this system returns a true/false value to the visualFieldseasy main system, indicating that the test data at that moment is trustworthy/untrustworthy.
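The protocol's reply side can be sketched as follows. The image-processing pipeline is elided here, so the function takes the already-extracted position vector together with a calibrated cluster circle (described below in this embodiment) and returns the true/false verdict sent back to the main system:

```python
def validate_data_point(position_vector, cluster_center, cluster_radius):
    """Reliability verdict for one visualFieldseasy data point.

    Returns True (trustworthy) when the extracted gaze vector falls inside
    the cluster circle calibrated for the required fixation direction,
    False (untrustworthy: discard or retest) otherwise."""
    dx = position_vector[0] - cluster_center[0]
    dy = position_vector[1] - cluster_center[1]
    # Compare squared distances to avoid an unnecessary square root.
    return dx * dx + dy * dy <= cluster_radius ** 2
```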
Owing to the efficiency of the PCCV algorithm of the present invention, the user only needs to gaze at each corner of the screen in turn, as instructed, for one to two seconds to obtain a large number of position vectors. Since the accuracy of PCCV exceeds 96%, a valid average position vector can be computed for each of the four screen corners the user fixates.
The main function of the face localization module is to locate the face, preventing a complex background from interfering with subsequent eye localization. The input color photograph is first converted to YCbCr space, the converted image is binarized according to skin-color segmentation theory, and vertical and horizontal projections of the white-pixel counts of the binary image then determine the face region.
The main function of the eye localization module is to locate the eye within the color face image; only one eye needs to be identified. To improve efficiency, this module converts the color face image to YCbCr space, binarizes it by skin-color segmentation, and determines candidate eye regions by region segmentation of the black blobs. Removing some background noise from the binary image first with a dilation operation improves the efficiency of region segmentation and reduces the number of candidate eye regions. The parts of the color image corresponding to the candidate eye regions are then passed to the AdaBoost classifier to identify the true eye. The classifier's localization is somewhat imprecise, so its result is combined with the binary image for region expansion.
The main function of the position-vector extraction module is to obtain the position vector. To extract the pupil center, the color eye image is first filtered with a 3x3 template with center value 12 and border value -1 for edge enhancement and nonlinear brightness boosting; the filtered image is converted to grayscale and binarized with OTSU, yielding a "clean" black iris blob with the other parts of the eye essentially removed. A vertical projection of the binary image then determines the left and right iris boundaries, which in turn constrain the horizontal projection of the binary image so that the upper and lower boundaries can be determined; the center of the iris is taken as the pupil center. To locate the eye corners, the color eye image is filtered with a 3x3 template with border value -1 and a variable center value for edge enhancement and nonlinear brightness boosting, converted to grayscale, and binarized with OTSU, giving an adaptive binarization result; in general only the black blobs of the iris and the eye corners remain in this result, the other components being removed, and ray probing then determines the positions of the eye corners. After the positions of the pupil center and the eye corners are determined, the position vector of the pupil center relative to one of the eye corners can be computed; to make this vector comparable across time, it is normalized by the distance between the two eye corners.
The gaze-change judgment module compares the position vector extracted from the current color input frame with the position vectors accumulated earlier, determines whether the vector has changed obviously, and thereby determines whether the gaze has changed. When the gaze changes, the main glaucoma-testing system should be told that its test data at that moment is untrustworthy and must be discarded, or that the test point must be measured again.
The forward field of vision of a single eye can be approximated as a rectangle. We therefore divided the visual field into 49 areas and made them into a test template. During the test, the tester was asked to gaze at 9 symmetric colored areas, one at a time, while the pupil-center-to-eye-corner-vector detection subsystem recorded the tester's real-time pupil-center-to-eye-corner position-vector data.
When a person gazes at one point for a long time, attention inevitably wanders within a certain area around that point, so the position vectors computed by this subsystem form a cluster. To distinguish the two gaze states, the eye wandering within the designated area versus the eye looking toward another area, the boundary of the cluster formed by the position vectors must be computed. When a measured position vector falls inside the cluster for the designated gaze direction, the eye is considered to be gazing in that direction as required; otherwise the eye is considered not to be gazing in the required direction, and the corresponding visualFieldseasy test datum is untrustworthy and must be discarded or remeasured. A "cluster circle" is obtained by computing the mean of the position vectors within the cluster together with the cluster radius. The detection results and the "cluster circles" are shown in Figure 4. Table 1 shows the gaze-prediction performance of the method.
Table 1
In Figure 4, colored points that do not fall inside the corresponding "cluster circle" are treated as misjudged points; the analysis of misjudgments during the test is shown in Table 1. The data show that the method judges the direction of gaze with high accuracy, so using it to provide a validity criterion for glaucoma test data is practical and feasible.
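The "cluster circle" computation can be sketched as follows. Taking the radius as the largest distance from the mean to any calibration vector is one plausible reading of "cluster radius", which the text does not define precisely:

```python
import math

def cluster_circle(vectors):
    """Compute a 'cluster circle' from calibration position vectors.

    Center = mean of the vectors; radius = maximum distance from the
    center to any calibration vector (an assumed definition)."""
    n = len(vectors)
    cx = sum(v[0] for v in vectors) / n
    cy = sum(v[1] for v in vectors) / n
    r = max(math.hypot(v[0] - cx, v[1] - cy) for v in vectors)
    return (cx, cy), r
```

A new position vector is then tested for membership in the circle to decide whether the eye was still fixating the designated area.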
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710139010.7A (CN106934365A) | 2017-03-09 | 2017-03-09 | A kind of reliable glaucoma patient self-detection method |
| Publication Number | Publication Date |
|---|---|
| CN106934365A | 2017-07-07 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201710139010.7A (pending) | A kind of reliable glaucoma patient self-detection method | 2017-03-09 | 2017-03-09 |
| Country | Link |
|---|---|
| CN (1) | CN106934365A (en) |
Legal Events

| Code | Title | Description |
|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2017-07-07 |