Pupil quick positioning method for color iris recognition
Technical Field
The invention relates to the technical field of identity recognition, and in particular to a method for quickly positioning the pupil in a color iris image during iris recognition.
Background
The human iris is the ring-shaped region that surrounds the dark pupil and is itself surrounded by the white sclera. Iris recognition first requires locating the iris in the iris image, including positioning the inner circle and the outer circle of the iris ring, in order to extract the image of the iris ring and lay a foundation for subsequent processing. Inner circle positioning means finding the center point and boundary circle of the pupil, i.e. the inner boundary of the iris ring. Outer circle positioning means finding the outer boundary of the iris ring.
Conventional iris inner circle positioning mainly performs a binarization transform on a grayscale image first, then applies an edge detection operator such as Canny or Sobel to the binarized image to produce edge detection data, and finally analyzes and computes on this data to find a circle that meets the conditions. Typically, tens or even hundreds of such "circles" are obtained, and a relatively large amount of computation is required to filter out the true inner circle of the iris ring.
Traditional iris recognition technology is based on grayscale images acquired by a dedicated infrared camera (paired with a corresponding infrared light source). In contrast, a color iris image can be captured under normal illumination by the camera of a mass-market mobile terminal such as a mobile phone or tablet computer. A technology based on color iris image recognition should therefore have a wider range of applications.
Disclosure of Invention
The invention exploits the color space characteristics of the color iris image: a color space transformation is applied to the image to obtain a binary image in which the pupil features are obvious and robust and other noise signals are largely suppressed. This greatly reduces the amount of computation required for edge detection and iris inner circle positioning, and markedly improves the efficiency of edge detection and of locating and extracting the inner circle of the iris.
The technical scheme adopted by the invention is as follows: the pupil quick positioning method for color iris recognition comprises the following processing steps:
P1, converting the color iris image into a YCbCr color space image;
P2, separating the Cr component from the YCbCr color space image to obtain a Cr map;
P3, carrying out binarization conversion on the Cr map to obtain a binary map Cr_bw with clear edges;
P4, performing edge detection on the Cr_bw map to obtain an edge detection map Cr_edge;
P5, extracting all circle-fitting data from the edge detection map Cr_edge, and filtering out the data of the inner circle of the iris: the coordinates of the center point of the pupil and its radius.
Further, the specific processing procedure of P1 for converting the color iris image into a YCbCr color space image is as follows:
P1-1, calculating the spatial dimensions of the RGB matrix of the color iris image: number of rows, number of columns, and dimension;
P1-2, according to the spatial dimensions calculated in P1-1, separating the RGB matrix into three component matrices: the R component, the G component and the B component;
P1-3, converting the RGB matrix of the color iris image to generate the luminance component of the YCbCr color space. The specific conversion process is: multiplying the R component, the G component and the B component obtained in P1-2 by the elements of the parameter vector pa_Y respectively, and adding the Y component correction value cor_Y to all elements for correction, to obtain the luminance component Y matrix of the YCbCr color space; the processing expression is:
luminance component Y = (pa_Yr × R + pa_Yg × G + pa_Yb × B) + cor_Y;
P1-4, converting the RGB matrix of the color iris image to generate the blue-difference component of the YCbCr color space. The specific conversion process is: multiplying the R component, the G component and the B component obtained in P1-2 by the elements of the parameter vector pa_Cb respectively, and adding the Cb component correction value cor_Cb to all elements for correction, to obtain the blue-difference component Cb matrix of the YCbCr color space; the processing expression is:
blue-difference component Cb = (pa_Cbr × R + pa_Cbg × G + pa_Cbb × B) + cor_Cb;
P1-5, converting the RGB matrix of the color iris image to generate the red-difference component of the YCbCr color space. The specific conversion process is: multiplying the R component, the G component and the B component obtained in P1-2 by the elements of the parameter vector pa_Cr respectively, and adding the Cr component correction value cor_Cr to all elements for correction, to obtain the red-difference component Cr matrix of the YCbCr color space; the processing expression is:
red-difference component Cr = (pa_Crr × R + pa_Crg × G + pa_Crb × B) + cor_Cr;
P1-6, combining the luminance component Y matrix, the Cb matrix and the Cr matrix obtained in P1-3, P1-4 and P1-5 to obtain the complete result of converting the RGB matrix into the YCbCr color space.
Specifically, the parameter vector pa_Y of P1-3 is:
pa_Y = [pa_Yr, pa_Yg, pa_Yb] = [0.299, 0.587, 0.114].
Specifically, the parameter vector pa_Cb of P1-4 is:
pa_Cb = [pa_Cbr, pa_Cbg, pa_Cbb] = [-0.1687, -0.3313, 0.5].
Specifically, the parameter vector pa_Cr of P1-5 is:
pa_Cr = [pa_Crr, pa_Crg, pa_Crb] = [0.5, -0.4187, -0.0813].
Specifically, the Y component correction value cor_Y of P1-3 is 16.
Specifically, the Cb component correction value cor_Cb of P1-4 is 128.
Specifically, the Cr component correction value cor_Cr of P1-5 is 128.
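The conversion of steps P1-1 through P1-6, using the parameter vectors and correction values above, can be sketched as follows. This is a minimal NumPy sketch assuming 8-bit RGB input; the names rgb_to_ycbcr, PA_Y, COR_Y, etc. are illustrative, not part of the disclosure.

```python
import numpy as np

# Coefficients from the disclosure (P1-3 .. P1-5), mirroring its pa_*/cor_* notation.
PA_Y  = np.array([0.299, 0.587, 0.114]);   COR_Y  = 16
PA_CB = np.array([-0.1687, -0.3313, 0.5]); COR_CB = 128
PA_CR = np.array([0.5, -0.4187, -0.0813]); COR_CR = 128

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 RGB array to stacked Y, Cb, Cr planes."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]       # P1-2: split channels
    y  = PA_Y[0] * r + PA_Y[1] * g + PA_Y[2] * b + COR_Y      # P1-3
    cb = PA_CB[0] * r + PA_CB[1] * g + PA_CB[2] * b + COR_CB  # P1-4
    cr = PA_CR[0] * r + PA_CR[1] * g + PA_CR[2] * b + COR_CR  # P1-5
    return np.stack([y, cb, cr], axis=-1)                     # P1-6: combine

# A neutral gray pixel has no chroma offset, so Cb = Cr = 128.
gray = np.full((1, 1, 3), 128.0)
print(rgb_to_ycbcr(gray)[0, 0])  # → [144. 128. 128.]
```

Note that the Cr plane produced here is exactly the map separated in step P2.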
Further, the specific processing procedure of P3 for converting the Cr map into the binary map Cr_bw by binarization is as follows:
P3-1, calculating the optimal segmentation threshold of the Cr map output by P2, taking the iris pupil as the foreground object, so that the variance between the iris pupil as foreground and the background is maximized and the iris pupil object is highlighted;
P3-2, carrying out image binarization segmentation on the Cr map using the segmentation threshold obtained in P3-1, namely setting foreground pixels smaller than the segmentation threshold to "0" and background pixels greater than or equal to the segmentation threshold to "255";
P3-3, carrying out 0/1 binarization conversion on the binarized grayscale map obtained in P3-2 to obtain the binary map Cr_bw.
Further, the optimal segmentation threshold of P3-1, with the iris pupil of the Cr map as the foreground object, is calculated as follows:
P3-1-1, generating a histogram of the Cr map;
P3-1-2, carrying out smoothing processing on the histogram of the Cr map;
P3-1-3, calculating the maximum gray value and the minimum gray value of the smoothed histogram of the Cr map, which serve as the boundary values for the subsequent calculation;
P3-1-4, calculating the mass moment of each gray value, i.e. each gray value multiplied by the number of pixels having that gray value;
P3-1-5, calculating the between-class variance of the histogram of the Cr map at each gray level, i.e. the degree of separation between foreground and background obtained when splitting at that gray level;
P3-1-6, selecting the maximum among the variances at each gray level, and taking the gray level corresponding to that maximum variance as the optimal segmentation threshold of P3-1 with the iris pupil of the Cr map as the foreground object.
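The threshold search of P3-1-1 through P3-1-6 is, in essence, Otsu's between-class-variance maximization. A minimal sketch follows, assuming an 8-bit Cr map; the histogram smoothing of P3-1-2 is omitted for brevity, and the name otsu_threshold is illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance (P3-1)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)  # P3-1-1
    total = hist.sum()
    levels = np.arange(256)
    lo, hi = int(gray.min()), int(gray.max())   # P3-1-3: search bounds
    best_t, best_var = lo, -1.0
    for t in range(lo + 1, hi + 1):
        w0 = hist[:t].sum(); w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * hist[:t]).sum() / w0   # P3-1-4: mass moments
        mu1 = (levels[t:] * hist[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2           # P3-1-5: between-class variance
        if var > best_var:
            best_var, best_t = var, t              # P3-1-6: keep the maximum
    return best_t
```

Usage: `t = otsu_threshold(cr_map)`, after which pixels below `t` are the foreground per P3-2.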
Further, the method of P4 for detecting edges in the binary map Cr_bw comprises the following processes:
P4-1, carrying out image filtering processing on the binary map Cr_bw to remove noise signals;
P4-2, calculating the amplitude and direction of the gradient of the binary map Cr_bw after the image denoising processing is finished;
P4-3, carrying out non-maximum suppression on the gradient amplitude;
P4-4, edge detection and connection using a dual-threshold algorithm.
Further, the specific processing method of P4-1 for performing image filtering on the binary map Cr_bw is as follows:
P4-1-1, determining a suitable filtering template, including its size and standard deviation coefficient;
P4-1-2, generating a filter mask matrix from the filtering template;
P4-1-3, performing convolution calculation between the filter mask matrix and the binary map Cr_bw image matrix:
firstly, keeping the rows unchanged and varying the columns, performing the convolution operation in the horizontal direction;
secondly, on the obtained result, keeping the columns unchanged and varying the rows, performing the convolution operation in the vertical direction;
P4-1-4, removing abnormal element values exceeding the upper peak limit from the image matrix of the binary map Cr_bw after the convolution calculation, obtaining a smoother binary map Cr_bw in which single or isolated-block noise signals have been filtered out.
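The two-pass convolution of P4-1-3 can be sketched with a separable 1-D kernel. The kernel values below are an assumption, chosen so that their outer product approximately reproduces the 3 × 3 mask [0.075, 0.124, 0.075; 0.124, 0.204, 0.124; 0.075, 0.124, 0.075] given later in the embodiment; edge padding is likewise an assumed detail.

```python
import numpy as np

# Assumed 1-D kernel: outer(K1D, K1D) ~ the embodiment's 3x3 mask; sums to 1.
K1D = np.array([0.2738, 0.4524, 0.2738])

def smooth(img):
    """Separable smoothing: horizontal pass (rows fixed, columns vary),
    then vertical pass (columns fixed, rows vary), as in P4-1-3."""
    img = np.asarray(img, dtype=np.float64)
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    # horizontal pass
    h = sum(K1D[i] * pad[1:-1, i:i + W] for i in range(3))
    pad = np.pad(h, 1, mode="edge")
    # vertical pass
    return sum(K1D[i] * pad[i:i + H, 1:-1] for i in range(3))
```

Because the kernel sums to 1, a constant region is left unchanged while isolated noise pixels are spread and attenuated.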
Further, the processing procedure of P4-2 for calculating the gradient, gradient amplitude and gradient direction of each pixel in the Cr_bw map is as follows:
P4-2-1, establishing three zero-valued matrices of the same size as the Cr_bw image matrix, as follows:
① the X-direction gradient value matrix Ix(x, y);
② the Y-direction gradient value matrix Iy(x, y);
③ the gradient amplitude matrix M(x, y) of the target image matrix.
P4-2-2, calculating the gradient of each pixel in the Cr_bw map, as shown in the following expressions:
gradient of element Cr_bw(x, y) in the X direction: Ix(x, y) = I(x+1, y) - I(x-1, y);
gradient of element Cr_bw(x, y) in the Y direction: Iy(x, y) = I(x, y+1) - I(x, y-1).
P4-2-3, calculating the gradient amplitude M of each pixel in the Cr_bw map, as shown in the following expression:
gradient amplitude of element Cr_bw(x, y): M(x, y) = √(Ix(x, y)² + Iy(x, y)²).
P4-2-4, calculating the gradient direction angle θ of each pixel in the Cr_bw map, as shown in the following expression:
gradient direction angle of element Cr_bw(x, y): θ(x, y) = arctan(Iy(x, y) / Ix(x, y)).
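The gradient computation of P4-2 can be sketched as follows, assuming x indexes rows and y indexes columns, and leaving border pixels at zero as in the zero-valued matrices of P4-2-1. `arctan2` is used in place of plain `arctan` so that Ix = 0 is handled safely.

```python
import numpy as np

def gradients(img):
    """Central-difference gradients, amplitude and direction (P4-2).
    Border pixels remain zero, matching the zero-valued matrices of P4-2-1."""
    img = np.asarray(img, dtype=np.float64)
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[1:-1, :] = img[2:, :] - img[:-2, :]   # I(x+1, y) - I(x-1, y)
    Iy[:, 1:-1] = img[:, 2:] - img[:, :-2]   # I(x, y+1) - I(x, y-1)
    M = np.hypot(Ix, Iy)                     # sqrt(Ix^2 + Iy^2)
    theta = np.arctan2(Iy, Ix)               # gradient direction angle
    return Ix, Iy, M, theta
```

On a linear ramp the interior gradient is constant and the direction points along the ramp, as expected.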
Further, the specific processing method of P4-3 for performing non-maximum suppression on the binary map Cr_bw is as follows:
P4-3-1, establishing a zero-valued matrix K(x, y) of the same size as the Cr_bw image matrix;
P4-3-2, reading all pixels of the gradient amplitude matrix M(x, y) by cyclic traversal, and judging whether the gradient value of the current pixel is 0;
P4-3-3, if the gradient value of the current pixel of the gradient amplitude matrix M(x, y) is 0, assigning 0 to the corresponding element of K(x, y);
P4-3-4, if the gradient value of the current pixel of the gradient amplitude matrix M(x, y) is not 0, comparing it with the gradient values of the neighboring pixels along the X and Y gradient directions, keeping M(x, y) for the pixel screened out as the maximum, and assigning 0 to the other, smaller pixels;
P4-3-5, assigning the screened maximum pixel value to the current element of K(x, y), until all pixels have been traversed and the non-maximum suppression result K(x, y) is obtained.
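The non-maximum suppression of P4-3 can be sketched as below. The quantization of the gradient direction into four sectors, and the skipping of border pixels, are assumed implementation details; the disclosure only specifies comparing the current pixel with its neighbors along the gradient direction.

```python
import numpy as np

def non_max_suppression(M, theta):
    """Keep a pixel only if its gradient amplitude is the maximum among its
    two neighbours along the quantized gradient direction (P4-3)."""
    K = np.zeros_like(M)                        # P4-3-1: zero-valued matrix
    angle = (np.rad2deg(theta) + 180.0) % 180.0  # fold direction into [0, 180)
    H, W = M.shape
    for x in range(1, H - 1):                   # P4-3-2: traverse (borders skipped)
        for y in range(1, W - 1):
            if M[x, y] == 0:                    # P4-3-3: zero gradient stays zero
                continue
            a = angle[x, y]
            if a < 22.5 or a >= 157.5:          # neighbours along the direction
                n1, n2 = M[x, y - 1], M[x, y + 1]
            elif a < 67.5:
                n1, n2 = M[x - 1, y + 1], M[x + 1, y - 1]
            elif a < 112.5:
                n1, n2 = M[x - 1, y], M[x + 1, y]
            else:
                n1, n2 = M[x - 1, y - 1], M[x + 1, y + 1]
            if M[x, y] >= n1 and M[x, y] >= n2:
                K[x, y] = M[x, y]               # P4-3-4/5: keep local maximum
    return K
```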
Further, the specific processing method of P4-4 for detecting and connecting edges using a dual-threshold algorithm is as follows:
P4-4-1, selecting a suitable high threshold and a suitable low threshold according to the image;
P4-4-2, cyclically traversing all pixels of the binary map Cr_bw after non-maximum suppression;
P4-4-3, if the gradient value of the current pixel is higher than the high threshold, keeping it;
P4-4-4, if the gradient value of the current pixel is lower than the low threshold, discarding it;
P4-4-5, if the gradient value of the current pixel lies between the high threshold and the low threshold, searching the gradient values of the neighboring pixels: if one is higher than the high threshold, keeping the current pixel, otherwise discarding it.
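The dual-threshold step can be sketched as standard hysteresis linking. The sketch below grows edges outward from strong pixels, which is one common way to realize "searching pixel gradient values from adjacent pixels"; the breadth-first linking strategy and the threshold values are assumptions.

```python
import numpy as np
from collections import deque

def hysteresis(K, low, high):
    """Double-threshold edge linking (P4-4): pixels >= high are kept, pixels in
    [low, high) survive only if 8-connected to a kept pixel, the rest are dropped."""
    strong = K >= high
    weak = (K >= low) & ~strong
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))         # seed the search with strong pixels
    H, W = K.shape
    while q:
        x, y = q.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < H and 0 <= ny < W and weak[nx, ny] and not edges[nx, ny]:
                    edges[nx, ny] = True        # weak pixel connected to an edge
                    q.append((nx, ny))
    return edges
```

Isolated weak responses, not connected to any strong pixel, are discarded, which is exactly the noise-rejection property the dual-threshold step is used for.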
Further, the processing method of P5 for filtering and extracting the pupil boundary circle data is as follows: extract all circle-fitting data from the edge detection map Cr_edge, compute one by one an array whose elements are the fitted circle center coordinates and radius lengths, filter out from this array the iris pupil positioning result that meets the conditions, and output its center point coordinates and radius.
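Step P5 leaves the circle-fitting method open; one simple way to fit a circle to the pupil edge points of Cr_edge is the algebraic (Kåsa) least-squares fit sketched below. The choice of method is an assumption, not prescribed by the disclosure.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method) to edge-point coordinates.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in the LS sense."""
    xs = np.asarray(xs, dtype=np.float64)
    ys = np.asarray(ys, dtype=np.float64)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx2, cy2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = cx2 / 2.0, cy2 / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r                     # pupil center coordinates and radius
```

Applied to the single dominant edge contour of Cr_edge, this directly yields the output of P5: the center point coordinates and radius of the pupil.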
The invention has the beneficial effects that:
the pupil in the binary image Cr _ bw of the invention is quite prominent, and most of the time, only one circle corresponding to the pupil exists in the image, and the data filtering and extraction of the circle are quite simple and fast. Compared with the calculation and filtration of a large amount of 'circle' data in the traditional iris inner circle extraction process, the processing efficiency is obviously improved.
Drawings
FIG. 1 is a flow chart of a pupil fast positioning method for color iris recognition according to the present invention;
FIG. 2 is a schematic diagram of an example of the original input of a color iris image according to the present invention;
FIG. 3 is a diagram illustrating the result of converting a color iris image to YCbCr color space according to the present invention;
FIG. 4 is a diagram illustrating the result of Cr component extracted from YCbCr color space according to the present invention;
FIG. 5 is a diagram illustrating a binarization result of a Cr component map according to the present invention;
FIG. 6 is a schematic view of the processing flow of the Cr component map edge detection calculation according to the present invention;
FIG. 7 is a schematic diagram of the edge detection result of the Cr component diagram according to the present invention;
FIG. 8 is a schematic view of the processing flow of the image filtering process performed on the binary map Cr_bw in color iris recognition according to the present invention;
FIG. 9 is a diagram illustrating a filtering template for image filtering according to the present invention;
FIG. 10 is a diagram illustrating the convolution calculation method of the image matrix and the filter template in the image filtering process according to the present invention;
FIG. 11 is a schematic diagram illustrating the output result of the image filtering process performed on the binary map Cr_bw according to the present invention;
FIG. 12 is a schematic view of the processing flow of gradient amplitude and direction detection for the binary map Cr_bw according to the present invention;
FIG. 13 is a schematic diagram illustrating the processing result of detecting the gradient amplitude and direction of the binary map Cr_bw according to the present invention;
FIG. 14 is a schematic view of the process flow of non-maximum suppression of the binary map Cr_bw according to the present invention;
FIG. 15 is a diagram illustrating the result of the non-maximum suppression process performed on the binary map Cr_bw according to the present invention;
fig. 16 is a diagram illustrating the pupil location result according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the following embodiments and the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided for illustration only and are not intended to limit the invention as defined by the appended claims; any modifications, equivalents, improvements, etc. made within the spirit and principles of the present invention should be construed as falling within the scope of the present invention.
Example 1: fig. 1 is a processing flow chart of the fast pupil location method for color iris recognition according to the present invention. As shown in fig. 1, the method for rapidly positioning a pupil by color iris recognition of the present invention comprises the following processing steps:
P1, converting the color iris image into a YCbCr color space image;
P2, separating the Cr component from the YCbCr color space image to obtain a Cr map;
P3, carrying out binarization conversion on the Cr map to obtain a binary map Cr_bw with high edge definition;
P4, carrying out edge detection on the Cr_bw map to obtain an edge detection map Cr_edge;
P5, extracting all circle-fitting data from the edge detection map Cr_edge, and filtering out the data of the inner circle of the iris: the coordinates of the center point of the pupil and its radius.
Example 2: referring to fig. 2 and 3, fig. 2 is a schematic diagram of an example of an original input color iris image, and fig. 3 is a schematic diagram of the result of converting the color iris image shown in fig. 2 into the YCbCr color space. As shown in fig. 2, the input of the present invention is a color iris image; in this embodiment, the image is captured by a common smartphone widely used in the current market, with auxiliary illumination from a suitable visible light source applied during shooting. As shown in fig. 3, the color iris image of fig. 2 is converted into a result map in the YCbCr color space. YCbCr is one of the color spaces commonly used in digital photography systems. An image in the YCbCr color space is composed of three components: a Y component, a Cb component and a Cr component, where Y is the luminance (luma) component, Cb is the blue-difference chroma component, and Cr is the red-difference chroma component.
Example 3: fig. 4 is a schematic diagram showing the result of extracting the Cr component, i.e. the red-difference chroma component, from the YCbCr color space image shown in fig. 3 according to the present invention.
Example 4: fig. 5 shows the binary image with clear edges obtained by the binarization conversion of the Cr component map shown in fig. 4 according to the present invention. As shown in fig. 5, after the binarization conversion the obtained binary image shows a very clear "pupil" pattern, and noise signals other than the pupil are essentially eliminated.
Example 5: referring to fig. 6 and 7, the processing flow and processing result of the edge detection calculation on the binary map Cr_bw shown in fig. 5 according to the present invention are shown. Fig. 6 is a schematic flow chart of the edge detection calculation performed on the binary map Cr_bw of fig. 5, and fig. 7 is a schematic diagram of the result of that edge detection calculation. Since the binary map shown in fig. 5 filters out most of the noise patterns outside the pupil, the edge detection result is very simple and clear. The method for detecting edges in the binary map Cr_bw comprises the following processes:
P4-1, carrying out image filtering processing on the binary map Cr_bw to filter out noise signals;
P4-2, calculating the amplitude and direction of the gradient of the binary map Cr_bw after the image filtering processing;
P4-3, carrying out non-maximum suppression on the gradient amplitude;
P4-4, edge detection and connection using a dual-threshold algorithm.
Example 6: see fig. 8, 9, 10 and 11. Fig. 8 is a schematic view of the processing flow of the image filtering performed on the binary map Cr_bw shown in fig. 5 according to the present invention, fig. 9 is a schematic view of the filtering template according to the present invention, fig. 10 is a schematic view of the convolution calculation method between the image matrix and the filtering template, and fig. 11 is a schematic view of the output result of the image filtering performed on the binary map Cr_bw. The image filtering process, also called smoothing filtering, has two functions: first, smoothing the image; second, eliminating image noise. In the present embodiment, the main purpose of the image filtering is to remove image noise. In the binary map Cr_bw of the color iris image, owing to the image itself and to the conversion into the YCbCr color space followed by binarization of the Cr component map, some non-target objects such as small spots and even isolated pixels may exist in and around the pupil in the image. The basic idea of the processing is as follows: a filter mask template is introduced, and a matrix convolution operation is performed between the filter mask template and the image to be filtered, which can both eliminate image noise signals and enhance the sharpness of the image edges. P4-1 performs image filtering on the binary map Cr_bw; the specific processing method is as follows:
In the first step, an appropriate filter mask template is determined, including its size and standard deviation coefficient. The filter mask template is a matrix; in this embodiment, a 3 × 3 matrix is adopted.
In the second step, the filter mask template is used as the filter mask matrix. The filter mask matrix adopted in this embodiment is shown in fig. 9 and is the 3 × 3 matrix below, in which each template position (X, Y) holds the corresponding weight:
[(X-1,Y-1), (X-1,Y), (X-1,Y+1);
(X,Y-1), (X,Y), (X,Y+1);
(X+1,Y-1), (X+1,Y), (X+1,Y+1)]
= [0.075, 0.124, 0.075;
0.124, 0.204, 0.124;
0.075, 0.124, 0.075]
In the third step, the convolution calculation is performed between the filter mask matrix and the binary map Cr_bw image matrix. Fig. 10 is a schematic diagram of the convolution calculation method according to the present invention. The calculation method is shown in the following expression:
(x,y) = (x-1,y-1)*(X-1,Y-1) + (x-1,y)*(X-1,Y) + (x-1,y+1)*(X-1,Y+1)
+ (x,y-1)*(X,Y-1) + (x,y)*(X,Y) + (x,y+1)*(X,Y+1)
+ (x+1,y-1)*(X+1,Y-1) + (x+1,y)*(X+1,Y) + (x+1,y+1)*(X+1,Y+1)
As shown in fig. 10, in this embodiment the grayscale value of the pixel Cr_bw(x, y) = Cr_bw(3,3) is 38, and after the convolution operation of the following expression the value of this pixel becomes 33:
Cr_bw(3,3)=35*0.075+36*0.124+31*0.075
+31*0.124+38*0.204+28*0.124
+31*0.075+34*0.124+21*0.075
=33
The convolution calculation process is as follows:
firstly, keeping the rows unchanged and varying the columns, the convolution operation is performed in the horizontal direction on all matrix elements of each row;
secondly, keeping the columns unchanged and varying the rows, the convolution operation is continued in the vertical direction until all rows and columns have been traversed;
finally, abnormal element values exceeding the upper peak limit are removed from the binary map Cr_bw image matrix after the convolution calculation, obtaining a binary map Cr_bw with the image noise signals removed and a higher edge gradient of the iris pupil image.
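The worked example above can be checked numerically; since the mask is symmetric, the convolution at the center pixel is just the element-wise product of the mask and the 3 × 3 neighborhood, summed. The neighborhood gray values are taken from the worked example.

```python
import numpy as np

# The 3x3 filter mask from the embodiment and the 3x3 neighbourhood around
# Cr_bw(3,3) used in the worked example above.
mask = np.array([[0.075, 0.124, 0.075],
                 [0.124, 0.204, 0.124],
                 [0.075, 0.124, 0.075]])
patch = np.array([[35, 36, 31],
                  [31, 38, 28],
                  [31, 34, 21]], dtype=np.float64)

# Element-wise multiply and sum: the filtered value at the centre pixel.
result = (mask * patch).sum()   # 32.598, which rounds to 33
print(round(result))  # → 33
```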
Example 7: see fig. 12 and 13. Fig. 12 is a schematic view of the flow of the processing for detecting the gradient amplitude and direction of the binary map Cr_bw shown in fig. 11 according to the present invention, and fig. 13 is a schematic view of the result of that processing. In image recognition, the gradient direction of an image is the direction in which the function f(x, y) changes fastest: where the image contains an edge, the gradient value is large; conversely, in smoother parts of the image, the change in gray value is small and the corresponding gradient is also small. In this embodiment, the processing procedure for calculating the gradient, gradient amplitude and direction of each pixel in the Cr_bw map is as follows:
In the first step, three zero-valued matrices of the same size as the Cr_bw image matrix are established, as follows:
① the X-direction gradient value matrix Ix(x, y);
② the Y-direction gradient value matrix Iy(x, y);
③ the gradient amplitude matrix M(x, y) of the target image matrix.
In the second step, the gradient of each pixel in the Cr_bw map is calculated, as shown in the following expressions:
gradient of element Cr_bw(x, y) in the X direction: Ix(x, y) = I(x+1, y) - I(x-1, y);
gradient of element Cr_bw(x, y) in the Y direction: Iy(x, y) = I(x, y+1) - I(x, y-1).
In the third step, the gradient amplitude M of each pixel in the Cr_bw map is calculated, as shown in the following expression:
gradient amplitude of element Cr_bw(x, y): M(x, y) = √(Ix(x, y)² + Iy(x, y)²).
In the fourth step, the gradient direction angle θ of each pixel in the Cr_bw map is calculated, as shown in the following expression:
gradient direction angle of element Cr_bw(x, y): θ(x, y) = arctan(Iy(x, y) / Ix(x, y)).
Example 8: see fig. 14 and 15. Fig. 14 is a schematic flow chart of the non-maximum suppression processing performed on the binary map Cr_bw shown in fig. 13 according to the present invention, and fig. 15 is a schematic diagram of the result of that processing. In this embodiment, the non-maximum suppression processing finds the maximum values in the gradient matrix generated by the gradient calculation on the image Cr_bw, and eliminates the element values other than the maxima. The basic idea is as follows: taking the pixel being calculated as the reference center, examine the neighboring pixel points along the gradient direction of that point; according to the result, keep the point with the maximum value and reject the points with non-maximum values. The processing procedure for non-maximum suppression of the binary map Cr_bw is as follows:
In the first step, a zero-valued matrix K(x, y) of the same size as the Cr_bw image matrix is established.
In the second step, all pixels of the gradient amplitude matrix M(x, y) are read by cyclic traversal, and whether the gradient value of the current pixel is 0 is judged.
In the third step, if the gradient value of the current pixel of the gradient amplitude matrix M(x, y) is 0, 0 is assigned to the corresponding element of K(x, y).
In the fourth step, if the gradient value of the current pixel of the gradient amplitude matrix M(x, y) is not 0, it is compared with the gradient values of the neighboring pixels along the X and Y gradient directions; M(x, y) is kept for the pixel screened out as the maximum, and 0 is assigned to the other, smaller pixels.
In the fifth step, the screened maximum pixel value is assigned to the current element of K(x, y), until all pixels have been traversed and the non-maximum suppression result K(x, y) is obtained.
Example 9: fig. 16 is a schematic diagram showing the output result of the present invention after the pupil positioning calculation is performed on the color iris image shown in fig. 2. As shown in fig. 16, the pupil positioning result of the color iris image is calculated using the edge detection result data shown in fig. 15: the coordinates of the center point of the pupil and its radius, thereby quickly and accurately locating the pupil position in the color iris.