Technical Field
The present invention relates to the fields of biometric recognition technology, image recognition, and deep learning, and in particular to a finger vein recognition method based on the Center Loss loss function.
Background
The rapid development of information technology has brought great convenience to people's lives, but information security problems have become increasingly prominent. Because human biometric traits are difficult to copy and cannot be lost, they offer higher stability and security, and in recent years biometric recognition has been widely applied in identity authentication and information security. The main biometric traits include fingerprints, faces, irises, and finger veins. Iris recognition has a high acquisition cost, the equipment is difficult to miniaturize, and shining an instrument directly into the eyes gives a poor user experience, which hinders acquisition. Face recognition is easy to acquire, but it cannot distinguish identical twins, and facial features are unstable and can be forged by makeup or plastic surgery, reducing accuracy. Fingerprint recognition works by touch and is demanding on the environment: wet or worn fingers cannot be recognized, and fingerprint traces are easily left behind and can be copied, reducing security.
Vein recognition is an emerging biometric technology. Many blood vessels are distributed inside the human finger. In biomedicine, near-infrared light in the 700 nm to 900 nm band can penetrate the skin of the finger and is absorbed by the hemoglobin in venous blood, so less light is transmitted through the vein regions, while the finger muscle, bone, and other tissues attenuate it far less, producing an image of the finger vein pattern. Medical research shows that every person's finger vein pattern is unique, even between identical twins and even between different fingers of the same individual, and that vein characteristics remain stable throughout adulthood, so the pattern can uniquely determine a person's identity. Finger vein recognition is non-contact, convenient to acquire, low in equipment cost, and well accepted by users; because the veins are hidden inside the finger and recognition requires a living body, there is essentially no possibility of leakage or counterfeiting.
Finger veins therefore offer high security, high accuracy, uniqueness, non-contact acquisition, and small template files, and in recent years finger vein recognition has become a research hotspot in biometrics. Finger vein recognition typically comprises the following steps: ROI extraction from the finger vein image, image enhancement, feature extraction, and matching. ROI extraction removes background interference and noise from the finger vein image and extracts the region where the vein features are clear. Image enhancement strengthens the vein pattern, improves the contrast between the veins and the background, and reduces the influence of noise. Feature extraction expresses the image features as feature vectors, using either traditional algorithms or deep learning. The matching algorithm measures the similarity of two feature vectors and decides whether the two finger veins belong to the same person.
The problems that urgently need to be solved are that the image quality delivered by finger vein collectors is generally low, and the finger vein features extracted by traditional algorithms are unstable and prone to producing false veins.
Summary of the Invention
The purpose of the present invention is to overcome the deficiencies of the prior art. The present invention provides a finger vein recognition method based on the Center Loss loss function: a neural network extracts finger vein features, and the loss function learns discriminative features so that the feature vectors are compact within a class and separated between classes. When a new finger vein class appears, the network does not need to be retrained, and the Euclidean distance between vectors is used as the matching criterion.
The present invention provides a finger vein recognition method based on the Center Loss loss function. The specific scheme is as follows:
A. Connect the finger vein acquisition device and capture finger vein images;
B. Perform rotation correction on the finger vein image, determine the region of interest (ROI), and extract the ROI image;
C. Use a ResNet network model to extract the feature vector of the ROI image, take the joint supervision signal as the loss function, optimize the network model parameters, and obtain a trained parameter file;
D. Load the ResNet network model, read the trained parameter file, input the ROI image obtained in step B into the ResNet network model to obtain the feature vector corresponding to each finger vein image, and normalize it, converting the feature vector into a unit feature vector. The purpose of normalization is to confine the distance between any two feature vectors to a fixed range: the maximum distance between two unit feature vectors is 2 and the minimum distance is 0;
E. Store the unit feature vector obtained in step D in the finger vein database as the registration template of the finger vein, and retrieve and identify the finger vein image to be recognized based on the Euclidean distance.
Further, step A is specifically:
A1. Connect the finger vein collector to the client machine and install the collector's driver on the client machine;
A2. Following the instructions on the client interface, the finger vein collector captures finger vein images;
A3. Define the knuckle direction as the x-axis direction and the direction of the fingertip as the positive y-axis direction.
Further, step B is specifically:
B1. Apply Gaussian denoising to the finger vein image collected in step A to remove noise;
B2. Perform edge detection on the denoised finger vein image, using the Sobel operator to compute the x-direction gradient to obtain the edge-detection grey image, then binarize it to remove noise and extract the finger contour;
B3. Thin the finger contour with the Hilditch algorithm to obtain the thinned finger contour;
B4. Besides the contour itself, the thinned result still contains many interfering straight lines; their influence must be eliminated, so the interfering lines are further removed from the thinned contour to obtain a single-pixel finger contour;
B5. Fit the median line from the single-pixel finger contour, and use the angle α between the median line and the vertical direction as the rotation-correction angle;
B6. Rotate the single-pixel finger contour by the angle α, and take the width between the vertical inscribed tangent lines of the finger contour as the maximum width W for segmenting the finger vein image;
B7. Rotate the finger vein image by the angle α and segment it with the vertical inscribed tangent lines of the finger contour to obtain the inscribed-line segmented image;
B8. In the inscribed-line segmented image, take the position of the peak of the per-column grey-value distribution curve as the position of the transverse line, determine the region of interest (ROI), and extract the ROI image from the inscribed-line segmented image.
Further, step C is specifically:
C1. Build the ResNet network model and initialize the network parameters; add one fully connected layer before the existing fully connected layer of the ResNet network model;
C2. Fuse the Softmax Loss and Center Loss functions as the loss function of the ResNet network model: Center Loss makes intra-class features compact and Softmax Loss makes inter-class features separable. The feature vector output by the fully connected layer added in step C1 is used to update the Center Loss, and a hyperparameter λ is introduced to balance the two losses;
C3. Input the ROI images extracted in step B into the ResNet network model with the improved loss function. Because the feature vector used for matching comes from the penultimate fully connected layer of the ResNet network model, the parameters of that layer are modified to reduce the dimensionality, shrinking the storage space of the feature vectors and the matching time, and the trained ResNet network model parameter file is obtained.
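Although the specification does not reproduce the formulas, the joint supervision signal described in steps C1 to C3 is conventionally written as follows (the notation follows the original Center Loss formulation and is supplied here only for illustration): the total loss is L = Ls + λ·Lc, where Ls is the Softmax Loss and Lc = (1/2)·Σ_i ||x_i − c_{y_i}||² is the Center Loss; x_i is the feature vector of the i-th sample in a mini-batch (the output of the fully connected layer added in step C1), y_i is its class label, c_{y_i} is the learned center of class y_i (updated in every mini-batch), and λ balances the two terms.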
Further, step E is specifically:
E1. Capture the finger vein image to be recognized, extract its ROI image according to step B, and input it into the ResNet network model of step D to obtain the unit feature vector to be matched;
E2. In the identification phase, compute the Euclidean distance between the unit feature vector to be matched extracted in step E1 and each registered unit feature vector in the finger vein database in turn; if the Euclidean distance between the unit feature vector to be matched and a registered unit feature vector is smaller than the threshold, the match is judged successful;
E3. If the unit feature vector to be matched is not stored in the finger vein database, the registration phase can optionally be entered: the unit feature vector obtained in step E1 is written into the finger vein database, and step E2 is repeated.
Compared with the prior art, the present invention has the following beneficial effects. The present invention adopts a finger vein recognition method based on the Center Loss loss function. First, it proposes a method for finger vein image correction and ROI extraction that eliminates the interference caused by finger rotation and extracts the region where the veins are clear. A convolutional neural network is used to extract finger vein features, with Center Loss and Softmax Loss as the joint supervision signal: Softmax Loss separates inter-class features and Center Loss aggregates intra-class features, so the feature vectors learned by the network are highly discriminative and remain effective for previously unseen classes. Without reducing the model's accuracy, the parameters of the network's fully connected layer are modified to lower the output dimension of the feature vector, reducing the storage space of the feature templates and the computation time during matching. The feature vectors are normalized to unit vectors, which bounds the template-matching distance and makes threshold selection easier. The present invention is therefore a technical breakthrough over traditional finger vein recognition methods.
Brief Description of the Drawings
Fig. 1 is a step diagram of the method;
Fig. 2 is a flowchart of finger vein correction and ROI extraction;
Fig. 3 is a schematic diagram of finger vein correction;
Fig. 4 is a schematic diagram of cropping with the vertical inscribed lines of the finger vein image;
Fig. 5 is a schematic diagram of cropping with the transverse line of the finger vein image;
Fig. 6 is a diagram of the network structure;
Fig. 7 is a flowchart of finger vein matching.
Detailed Description
The present invention is described in detail below with reference to the drawings and specific embodiments, which are not intended to limit the invention.
As shown in Fig. 1, the method is implemented in the following steps:
A. Connect the finger vein acquisition device and capture finger vein images
Connect the client machine to the finger vein acquisition device and install the driver required by the device on the client machine. Place the finger naturally in the position required by the acquisition device and wait for the device to capture the finger vein image.
B. Perform rotation correction and ROI cropping on the finger vein image to extract the region of interest
Following the finger vein correction and ROI extraction flowchart in Fig. 2 and the correction schematic in Fig. 3, Gaussian denoising is first applied to the captured finger vein image in Fig. 3(a) to remove noise, giving the Gaussian-filtered image in Fig. 3(b). Edge detection is then performed on the finger veins; because the horizontal gradient of the finger vein image changes sharply, the Sobel operator is used to compute the x-direction gradient, yielding the edge-detection grey image in Fig. 3(c), which is binarized to remove noise and extract the finger contour, giving Fig. 3(d). The finger contour is thinned: the skeleton of the binary image is extracted with the Hilditch algorithm, and as shown in Fig. 3(e) all lines in the image become single-pixel wide. Besides the contour, the thinned image still contains many interfering straight lines, whose influence must be removed so that only the finger contour remains. Because the gradient of the vein-texture region between the two contour lines changes gently, it becomes background (pixels with value 0) after gradient detection. Using this property, each row of the vein image is scanned from the central region towards the left and right; the first pixel with value 255 encountered on each side is a contour pixel. All rows are traversed and the qualifying pixels are kept; this set of pixels is the finger contour, and as shown in Fig. 3(f) the interfering lines are removed. Further, as shown in Fig. 3(g), the median line is fitted from the single-pixel finger contour, the angle α between the median line and the vertical direction is taken as the rotation-correction angle, and the denoised grey image is corrected by the angle α to obtain Fig. 3(h).
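The correction pipeline above can be sketched as follows. This is a minimal illustration assuming OpenCV (with the opencv-contrib module) and NumPy; the thinning operator stands in for the Hilditch algorithm, and the gradient threshold is an illustrative value rather than one taken from the patent.

```python
import cv2
import numpy as np

def correct_rotation(gray):
    """Estimate the finger's median line and rotate the image upright.
    A minimal sketch of steps B1-B7; the threshold is illustrative."""
    # B1: Gaussian denoising
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # B2: x-direction Sobel gradient, binarized to keep the finger edges
    grad_x = cv2.Sobel(blur, cv2.CV_64F, 1, 0, ksize=3)
    edges = (np.abs(grad_x) > 60).astype(np.uint8) * 255
    # B3: thinning (Zhang-Suen in opencv-contrib, standing in for Hilditch)
    thin = cv2.ximgproc.thinning(edges)
    # B4: for every row, keep only the first edge pixel found when scanning
    # outwards from the image centre; this discards interior false lines
    h, w = thin.shape
    contour = np.zeros_like(thin)
    mid_col, mids = w // 2, []
    for r in range(h):
        left = np.where(thin[r, :mid_col][::-1] == 255)[0]
        right = np.where(thin[r, mid_col:] == 255)[0]
        if left.size and right.size:
            lx, rx = mid_col - 1 - left[0], mid_col + right[0]
            contour[r, lx] = contour[r, rx] = 255
            mids.append((r, (lx + rx) / 2.0))
    # B5: fit the median line; its slope gives the angle alpha to the vertical
    rows = np.array([m[0] for m in mids], dtype=np.float64)
    cols = np.array([m[1] for m in mids], dtype=np.float64)
    slope = np.polyfit(rows, cols, 1)[0]        # horizontal drift per row
    alpha = np.degrees(np.arctan(slope))        # angle to the vertical direction
    # B6/B7: rotate the grey image and the contour by alpha
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), alpha, 1.0)
    corrected = cv2.warpAffine(gray, M, (w, h))
    corrected_contour = cv2.warpAffine(contour, M, (w, h))
    return corrected, corrected_contour, alpha
```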
The width W of the ROI is obtained from the vertical inscribed tangent lines of the finger contour, as shown in Fig. 4. The thinned image with the redundant lines removed in Fig. 4(a) is first rotation-corrected by the angle α to obtain the vertical inscribed tangent lines of the contour in Fig. 4(b); the distance between the two inscribed lines is taken as the maximum width W of the ROI, as in Fig. 4(c). The ROI image is then extracted as shown in Fig. 5. The density of the synovial fluid in the gap between the phalangeal joints is much lower than that of the phalanx, so compared with other parts of the finger more infrared light passes through the joint, and after imaging the joint region appears brighter, as shown in Fig. 5(a). The highlighted part of Fig. 5(a) is cropped with the vertical inscribed tangent lines of the contour to obtain Fig. 5(b); the position of the peak in the distribution curve of the per-column grey-value sums of the image is taken as the position h of the finger vein transverse line. Fig. 5(b) is then cropped at the transverse-line position h to obtain the ROI in Fig. 5(d).
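A minimal sketch of the ROI cropping, continuing from the previous sketch. It assumes that after correction the finger axis runs vertically, so the vertical inscribed lines bound the finger width and the brightness profile taken along the finger peaks at the phalangeal joint; the profile orientation (the text describes it in terms of per-column sums for its image layout) and the roi_height parameter are assumptions for illustration, not values specified in the patent.

```python
import numpy as np

def crop_roi(corrected, corrected_contour, roi_height=80):
    """Sketch of ROI cropping (Figs. 4 and 5). Assumes a vertical finger axis;
    roi_height is an illustrative parameter."""
    # Vertical inscribed lines: the outermost columns touched by the contour.
    cols = np.where(corrected_contour.max(axis=0) > 0)[0]
    left, right = int(cols.min()), int(cols.max())      # ROI width W = right - left
    band = corrected[:, left:right + 1]
    # The phalangeal joint transmits more NIR light, so the brightness profile
    # along the finger peaks at the joint; its peak gives the transverse line h.
    profile = band.sum(axis=1).astype(np.float64)
    h = int(np.argmax(profile))
    top = max(0, h - roi_height // 2)
    return band[top:top + roi_height, :]
```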
C. Use ResNet as the network model and the joint supervision signal as the loss function to extract finger vein features
Simple neural network models cannot satisfy increasingly complex recognition tasks, which call for deeper networks that learn higher-level features. However, as a neural network deepens, its training accuracy drops. This phenomenon is not caused by overfitting: as the depth increases, the accuracy saturates and then degrades rapidly. To solve this problem, the ResNet network model introduces a residual structure to address the accuracy degradation. ResNet defines two mappings: the identity mapping, denoted x (the curved shortcut in the figure), and the residual mapping F(x) = H(x) − x (the path other than the curve), so the desired original mapping is re-expressed as F(x) + x. This formulation can be realized by a feed-forward neural network with shortcut connections, a shortcut connection being one that skips one or more layers. The two mappings provide two options for the part of the network whose accuracy would otherwise drop as it deepens: once the network's accuracy is already optimal, further deepening drives the residual mapping towards 0 and leaves only the identity mapping, so in theory the network remains optimal and its accuracy does not decrease with depth. Because the finger vein features extracted by traditional algorithms are unstable and prone to false veins, a neural network is used to extract features from the images. In finger vein recognition it is impractical to collect all possible samples in advance; a classic CNN classifier can only judge among the classes it has seen and cannot recognize a new class, so metric learning is required. To reduce the intra-class distance of the feature vectors, the Center Loss function is introduced to strengthen the discriminative power of the features learned by the network. Center Loss makes the network learn a feature-vector center serving as the feature attribute of each class; in every mini-batch of the training process the center is updated and the distance between the center and the feature vectors of its class is minimized, shrinking the intra-class distance. Softmax Loss and Center Loss are used together as the joint supervision signal to train the network: Softmax Loss makes the classes separable and Center Loss makes each class compact, enhancing the discriminative power of the feature vectors. ResNet18 is used as the base network, its pretrained model is loaded to initialize the network parameters, and the Center Loss and Softmax Loss functions serve as the joint supervision signal for training. To update the Center Loss, one fully connected layer is added before the network's existing fully connected layer, and the feature vector output by this added layer is used to update the Center Loss. The ROI finger vein images extracted in step B are input into the neural network, which is trained in the same way as a classification network, and the output of the network's penultimate fully connected layer is used as the finger vein feature vector for distance measurement; the structure is shown in Fig. 6.
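A minimal PyTorch sketch of the joint supervision described above. The backbone, the 256-dimensional embedding, the class count, the value of λ, and the optimizer settings are illustrative assumptions (grey ROI images would also need to be replicated to three channels for the pretrained ResNet18 stem); in the original Center Loss formulation the centers are updated with a separate scaled rule, whereas here they are simply updated by the optimizer.

```python
import torch
import torch.nn as nn
import torchvision

class CenterLoss(nn.Module):
    """Center Loss: pulls each sample's embedding towards its class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        # squared Euclidean distance to the center of each sample's own class
        return 0.5 * ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

class VeinNet(nn.Module):
    """ResNet18 backbone + an extra FC embedding layer (step C1) + classifier."""
    def __init__(self, num_classes, feat_dim=256):
        super().__init__()
        backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()                    # 512-d pooled features
        self.backbone = backbone
        self.embed = nn.Linear(512, feat_dim)          # feature used for matching
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feat = self.embed(self.backbone(x))
        return feat, self.classifier(feat)

# joint supervision: L = Softmax Loss + lambda * Center Loss
num_classes = 492                                      # number of finger classes (illustrative)
model = VeinNet(num_classes)
center_loss = CenterLoss(num_classes, feat_dim=256)
softmax_loss = nn.CrossEntropyLoss()
lam = 0.01                                             # illustrative value of lambda
optimizer = torch.optim.SGD(
    list(model.parameters()) + list(center_loss.parameters()), lr=0.01, momentum=0.9)

def train_step(images, labels):
    feats, logits = model(images)
    loss = softmax_loss(logits, labels) + lam * center_loss(feats, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                   # centers are updated by SGD here
    return loss.item()
```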
D. Modify the parameters of the network's penultimate fully connected layer to reduce the dimensionality and normalize the feature vectors
To avoid the curse of dimensionality when extracting finger vein features from the trained ResNet model, the output feature vector must be reduced in dimension. Because the feature vector used for matching comes from the network's penultimate fully connected layer, the parameters of that layer are modified to reduce the dimensionality, shrinking the storage space of the feature vectors and the matching time. Repeated comparative experiments show that the model accuracy is highest when the network's output dimension is set to 256. Finger veins are matched with the Euclidean distance; to confine the distance between any two vectors to a fixed range, the dimension-reduced feature vectors are normalized so that the feature vector of every image becomes a unit vector. The maximum distance between any two unit vectors is 2 and the minimum distance is 0, so the matching threshold can be confined to the range [0, 2].
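The [0, 2] bound follows from ||u − v||² = ||u||² + ||v||² − 2·u·v = 2 − 2·u·v for unit vectors u and v, which lies in [0, 4], so the Euclidean distance lies in [0, 2]. A sketch of extracting normalized templates, reusing the illustrative model from the previous sketch:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def extract_template(model, roi_batch):
    """Return L2-normalized 256-d templates for a batch of ROI images."""
    model.eval()
    feats, _ = model(roi_batch)             # output of the penultimate FC layer
    return F.normalize(feats, p=2, dim=1)   # unit vectors: pairwise distance in [0, 2]
```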
E. Retrieve and identify the dimension-reduced feature vector in the finger vein database based on the Euclidean distance
As shown in Fig. 7, retrieval and identification of the image to be recognized against the finger vein database consists of four steps: acquisition, feature-vector extraction, registration and storage in the database, and matching. The vein image of the finger is first captured by step A, and the captured vein image is then processed and its feature vector extracted by the algorithms of steps B to D. In the registration phase, the unit feature vector is written into the database. In the identification phase, all registered vein feature vectors are taken from the database in turn and their Euclidean distance to the feature vector to be matched is computed; the vein feature vector with the highest similarity is output as the recognition result.
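A sketch of the registration and identification logic of Fig. 7, assuming templates are held in memory as tensors; the threshold is an illustrative value that would in practice be chosen on a validation set within [0, 2], and the closest-template check below also implements the threshold acceptance described in step E2.

```python
import torch

class VeinDatabase:
    """Toy in-memory template store for registration and identification."""
    def __init__(self, threshold=0.9):          # illustrative threshold within [0, 2]
        self.templates, self.ids = [], []
        self.threshold = threshold

    def register(self, unit_vec, person_id):
        self.templates.append(unit_vec)
        self.ids.append(person_id)

    def identify(self, unit_vec):
        if not self.templates:
            return None
        db = torch.stack(self.templates)                 # (N, 256) registered templates
        dists = torch.norm(db - unit_vec, dim=1)         # Euclidean distances
        best = int(torch.argmin(dists))
        # accept the closest template only if its distance is below the threshold
        return self.ids[best] if dists[best] < self.threshold else None
```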
Experimental results:
Experiment 1: the FV-USM dataset was preprocessed with the rotation correction and ROI cropping described in the present invention, and the processed dataset was fed into the ResNet model for training with Softmax Loss and Center Loss as the loss function; the experimental accuracy was 97.74% and the AUC was 99.71%.
Experiment 2: the unprocessed FV-USM dataset was fed into the same network for training; the experimental accuracy was 92.34% and the AUC was 97.54%.
Experiment 3: the processed dataset was fed into the ResNet model for training with Softmax Loss as the loss function; the experimental accuracy was 97.28% and the AUC was 99.58%.
Comparing Experiments 1 and 2, preprocessing raised the accuracy on the FV-USM dataset by 5.40% and the AUC by 1.17%, verifying the effectiveness of the rotation correction and ROI cropping algorithm of the present invention. Comparing Experiments 1 and 3, introducing the Center Loss function raised the FV-USM accuracy by 0.46% and the AUC by 0.13%, verifying that the Center Loss function increases the discriminative power of the finger vein features.