CN113673467A - A vehicle color recognition method under white light conditions - Google Patents

A vehicle color recognition method under white light conditions

Info

Publication number
CN113673467A
Authority
CN
China
Prior art keywords
vehicle
color
image
module
white light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111001743.7A
Other languages
Chinese (zh)
Other versions
CN113673467B (en)
Inventor
谢建
何坤
凌冠昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN YANGTZE COMMUNICATIONS INDUSTRY GROUP CO LTD
Wuhan Yangtze Communications Zhilian Technology Co ltd
Original Assignee
WUHAN YANGTZE COMMUNICATIONS INDUSTRY GROUP CO LTD
Wuhan Yangtze Communications Zhilian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN YANGTZE COMMUNICATIONS INDUSTRY GROUP CO LTD, Wuhan Yangtze Communications Zhilian Technology Co ltd
Priority to CN202111001743.7A
Publication of CN113673467A
Application granted
Publication of CN113673467B
Status: Active
Anticipated expiration

Abstract

The invention discloses a vehicle color recognition method under white light conditions. The invention mainly comprises two parts: (1) a vehicle color correction method under white light conditions is proposed; it combines the advantages of the Retinex and dark channel methods, effectively alleviates the effects of phenomena such as low illumination and backlighting, and mitigates the drop in recognition rate caused by these unfavorable environmental factors. (2) A network model based on a spatial attention mechanism is designed; the model automatically focuses on the effective region of the vehicle color, increases the weight of the effective vehicle-body color region and reduces the weight of invalid vehicle color regions, and a deep neural network then accurately identifies the vehicle color.

Description

Vehicle color identification method under white light conditions
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a vehicle color identification method under white light conditions.
Background
In recent years, with the rapid development of computer and internet technologies, the number of automobiles has grown rapidly, and all kinds of information, including traffic-related information, has been growing explosively. Intelligent transportation systems were developed to manage this information more safely and efficiently, and they prove highly effective at toll checkpoints, in parking lots, in criminal tracking, and so on. The vehicle information in a video image includes the license plate number, vehicle color, vehicle model, and the like; among these, vehicle color recognition plays a significant role in road monitoring and is an indispensable part of vehicle information. When the license plate number cannot be recognized or does not distinguish between vehicles, other vehicle information such as color becomes the basis for telling vehicles apart. For example, when identifying illegal fake-plate vehicles, vehicles bearing the same plate must be distinguished by non-plate information, and vehicle color information is the preferred cue.
Current vehicle color identification methods mainly include the following:
(1) Direct classification [1]: the vehicle position is detected first, color features are then extracted directly from the vehicle region, and a classifier performs the final classification. This method cannot locate the region of the vehicle that should actually be used for identification, so windows, tires, background and the like strongly affect the result.
(2) Vehicle color region selection based on image processing [2][3]: the position and size of the license plate are located first, a candidate vehicle color identification region is initially selected from prior knowledge, the region is then refined using information such as gradient and brightness, and finally vehicle color identification is performed within that region. This method requires a large amount of manual experience, lacks robustness and is greatly influenced by the environment.
(3) Segmentation of vehicle color regions [4]: a large amount of vehicle color effective-region data is annotated, a semantic segmentation method detects the effective region for vehicle color identification, and the vehicle color is identified within that region. Data annotation for this method is extremely labor-intensive, identification requires semantic segmentation, and the algorithm is computationally expensive.
(4) Gray-level elimination [5]: interference regions such as window glass and vehicle shadow are eliminated using the difference between the maximum and minimum values in RGB space, and colored vehicles are distinguished from black-and-white vehicles by the ratio of pixel counts. Being purely image-processing based, the method is easily affected by ambient light and has low robustness.
(5) Scene classification [6]: the vehicle to be recognized is first assigned to a scene class, and the color classification model trained for that scene is then invoked to classify the vehicle color. The method requires a large amount of data and cannot locate the region of the vehicle to be identified.
In summary, the following problems in the prior art need to be solved: (1) For vehicle color, only part of the vehicle can actually be used for color recognition based on human visual intuition; windows, lamps, tires and the like do not belong to the regions used to judge vehicle color and must be excluded, while the effective regions are generally the hood, doors and similar body panels. If the algorithm cannot accurately and reliably locate the effective region, the recognition rate drops.
(2) Owing to varying illumination, weather and viewing angles in natural scenes, images of vehicles of the same color often exhibit color cast, reflections and similar phenomena, which reduce the algorithm's recognition rate.
The related references are as follows:
[1] Vehicle color identification method and apparatus, publication No. CN107844745A.
[2] Method and device for positioning a vehicle body color identification area, publication No. CN106529553B.
[3] Method for recognizing the color of a car body in a monitoring scene, publication No. CN109741406A.
[4] Automatic recognition method of vehicle color, electronic device, computer apparatus and medium, publication No. CN111325211A.
[5] Vehicle body color identification method, publication No. CN105005766A.
[6] Vehicle color classification model training method and device, and vehicle color identification method, publication No. CN110348505A.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a vehicle color correction method under white light conditions that combines the advantages of the Retinex and dark channel methods, effectively alleviates the effects of phenomena such as low illumination and backlighting, and mitigates the drop in recognition rate caused by adverse environmental factors. At the same time, a network model based on a spatial attention mechanism is designed; the model automatically focuses on the effective region of the vehicle color, increases the weight of the effective vehicle-body color region and reduces the weight of invalid vehicle color regions, and a deep neural network then accurately identifies the vehicle color.
In order to achieve this purpose, the invention provides the following technical scheme: a vehicle color recognition method under white light conditions, comprising the following steps:
step 1, collecting a large amount of vehicle image data under white light conditions, and labeling the color of each vehicle;
step 2, designing a vehicle color recognition network model;
the vehicle color recognition model comprises a backbone module, an attention module, a BAP module, an FC module and a softmax module;
step 3, providing the collected vehicle data to the vehicle color recognition network model for repeated iterative training;
step 4, detecting the vehicle position information in the image with a vehicle detector;
step 5, cropping the vehicle image from the original image according to the detected vehicle position;
step 6, performing color correction on the cropped vehicle image;
step 7, feeding the color-corrected vehicle image into the vehicle color recognition network and predicting the vehicle color class.
Further, the backbone module of the vehicle color recognition model in step 2 adopts the first 4 residual structures of MobileNetV2; the attention module derives a spatial weight mask from the original features extracted by the backbone module, used to weight different positions; the BAP module weights the original features extracted by the backbone module with the weight masks obtained by the attention module to obtain weighted new features; the FC module matches the new feature dimension to the number of output classes; the softmax module outputs the probability of each class.
Further, the specific processing of the attention module is as follows:
M 1x1 convolution kernels are applied to the feature_raw extracted by the backbone module to obtain M mask maps of size 14x14, and each mask is normalized to [0, 1];
the normalization formula is:

maskDst_j^i = (maskSrc_j^i - min(maskSrc_j)) / (max(maskSrc_j) - min(maskSrc_j))

where maskSrc_j denotes the original j-th mask, maskDst_j denotes the normalized j-th mask, maskSrc_j^i and maskDst_j^i denote the i-th point of the original j-th mask and of the normalized j-th mask respectively, and min(maskSrc_j), max(maskSrc_j) denote the minimum and maximum values of the original j-th mask.
Further, the BAP module performs a dot-product operation between the M masks of the attention module and the feature_raw extracted by the backbone module to obtain 64xMx14x14 feature maps, and then applies average pooling to each feature map to obtain 64xMx1x1 feature maps.
Further, the softmax module applies a softmax operation to the N prediction results output by the FC module in order to predict the probability of each class, where the softmax formula is:

p_i = exp(z_i) / Σ_j exp(z_j)

where z_i denotes the predicted score of the i-th class and p_i denotes the predicted probability of the i-th class.
Further, the specific implementation of step 6 is as follows:
(1) crop the detected vehicle image block, denoted P;
(2) compute the maximum and minimum values of P, Vmax and Vmin;
(3) linearly stretch P using Vmax and Vmin obtained in step (2) to obtain P1 = 255*(P - Vmin)/(Vmax - Vmin);
(4) transform P1 obtained in step (3) into the logarithmic domain to obtain P2 = log(P1 + 1)/log(256);
(5) from P2 obtained in step (4), compute the dark channel of P2 to obtain P3;
(6) from P2 obtained in step (4), compute the brightness map of each color channel of P2 to obtain P4;
(7) from P3 and P4 obtained in steps (5) and (6), compute the color-corrected image P5 = (P2 - P3)/(P4 - P3);
(8) restore the color correction map obtained in step (7) to a normal image to obtain P6 = P5*255.
Further, the dark channel of P2 in step (5) is computed as:

J_dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y)

where J_dark is called the dark channel of the image, Ω(x) denotes the local patch centered at x, c denotes one of the three channels r, g, b, and J^c is the c channel of the input image; x is a pixel on the log-domain image P2 and y is a pixel within the local patch Ω(x).
Further, the brightness map of each color channel of P2 in step (6) is computed as:

J_bright^c(x) = max_{y∈Ω(x)} J^c(y)

where J_bright^c is called the brightness map of the c channel of the image, Ω(x) denotes the local patch centered at x, c denotes one of the r, g, b channels, and J^c is the c channel of the input image; x is a pixel on the log-domain image P2 and y is a pixel within the local patch Ω(x).
Compared with the prior art, the invention has the following advantages and beneficial effects: (1) a vehicle color correction method under white light conditions is proposed, which reduces the impact of low-illumination and backlit scenes on the recognition rate; (2) a network model based on a spatial attention mechanism is designed, which automatically focuses on the effective vehicle-body color region without relying on manual experience to locate the region to be identified.
Drawings
FIG. 1 is an overall flow chart of the method of the present invention.
FIG. 2 is a flow chart of the color correction of the present invention.
FIG. 3 is a diagram of the color correction effect of the present invention.
FIG. 4 is a diagram of a vehicle color classification network model according to the present invention.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
As shown in fig. 1, the present invention provides a vehicle color identification method under white light conditions, comprising the following steps:
Step 1, collecting a large amount of vehicle image data under white light conditions, and labeling the color of each vehicle.
In step 1, a large amount of vehicle image data is collected under white light conditions from checkpoint and parking lot cameras. A vehicle detector then detects the position of each vehicle in the image as (x1, y1, x2, y2), where x1, y1 and x2, y2 are the horizontal and vertical coordinates of the upper-left and lower-right corners of the vehicle. A vehicle image block is cropped from the image according to this position, and finally the vehicle color in the block is labeled manually.
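For illustration, a minimal Python sketch of this cropping step, assuming detections are given as (x1, y1, x2, y2) pixel coordinates; the image path and output file name are hypothetical:

```python
import cv2

def crop_vehicle(image, box):
    """Crop a vehicle patch from a frame given (x1, y1, x2, y2) corner coordinates."""
    x1, y1, x2, y2 = [int(v) for v in box]
    h, w = image.shape[:2]
    # Clamp the box to the image bounds before slicing.
    x1, y1 = max(0, x1), max(0, y1)
    x2, y2 = min(w, x2), min(h, y2)
    return image[y1:y2, x1:x2]

frame = cv2.imread("checkpoint_frame.jpg")           # hypothetical camera frame
patch = crop_vehicle(frame, (120, 80, 460, 360))     # hypothetical detector output
cv2.imwrite("vehicle_000001.jpg", patch)             # saved for manual color labeling
```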
Step 2, designing the vehicle color recognition network model.
The vehicle color recognition model designed in step 2 is shown in fig. 4 and comprises backbone, attention, BAP, FC and softmax modules. The backbone adopts the first 4 residual structures of MobileNetV2; the attention module derives a spatial weight mask from the original features extracted by the backbone, used to weight different positions; BAP weights the original features extracted by the backbone with the weight masks obtained by the attention module to obtain weighted new features; the FC layer matches the new feature dimension to the number of output classes; softmax outputs the probability of each class.
The vehicle color recognition network of the invention adds a spatial attention mechanism on top of a conventional classification network. The spatial attention module automatically assigns different weights to different regions of the vehicle image, increasing the weight of the effective vehicle color region and reducing the weight of invalid regions, so as to improve the classification performance of the network. The whole network is composed of the backbone, attention, BAP, FC and softmax modules; the network structure is shown in fig. 4.
Backbone module: the backbone is formed by stacking the first 4 residual structures of MobileNetV2; its input is 3x224x224 and its output is 64x14x14, denoted feature_raw.
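For illustration, a minimal PyTorch sketch of such a backbone, assuming the torchvision MobileNetV2 feature stack is truncated at its first 64-channel stage; the exact cut point corresponding to the patent's "first 4 residual structures" is not specified, so the slice index below is an assumption:

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class ColorBackbone(nn.Module):
    """Truncated MobileNetV2 feature extractor: 3x224x224 -> 64x14x14 (feature_raw)."""
    def __init__(self):
        super().__init__()
        full = mobilenet_v2(weights=None).features   # randomly initialised (torchvision >= 0.13 API)
        # Keep the stem plus the inverted-residual blocks up to the 64-channel stage
        # (features[0..10] in torchvision's layout); this slice is an assumed mapping
        # of the patent's "first 4 residual structures".
        self.features = nn.Sequential(*list(full.children())[:11])

    def forward(self, x):
        return self.features(x)

if __name__ == "__main__":
    feature_raw = ColorBackbone()(torch.randn(1, 3, 224, 224))
    print(feature_raw.shape)   # torch.Size([1, 64, 14, 14])
```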
Attention module: M 1x1 convolution kernels are applied to the feature_raw extracted by the backbone to obtain M mask maps of size 14x14, and each mask is normalized to [0, 1]; M is 16 by default. The normalization formula is:

maskDst_j^i = (maskSrc_j^i - min(maskSrc_j)) / (max(maskSrc_j) - min(maskSrc_j))

where maskSrc_j denotes the original j-th mask, maskDst_j denotes the normalized j-th mask, maskSrc_j^i and maskDst_j^i denote the i-th point of the original j-th mask and of the normalized j-th mask respectively, and min(maskSrc_j), max(maskSrc_j) denote the minimum and maximum values of the original j-th mask.
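A minimal PyTorch sketch of this attention module, assuming the M masks come from a single 1x1 convolution and are normalized per mask with the min-max formula above; the module name and the small epsilon guarding the division are illustrative additions:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Produce M spatial masks from feature_raw and min-max normalize each mask to [0, 1]."""
    def __init__(self, in_channels=64, num_masks=16):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_masks, kernel_size=1)   # M 1x1 kernels

    def forward(self, feature_raw):                      # feature_raw: (B, 64, 14, 14)
        masks = self.conv(feature_raw)                   # (B, M, 14, 14)
        flat = masks.flatten(2)                          # (B, M, 196)
        mn = flat.min(dim=2, keepdim=True).values        # per-mask minimum
        mx = flat.max(dim=2, keepdim=True).values        # per-mask maximum
        flat = (flat - mn) / (mx - mn + 1e-6)            # maskDst = (maskSrc - min) / (max - min)
        return flat.view_as(masks)
```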
BAP module (bilinear attention pooling): the M mask maps from the attention module are multiplied element-wise (dot product) with the feature_raw extracted by the backbone to obtain 64xMx14x14 feature maps, and average pooling is then applied to each feature map to obtain 64xMx1x1 features, denoted feature_matrix.
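A sketch of the BAP operation under the shapes described above; flattening the pooled 64xMx1x1 tensor into a vector for the FC layer is an assumption about how feature_matrix is fed forward:

```python
import torch

def bilinear_attention_pooling(feature_raw, masks):
    """BAP: weight feature_raw by each attention mask, then average-pool each weighted map.

    feature_raw: (B, C, H, W), masks: (B, M, H, W) -> feature_matrix: (B, C * M)
    """
    # Pair every feature channel with every mask at each spatial location.
    weighted = feature_raw.unsqueeze(2) * masks.unsqueeze(1)   # (B, C, M, H, W)
    feature_matrix = weighted.mean(dim=(3, 4))                 # average pooling -> (B, C, M)
    return feature_matrix.flatten(1)                           # (B, C * M), e.g. 64 * 16
```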
FC module: the feature_matrix extracted by BAP is fed into an N-class fully connected (FC) layer, which outputs a prediction result for each class.
Softmax module: a softmax operation is applied to the N prediction results output by the FC layer in order to predict the probability of each class. The softmax formula is:

p_i = exp(z_i) / Σ_j exp(z_j)

where z_i denotes the predicted score of the i-th class and p_i denotes the predicted probability of the i-th class.
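Putting the pieces together, a minimal sketch of the full classification network, reusing the ColorBackbone, SpatialAttention and bilinear_attention_pooling helpers sketched above; the class count N and mask count M are configuration choices, and only the module composition follows the description:

```python
import torch
import torch.nn as nn

class VehicleColorNet(nn.Module):
    """backbone -> attention -> BAP -> FC -> softmax, as described above."""
    def __init__(self, num_classes, num_masks=16):
        super().__init__()
        self.backbone = ColorBackbone()                     # backbone sketch above
        self.attention = SpatialAttention(64, num_masks)    # attention sketch above
        self.fc = nn.Linear(64 * num_masks, num_classes)

    def forward(self, x):
        feature_raw = self.backbone(x)                      # (B, 64, 14, 14)
        masks = self.attention(feature_raw)                 # (B, M, 14, 14)
        feature_matrix = bilinear_attention_pooling(feature_raw, masks)
        logits = self.fc(feature_matrix)                    # (B, N) scores z_i
        return torch.softmax(logits, dim=1)                 # (B, N) probabilities p_i
```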
Step 3, providing the collected vehicle data to the vehicle color recognition network model for repeated iterative training.
The network training parameters in step 3 are: lr = 0.001, moment = 0.9, weight_decay = 1e-5, epoch = 50, batch size = 64. The network is optimized with SGD gradient descent, and the learning rate follows a fixed step decay, being multiplied by 0.9 every 2 epochs; here lr is the learning rate, moment the momentum, weight_decay the weight decay coefficient, epoch the number of passes over the training set, and batch size the number of samples per iteration.
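For illustration, a training setup matching the listed hyper-parameters; VehicleColorNet comes from the sketch above, train_loader is an assumed DataLoader of labeled, color-corrected vehicle crops, and the number of color classes is an assumption:

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR

model = VehicleColorNet(num_classes=10)                  # 10 color classes is an assumption
criterion = nn.NLLLoss()                                 # cross-entropy on the softmax output
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-5)
scheduler = StepLR(optimizer, step_size=2, gamma=0.9)    # multiply lr by 0.9 every 2 epochs

for epoch in range(50):                                  # 50 epochs, batches of 64 in the loader
    for images, labels in train_loader:                  # assumed labeled DataLoader
        optimizer.zero_grad()
        probs = model(images)                            # softmax probabilities
        loss = criterion(torch.log(probs + 1e-9), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```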
Step 4, detecting the vehicle position information in the image with a vehicle detector.
The vehicle detector in step 4 is obtained by training with the YOLOv5 framework on a combined COCO + VOC dataset.
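As an illustration, a sketch of running such a detector through the public ultralytics/yolov5 torch.hub interface; the patent does not say how its detector is packaged, and the weights file name below is hypothetical:

```python
import torch

# Load a YOLOv5 detector via torch.hub; 'custom' with a local weights path is the
# documented way to load user-trained checkpoints (the file name is hypothetical).
detector = torch.hub.load("ultralytics/yolov5", "custom", path="vehicle_coco_voc.pt")

results = detector("checkpoint_frame.jpg")        # run detection on one frame
boxes = results.xyxy[0]                           # rows of (x1, y1, x2, y2, confidence, class)
vehicle_boxes = [b[:4].tolist() for b in boxes]   # keep corner coordinates for cropping
```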
Step 5, cropping the vehicle image from the original image according to the detected vehicle position.
Step 6, performing color correction on the cropped vehicle image.
The flow of the color correction in step 6 is shown in fig. 2 and comprises: linear stretching, logarithmic-domain transformation, extraction of the dark channel, extraction of the brightness map of each color channel, and correction of each color channel; the color of the original vehicle image is corrected through these steps. The method combines the advantages of Retinex and the dark channel prior, and experiments show that it has a certain advantage for vehicle color correction in adverse environments. The flow chart is shown in fig. 2 and the effect in fig. 3, with the original image on the left and the color-corrected image on the right. The method comprises the following steps:
(1) Crop the detected vehicle image block, denoted P.
(2) Compute the maximum and minimum values of P, Vmax and Vmin.
(3) Linearly stretch P using Vmax and Vmin obtained in step (2) to obtain P1 = 255*(P - Vmin)/(Vmax - Vmin).
(4) Transform P1 obtained in step (3) into the logarithmic domain to obtain P2 = log(P1 + 1)/log(256).
(5) From P2 obtained in step (4), compute the dark channel of P2 to obtain P3. The dark channel is obtained as follows: for each pixel, take the minimum of the R, G and B components of the image to obtain a single-channel dark image, then apply local-area minimum filtering (gray-scale erosion) to this dark image using the Marcel van Herk fast algorithm to obtain the dark channel.
(6) From P2 obtained in step (4), compute the brightness map of each color channel of P2 to obtain P4. The per-channel brightness maps are obtained as follows: split the image into its R, G and B channel images in RGB space, then apply local-area maximum filtering (gray-scale dilation) to each of the three channel images using the Marcel van Herk fast algorithm to obtain the brightness map of each channel.
(7) From P3 and P4 obtained in steps (5) and (6), compute the color-corrected image P5 = (P2 - P3)/(P4 - P3).
(8) Restore the color correction map obtained in step (7) to a normal image, P6 = P5*255.
The dark channel of P2 in step (5) is computed as:

J_dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y)

where J_dark is called the dark channel of the image, Ω(x) denotes the local patch centered at x, c denotes one of the three channels r, g, b, and J^c is the c channel of the input image.
The brightness map of each color channel of P2 in step (6) is computed as:

J_bright^c(x) = max_{y∈Ω(x)} J^c(y)

where J_bright^c is called the brightness map of the c channel of the image, Ω(x) denotes the local patch centered at x, c denotes one of the r, g, b channels, and J^c is the c channel of the input image.
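As an illustration, a minimal NumPy/SciPy sketch of steps (1) to (8), using scipy.ndimage grey erosion/dilation in place of the Marcel van Herk fast filter; the local patch size, the small epsilon guarding the divisions and the final clipping to [0, 255] are assumptions:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def correct_vehicle_color(P, patch=15, eps=1e-6):
    """Color-correct a cropped vehicle block P (HxWx3, uint8) following steps (1)-(8)."""
    P = P.astype(np.float64)
    vmin, vmax = P.min(), P.max()                                    # step (2)
    P1 = 255.0 * (P - vmin) / (vmax - vmin + eps)                    # step (3) linear stretch
    P2 = np.log(P1 + 1.0) / np.log(256.0)                            # step (4) log domain

    # Step (5): dark channel = per-pixel minimum over R, G, B, then a local minimum filter.
    dark = P2.min(axis=2)
    P3 = grey_erosion(dark, size=(patch, patch))[..., None]          # broadcast over channels

    # Step (6): brightness map = local maximum filter applied to each channel separately.
    P4 = np.stack([grey_dilation(P2[..., c], size=(patch, patch)) for c in range(3)], axis=2)

    P5 = (P2 - P3) / (P4 - P3 + eps)                                 # step (7) correction
    return np.clip(P5 * 255.0, 0, 255).astype(np.uint8)              # step (8) back to 8-bit
```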
Step 7, feeding the color-corrected vehicle image blocks into the vehicle color recognition network and predicting the vehicle color class.
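A sketch of the end-to-end prediction of step 7, chaining the hypothetical helpers from the sketches above (detector boxes, cropping, color correction and the classifier); the color class names are illustrative:

```python
import cv2
import torch

COLORS = ["white", "black", "gray", "red", "blue", "yellow",
          "green", "brown", "orange", "purple"]       # illustrative class names

frame = cv2.imread("checkpoint_frame.jpg")
model = VehicleColorNet(num_classes=len(COLORS))      # assumed to be loaded with trained weights
model.eval()

for box in vehicle_boxes:                             # boxes from the detector sketch
    patch = crop_vehicle(frame, box)                  # step 5: crop
    patch = correct_vehicle_color(patch)              # step 6: color correction
    patch = cv2.resize(patch, (224, 224))             # BGR/RGB ordering is glossed over here
    x = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        probs = model(x)[0]                           # step 7: softmax probabilities
    print(COLORS[int(probs.argmax())], float(probs.max()))
```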
The specific embodiments described herein are merely illustrative of the present invention. Various modifications or additions may be made to the described embodiments, or alternatives may be employed, by those skilled in the art without departing from the scope of the invention as defined in the appended claims.

Claims (8)

1. A vehicle color recognition method under white light conditions, characterized by comprising the following steps:
step 1, collecting a large amount of vehicle image data under white light conditions, and labeling the color of each vehicle;
step 2, designing a vehicle color recognition network model;
the vehicle color recognition model comprises a backbone module, an attention module, a BAP module, an FC module and a softmax module;
step 3, providing the collected vehicle data to the vehicle color recognition network model for repeated iterative training;
step 4, detecting the vehicle position information in the image with a vehicle detector;
step 5, cropping the vehicle image from the original image according to the detected vehicle position;
step 6, performing color correction on the cropped vehicle image;
step 7, feeding the color-corrected vehicle image into the vehicle color recognition network and predicting the vehicle color class.

2. The vehicle color recognition method under white light conditions according to claim 1, characterized in that: the backbone module of the vehicle color recognition model in step 2 adopts the first 4 residual structures of MobileNetV2; the attention module derives a spatial weight mask from the original features extracted by the backbone module, used to weight different positions; the BAP module weights the original features extracted by the backbone module with the weight masks obtained by the attention module to obtain weighted new features; the FC module matches the new feature dimension to the number of output classes; the softmax module outputs the probability of each class.

3. The vehicle color recognition method under white light conditions according to claim 2, characterized in that the attention module processes as follows:
M 1x1 convolution kernels are applied to the feature_raw extracted by the backbone module to obtain M mask maps of size 14x14, and each mask is normalized to [0, 1], where the normalization formula is:
maskDst_j^i = (maskSrc_j^i - min(maskSrc_j)) / (max(maskSrc_j) - min(maskSrc_j))
where maskSrc_j denotes the original j-th mask, maskDst_j denotes the normalized j-th mask, maskSrc_j^i and maskDst_j^i denote the i-th point of the original j-th mask and of the normalized j-th mask respectively, and min(maskSrc_j), max(maskSrc_j) denote the minimum and maximum values of the original j-th mask.

4. The vehicle color recognition method under white light conditions according to claim 3, characterized in that: the BAP module performs a dot-product operation between the M mask maps of the attention module and the feature_raw extracted by the backbone module to obtain 64xMx14x14 feature maps, and then applies average pooling to each feature map to obtain 64xMx1x1 feature maps.

5. The vehicle color recognition method under white light conditions according to claim 2, characterized in that: the softmax module applies a softmax operation to the N prediction results output by the FC module in order to predict the probability of each class, where the softmax formula is:
p_i = exp(z_i) / Σ_j exp(z_j)
where z_i denotes the predicted score of the i-th class and p_i denotes the predicted probability of the i-th class.

6. The vehicle color recognition method under white light conditions according to claim 1, characterized in that step 6 is implemented as follows:
(1) cropping the detected vehicle image block, denoted P;
(2) computing the maximum and minimum values of P, Vmax and Vmin;
(3) linearly stretching P using the values obtained in step (2) to obtain P1 = 255*(P - Vmin)/(Vmax - Vmin);
(4) transforming P1 obtained in step (3) into the logarithmic domain to obtain P2 = log(P1 + 1)/log(256);
(5) computing, from P2 obtained in step (4), the dark channel of P2 to obtain P3;
(6) computing, from P2 obtained in step (4), the brightness map of each color channel of P2 to obtain P4;
(7) computing the color-corrected image from P3 and P4 obtained in steps (5) and (6) to obtain P5 = (P2 - P3)/(P4 - P3);
(8) restoring the color correction map obtained in step (7) to a normal image to obtain P6 = P5*255.

7. The vehicle color recognition method under white light conditions according to claim 6, characterized in that the dark channel of P2 in step (5) is computed as:
J_dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y)
where J_dark is called the dark channel of the image, Ω(x) denotes the local patch centered at x, c denotes one of the three channels r, g, b, and J^c is the c channel of the input image; x is a pixel on the log-domain image P2 and y is a pixel within the local patch Ω(x).

8. The vehicle color recognition method under white light conditions according to claim 6, characterized in that the brightness map of each color channel of P2 in step (6) is computed as:
J_bright^c(x) = max_{y∈Ω(x)} J^c(y)
where J_bright^c is called the brightness map of the c channel of the image, Ω(x) denotes the local patch centered at x, c denotes one of the r, g, b channels, and J^c is the c channel of the input image; x is a pixel on the log-domain image P2 and y is a pixel within the local patch Ω(x).
CN202111001743.7A (granted as CN113673467B) | Priority 2021-08-30 | Filed 2021-08-30 | A vehicle color recognition method under white light conditions | Active

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111001743.7A (granted as CN113673467B) | 2021-08-30 | 2021-08-30 | A vehicle color recognition method under white light conditions

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111001743.7A (granted as CN113673467B) | 2021-08-30 | 2021-08-30 | A vehicle color recognition method under white light conditions

Publications (2)

Publication Number | Publication Date
CN113673467A | 2021-11-19
CN113673467B (en) | 2025-08-26

Family

Family ID: 78547359

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111001743.7A (Active, granted as CN113673467B) | A vehicle color recognition method under white light conditions | 2021-08-30 | 2021-08-30

Country Status (1)

Country | Link
CN | CN113673467B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107729801A (en)* | 2017-07-11 | 2018-02-23 | 银江股份有限公司 | A kind of vehicle color identifying system based on multitask depth convolutional neural networks
US20190278994A1* | 2018-03-08 | 2019-09-12 | Capital One Services, Llc | Photograph driven vehicle identification engine
CN110458077A (en)* | 2019-08-05 | 2019-11-15 | 高新兴科技集团股份有限公司 | A kind of vehicle color identification method and system
CN110555464A (en)* | 2019-08-06 | 2019-12-10 | 高新兴科技集团股份有限公司 | Vehicle color identification method based on deep learning model
CN111914911A (en)* | 2020-07-16 | 2020-11-10 | 桂林电子科技大学 | Vehicle re-identification method based on improved depth relative distance learning model
CN113191218A (en)* | 2021-04-13 | 2021-07-30 | 南京信息工程大学 | Vehicle type recognition method based on bilinear attention collection and convolution long-term and short-term memory

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
刘振宇; 江海蓉; 徐鹤文: "极端天气条件下低质图像增强算法研究" (Research on enhancement algorithms for low-quality images under extreme weather conditions), 计算机工程与应用 (Computer Engineering and Applications), no. 08, 25 March 2016, pages 193-198 *
孔奥阳: "基于Retinex阈值分割算法的图像增强研究" (Research on image enhancement based on a Retinex threshold segmentation algorithm), 硕士电子期刊(信息科技) (Master's thesis, electronic journal, information science and technology), 15 February 2021, pages 1-56 *
李社蕾等: "基于暗原色先验模型的水下图像增强算法" (Underwater image enhancement algorithm based on the dark channel prior model), 计算机技术与发展 (Computer Technology and Development), vol. 28, no. 10, 31 October 2018, pages 70-73 *
王红茹; 张弓; 卢道华; 王佳: "基于背景光估计与颜色修正的水下图像增强" (Underwater image enhancement based on background light estimation and color correction), 计算机工程 (Computer Engineering), no. 10, 30 October 2020, pages 253-258 *
袁公萍; 汤一平; 韩旺明; 陈麒: "基于深度卷积神经网络的车型识别方法" (Vehicle type recognition method based on deep convolutional neural networks), 浙江大学学报(工学版) (Journal of Zhejiang University, Engineering Science), no. 04, 5 March 2018, pages 87-95 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116704052A (en)* | 2023-08-04 | 2023-09-05 | 匀熵智能科技(无锡)有限公司 | Light vehicle color detection method, device and storage medium

Also Published As

Publication number | Publication date
CN113673467B (en) | 2025-08-26

Similar Documents

Publication | Publication Date | Title
CN115311241B (en)Underground coal mine pedestrian detection method based on image fusion and feature enhancement
CN111914634B (en)Automatic detection method and system for well lid class resisting complex scene interference
WO2024051296A1 (en)Method and apparatus for obstacle detection in complex weather
CN106169081A (en)A kind of image classification based on different illumination and processing method
CN113569882A (en)Knowledge distillation-based rapid pedestrian detection method
CN111611907B (en) An image-enhanced infrared target detection method
CN115131325A (en)Breaker fault operation and maintenance monitoring method and system based on image recognition and analysis
CN112561899A (en)Electric power inspection image identification method
CN112200746A (en) Dehazing method and device for foggy traffic scene images
CN114299438B (en)Tunnel parking event detection method integrating traditional parking detection and neural network
CN114332655A (en) A vehicle adaptive fusion detection method and system
CN110060221A (en)A kind of bridge vehicle checking method based on unmanned plane image
CN114743126A (en)Lane line sign segmentation method based on graph attention machine mechanism network
CN112102214B (en)Image defogging method based on histogram and neural network
CN116596792A (en)Inland river foggy scene recovery method, system and equipment for intelligent ship
CN116883868A (en) UAV intelligent cruise detection method based on adaptive image defogging
CN112233105A (en)Road crack detection method based on improved FCN
CN113673467B (en) A vehicle color recognition method under white light conditions
CN114359196A (en) Fog detection method and system
CN116385293B (en) Adaptive target detection method in foggy weather based on convolutional neural network
CN118053038A (en)Automatic pavement disease image identification method of C/S and B/S fusion architecture
CN117935202A (en)Low-illumination environment lane line detection method based on deep learning
Pavethra et al.A cross layer graphical neural network based convolutional neural network framework for image dehazing
CN113689399B (en) A remote sensing image processing method and system for power grid identification
CN116977975A (en)Traffic sign detection method based on deep learning

Legal Events

PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
