CN107886033B - Method, device and vehicle for identifying circular traffic lights - Google Patents

Method, device and vehicle for identifying circular traffic lights

Info

Publication number
CN107886033B
Authority
CN
China
Prior art keywords
image
circular
area
traffic light
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610874308.8A
Other languages
Chinese (zh)
Other versions
CN107886033A (en)
Inventor
高上添
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201610874308.8A
Publication of CN107886033A
Application granted
Publication of CN107886033B
Status: Active (current)
Anticipated expiration

Abstract

Translated from Chinese

The present disclosure provides a method, a device and a vehicle for recognizing circular traffic lights. The method recognizes circular traffic lights based on images with depth information collected by a 3D camera. Since the depth imaging principle of the 3D camera is not affected by natural light, recognition under different lighting conditions such as day and night is more accurate. Moreover, because the depth information is output directly by the 3D camera without additional processing, the complexity of image processing can be reduced to a certain extent and recognition efficiency can be improved. In addition, the 3D camera outputs depth information and color information almost simultaneously on the time axis, which makes the recognition result more accurate when the two are combined to recognize a circular traffic light. No sample collection or machine learning is needed, which simplifies the process of identifying circular traffic lights and improves the efficiency and accuracy of identification.


Description

Method and device for identifying circular traffic light and vehicle
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for identifying a circular traffic light, and a vehicle.
Background
With the continuous development of science and technology, intelligent vehicle driving is gradually turning from imagination into reality, and algorithms for detecting, identifying and tracking traffic signal lights provide great help for assisted driving and even unmanned driving.
Current traffic signal light identification methods use a single 2D camera to acquire images and then rely on image processing technologies such as pattern matching and machine learning for recognition. To obtain an accurate result, these methods place extremely high demands on image quality as well as strict requirements on algorithm complexity, yet the recognition results remain unsatisfactory. This is mainly because the quality of images acquired by a 2D camera is limited to a certain extent and is greatly affected by external interference, which directly degrades the recognition result.
Disclosure of Invention
The purpose of the present disclosure is to provide a method, an apparatus and a vehicle for identifying a circular traffic light, so as to simplify the process of identifying the circular traffic light and improve the efficiency and accuracy of identifying the circular traffic light.
To achieve the above object, the present disclosure provides a method of identifying a circular traffic light, the method comprising:
a circular target area with any color characteristic of red, green or yellow is segmented from an image with depth information acquired by a 3D camera;
extracting a target depth image meeting a preset depth threshold range from an image with depth information acquired by a 3D camera;
comparing the circular target area with the target depth image to locate a rectangular background frame from the target depth image;
determining that the circular target area contained in the rectangular background frame is a circular traffic light according to the size relation between the area of the rectangular background frame and the area of the circular target area contained in the rectangular background frame;
determining the type of the circular traffic light in combination with the color characteristics of the circular traffic light.
Optionally, determining that the circular target area included in the rectangular background frame is a circular traffic light according to a size relationship between the area of the rectangular background frame and the area of the circular target area included in the rectangular background frame, includes:
determining the ratio of the area of the rectangular background frame to the area of a circular target area contained in the rectangular background frame;
if the ratio accords with the preset range, the rectangular background frame is a circular traffic light background frame, and a circular target area contained in the rectangular background frame is a circular traffic light.
Optionally, segmenting a circular target region with any color characteristic of red, green or yellow from an image with depth information acquired by the 3D camera, including:
segmenting a target area with any color characteristic of red, green or yellow from an image with depth information acquired by a 3D camera;
filtering out a non-circular area according to the length-width ratio of the target area;
judging whether the filtered Hu characteristics of the target area are matched with the Hu characteristics of the circular traffic light template or not;
and extracting a successfully matched region from the filtered target region.
Optionally, the method further comprises:
carrying out histogram equalization on an image with depth information acquired by a 3D camera;
carrying out noise reduction processing on the image subjected to histogram equalization;
performing morphological processing on the image subjected to noise reduction processing;
the method for segmenting the target area with any color characteristic of red, green or yellow from the image with the depth information acquired by the 3D camera comprises the following steps:
performing color segmentation on the image with the depth information acquired by the 3D camera;
and comparing the image after the morphological processing with the image after the color segmentation to obtain the target area.
Optionally, comparing the circular target area and the target depth image to locate a rectangular background frame from the target depth image, including:
comparing the circular target area with the target depth image to extract a background area containing the circular target area from the target depth image;
and positioning a rectangular background frame from the background area according to the rectangularity of the background area.
The present disclosure also provides an apparatus for recognizing a circular traffic light, the apparatus including:
the circular target area segmentation module is used for segmenting a circular target area with any color characteristic of red, green or yellow from an image with depth information acquired by the 3D camera;
the target depth image extraction module is used for extracting a target depth image meeting a preset depth threshold range from an image with depth information acquired by the 3D camera;
the rectangular background frame positioning module is used for comparing the circular target area with the target depth image so as to position a rectangular background frame from the target depth image;
the round traffic light determining module is used for determining that the round target area contained in the rectangular background frame is a round traffic light according to the size relation between the area of the rectangular background frame and the area of the round target area contained in the rectangular background frame;
a circular traffic light type determination module to determine the type of the circular traffic light in conjunction with color characteristics of the circular traffic light.
Optionally, the circular traffic light determination module comprises:
the area ratio determining submodule is used for determining the ratio of the area of the rectangular background frame to the area of a circular target area contained in the rectangular background frame;
and the round traffic light determination submodule is used for determining that the rectangular background frame is a round traffic light background frame if the ratio accords with the preset range, and the round target area contained in the rectangular background frame is a round traffic light.
Optionally, the circular target region segmentation module includes: the target area determining submodule is used for segmenting a target area which is smaller than a preset threshold and has any color characteristic of red, green or yellow from an image with depth information acquired by the 3D camera;
the filtering submodule is used for filtering out a non-circular area according to the length-width ratio of the target area;
the judging submodule is used for judging whether the filtered Hu characteristics of the target area are matched with the Hu characteristics of the circular traffic light template or not;
and the first extraction submodule is used for extracting the successfully matched region from the filtered target region.
Optionally, the apparatus further comprises: the histogram equalization module is used for performing histogram equalization on the image with the depth information acquired by the 3D camera;
the noise reduction module is used for carrying out noise reduction processing on the image subjected to histogram equalization;
the morphological processing module is used for carrying out morphological processing on the image subjected to the noise reduction processing;
the target area determination submodule includes:
the segmentation submodule is used for carrying out color segmentation on the image with the depth information acquired by the 3D camera;
and the comparison submodule is used for comparing the image after the morphological processing with the image after the color segmentation so as to obtain the target area.
Optionally, the rectangular background frame positioning module includes:
a background region extraction sub-module, configured to compare the circular target region with the target depth image to extract a background region including the circular target region from the target depth image;
and the rectangular background frame positioning sub-module is used for positioning the rectangular background frame from the background area according to the rectangularity of the background area.
The present disclosure also provides a vehicle, comprising:
the 3D camera is used for acquiring an image with depth information; and
an apparatus for identifying a round traffic light is provided according to the present disclosure.
In the present disclosure, a circular traffic light is identified based on an image with depth information acquired by a 3D camera. Because the depth imaging principle of the 3D camera is not influenced by natural illumination, the identification under different illumination conditions such as day and night can be more accurate in the process of identifying the round traffic light. And because the depth information is directly output by the 3D camera, additional processing is not needed, the complexity of image processing can be reduced to a certain extent, and the recognition efficiency is improved. In addition, the output of the depth information and the color information by the 3D camera is almost consistent on the time axis, so that the recognition result is more accurate when the circular traffic light is recognized by combining the two. The method has the advantages that samples do not need to be collected and machine learning is not needed, the process of identifying the round traffic lights is simplified, and the efficiency and the accuracy of identifying the round traffic lights are improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a method of identifying a circular traffic light in accordance with an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating the matching of Hu features according to an exemplary embodiment.
FIG. 3 is a schematic diagram of a circle determined according to the Hu feature matching method.
Fig. 4 is a diagram illustrating extraction of a target depth image according to an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating an apparatus for identifying a circular traffic light in accordance with an exemplary embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
In the related art, circular traffic lights are identified based on images acquired by a 2D camera, and the efficiency and accuracy of identification are low because the quality of images acquired by a 2D camera is limited to a certain extent, is greatly affected by external interference, and because of the complexity of the algorithms involved. To solve this technical problem, the present disclosure provides a method, an apparatus, and a vehicle for identifying a circular traffic light, so as to simplify the process of identifying the circular traffic light and improve the efficiency and accuracy of identification. The method, the apparatus and the vehicle provided by the present disclosure are explained below.
Referring to fig. 1, fig. 1 is a flow chart illustrating a method of identifying a circular traffic lamp according to an exemplary embodiment. As shown in fig. 1, the method comprises the steps of:
step S11: a circular target area with any color characteristic of red, green or yellow is segmented from an image with depth information acquired by a 3D camera;
step S12: extracting a target depth image meeting a preset depth threshold range from an image with depth information acquired by a 3D camera;
step S13: comparing the circular target area with the target depth image to locate a rectangular background frame from the target depth image;
step S14: determining that the circular target area contained in the rectangular background frame is a circular traffic light according to the size relation between the area of the rectangular background frame and the area of the circular target area contained in the rectangular background frame;
step S15: determining the type of the circular traffic light in combination with the color characteristics of the circular traffic light.
The present disclosure proposes to identify a circular traffic light based on an image with depth information acquired by a 3D camera. Because the depth imaging principle of the 3D camera is not influenced by natural illumination, the identification under different illumination conditions such as day and night can be more accurate in the process of identifying the round traffic light. And because the depth information is directly output by the 3D camera, additional processing is not needed, the complexity of image processing can be reduced to a certain extent, and the recognition efficiency is improved. In addition, the output of the depth information and the color information by the 3D camera is almost consistent on the time axis, so that the recognition result is more accurate when the circular traffic light is recognized by combining the two.
In practical application, the 3D camera may be mounted on the body of an automobile. One possible mounting manner is to arrange the image acquisition device on the front windshield of the vehicle body, opposite the interior rearview mirror. In this way, as the automobile moves forward, the color image and the depth image can be collected in real time by the 3D camera and the circular traffic light can be identified in real time, providing a reference for the driver to plan the driving route and speed and helping to ensure driving safety.
Optionally, the upward or downward tilt angle of the 3D camera may be calibrated according to the images it acquires in real time. In this way, the area and data volume of image processing can be reduced, and the influence of some other light sources (such as some automobile tail lights) can also be reduced.
In the present disclosure, the whole image collected by the 3D camera may be processed, or only an upper region of the image may be selected for processing, so as to reduce the area and data volume of image processing.
The processing of the image with depth information acquired by the 3D camera includes processing of the color information and processing of the depth information. The two processes are relatively independent, so there is no fixed execution order between them; they can be carried out sequentially or in parallel, as sketched below.
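Since the two branches are independent, they can be dispatched concurrently. The minimal sketch below uses Python (an assumed implementation language; the disclosure does not prescribe one), and the names run_branches_in_parallel, color_branch and depth_branch are illustrative placeholders for steps S11 and S12 rather than identifiers from the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def run_branches_in_parallel(color_bgr: np.ndarray,
                             depth_m: np.ndarray,
                             color_branch,    # callable implementing step S11
                             depth_branch):   # callable implementing step S12
    """Run the color branch (S11) and the depth branch (S12) concurrently.

    Both results are needed before the comparison in step S13, so the caller
    waits for the two futures before continuing.
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        color_future = pool.submit(color_branch, color_bgr)
        depth_future = pool.submit(depth_branch, depth_m)
        return color_future.result(), depth_future.result()
```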
On the one hand, by performing step S11, color information in an image with depth information acquired by the 3D camera is processed. The method comprises the following steps:
segmenting a target area with any color characteristic of red, green or yellow from an image with depth information acquired by a 3D camera;
filtering out a non-circular area according to the length-width ratio of the target area;
judging whether the filtered Hu characteristics of the target area are matched with the Hu characteristics of the circular traffic light template or not;
and extracting a successfully matched region from the filtered target region.
There are many possible embodiments of obtaining the target region, which are described below.
A first possible implementation of obtaining the target area is to use the Lab color space for image segmentation. First, the image with depth information acquired by the 3D camera is converted into the Lab color space. Then, taking the a and b values of all possible light colors of a circular traffic light as thresholds, the image is divided into two parts: a region that may be a circular traffic light and a background region, where the a and b values of the pixels in the possible traffic light region fall within the threshold range.
By way of example, all possible light colors of a round traffic light are red, green and yellow. Wherein, the threshold range of green is: -50 < a < -8 and 15 < b < 80; the threshold range for red is: 15 < a < 110 and 15 < b < 60; the threshold range for yellow is: 1 < a < 16 and 25 < b < 60. How to determine the ranges of the values a and b of the three colors of red, green and yellow can refer to the related art, and will not be described herein again.
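As a concrete illustration of this first implementation, the sketch below uses Python with OpenCV (an assumption; the disclosure names no library) and the a/b ranges quoted above. OpenCV's 8-bit Lab representation stores a and b offset by 128, so the signed ranges are shifted before thresholding.

```python
import cv2
import numpy as np

# a/b threshold ranges quoted in the description, in signed CIELAB units.
LAB_RANGES = {
    "green":  ((-50, -8), (15, 80)),
    "red":    ((15, 110), (15, 60)),
    "yellow": ((1, 16),   (25, 60)),
}


def segment_lab(color_bgr: np.ndarray) -> dict:
    """Return one binary mask per candidate lamp color using Lab thresholds."""
    lab = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2LAB)
    masks = {}
    for name, ((a_lo, a_hi), (b_lo, b_hi)) in LAB_RANGES.items():
        # OpenCV stores 8-bit a/b as value + 128, so shift the signed ranges.
        lower = np.array([0, a_lo + 128, b_lo + 128], dtype=np.uint8)
        upper = np.array([255, a_hi + 128, b_hi + 128], dtype=np.uint8)
        masks[name] = cv2.inRange(lab, lower, upper)
    return masks
```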
A second possible implementation of obtaining the target area is to perform color segmentation using the hue value of each pixel, in order to reduce the influence of illumination conditions such as sunlight on the image processing result. Taking the hue values of all possible light colors of a circular traffic light as thresholds, the image can be divided into two parts: a region that may be a circular traffic light and a background region, where the hue values of the pixels in the possible traffic light region fall within the threshold range.
By way of example, all possible light colors of a round traffic light are red, green and yellow. The hue value range of red is less than 6 or greater than 244; the hue value range of green is between 81 and 130; and the hue value range of yellow is between 21 and 46. How to determine the hue value ranges of the three colors can refer to the related art and is not described again here.
Considering that H (hue) and V (brightness) in the HSV color space have independence, an image with depth information acquired by a 3D camera may be converted into the HSV color space, and a target area may be obtained using the hue value range.
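A corresponding sketch of this hue-based segmentation is shown below (again Python with OpenCV, an assumption). The hue ranges quoted above appear to assume a 0-255 hue scale, while OpenCV's H channel spans 0-179, so the hue channel is rescaled before thresholding; the rescaling and the handling of red's wrap-around are implementation choices, not details from the disclosure.

```python
import cv2
import numpy as np

# Hue ranges from the description, assumed to be on a 0-255 hue scale.
HUE_RANGES_255 = {
    "red":    [(0, 5), (245, 255)],   # "less than 6 or greater than 244"
    "green":  [(81, 130)],
    "yellow": [(21, 46)],
}


def segment_hue(color_bgr: np.ndarray) -> dict:
    """Return one binary mask per candidate lamp color using hue thresholds."""
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    # Rescale OpenCV's 0-179 hue channel to the 0-255 scale used by the thresholds.
    hue255 = (hsv[:, :, 0].astype(np.uint16) * 255 // 179).astype(np.uint8)
    masks = {}
    for name, ranges in HUE_RANGES_255.items():
        mask = np.zeros(hue255.shape, dtype=np.uint8)
        for lo, hi in ranges:
            mask |= ((hue255 >= lo) & (hue255 <= hi)).astype(np.uint8) * 255
        masks[name] = mask
    return masks
```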
Optionally, a third possible implementation of obtaining the target area includes the following steps:
carrying out histogram equalization on an image with depth information acquired by a 3D camera;
carrying out noise reduction processing on the image subjected to histogram equalization;
performing morphological processing on the image subjected to noise reduction processing;
performing color segmentation on the image with the depth information acquired by the 3D camera;
and comparing the image after the morphological processing with the image after the color segmentation to obtain the target area.
In the present disclosure, in order to enhance color information in an image with depth information collected by a 3D camera and further highlight colors of a circular traffic light, histogram equalization may be performed on the image with depth information collected by the 3D camera.
After histogram equalization, color information in an image with depth information acquired by a 3D camera is richer, but noise in the image may be amplified, so that noise reduction processing may be performed on the image after histogram equalization. Alternatively, considering that the image with depth information acquired by the 3D camera itself contains noise, the image with depth information acquired by the 3D camera may be subjected to noise reduction processing.
In order to reserve the overall characteristics of the image with the depth information acquired by the 3D camera, denoising can be performed by adopting a Gaussian smoothing method, and after denoising processing, the overall characteristics of the image with the depth information acquired by the 3D camera are stored more completely.
Considering that the area of a circular traffic light is small and its color differs strongly from the surrounding areas, the image with depth information acquired by the 3D camera, the image after histogram equalization, or the image after noise reduction may first be converted into a gray-scale image, and the resulting gray-scale image may then be subjected to morphological processing, such as a top-hat operation (TopHat), to filter out large areas and dark areas with the same or similar colors while keeping small bright areas. In this way a large amount of background information in the color information is filtered out, which saves computation in the subsequent color segmentation operation.
After morphological processing, an OTSU (maximum inter-class variance method) can be adopted to calculate a binarization threshold value, and then the calculated binarization threshold value is used for binarization processing, so that an area which may be a round traffic light is separated.
In practical applications, the histogram equalization step, the noise reduction step, and the binarization step are optional; any one, several, all, or none of them may be executed, and whether to execute them can be chosen according to the requirements on recognition efficiency and accuracy. Where they are enabled, the chain can look like the sketch below.
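A sketch of the full chain, assuming Python with OpenCV, is given below; the blur kernel and structuring-element sizes are illustrative choices rather than values from the disclosure, and equalizing only the luminance channel is likewise an assumption.

```python
import cv2
import numpy as np


def preprocess_for_small_bright_regions(color_bgr: np.ndarray) -> np.ndarray:
    """Equalize, denoise, apply a top-hat, and binarize with OTSU.

    Returns a binary image in which small bright regions (lamp candidates)
    are white and large or dark background structures are suppressed.
    """
    # Histogram equalization on the luminance channel to enhance contrast.
    y, cr, cb = cv2.split(cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb))
    y = cv2.equalizeHist(y)
    equalized = cv2.cvtColor(cv2.merge((y, cr, cb)), cv2.COLOR_YCrCb2BGR)

    # Gaussian smoothing to suppress the noise amplified by equalization.
    denoised = cv2.GaussianBlur(equalized, (5, 5), 0)   # kernel size is illustrative

    # Convert to gray and apply a top-hat to keep only small bright areas.
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))  # illustrative
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)

    # OTSU picks the binarization threshold automatically.
    _, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```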
The color segmentation of the image with the depth information acquired by the 3D camera may refer to the first implementation manner or the second implementation manner of acquiring the target area, and other possible implementation manners, and no matter which implementation manner is adopted, the area covered by all possible lighting colors of the circular traffic light may be separated from the image with the depth information acquired by the 3D camera.
To further determine the area where the circular traffic light is located and exclude areas that are unlikely to be a circular traffic light, the morphologically processed image and the color-segmented image may be compared, so as to separate from the morphologically processed image a region whose color and area both conform to the characteristics of a circular traffic light (i.e., the target region), while excluding regions whose color matches but whose area does not. The morphologically processed image containing the separated region is then subjected to binarization processing.
Of course, if the morphological-processed image is also subjected to binarization processing, the binarized image and the color-segmented image are compared to separate a region (i.e., a target region) having both colors and areas that meet the characteristics of a circular traffic light from the binarized image.
After the target area is obtained, it is further screened according to the shape characteristics (such as the aspect ratio) of a circular traffic light. Ideally the aspect ratio of a circular traffic light should be 1:1, so if the aspect ratio of the target area deviates greatly from this, it can be determined that the target area is not a circular traffic light. In practical applications, an aspect ratio threshold range may be set, for example 0.7 to 1.4: if the aspect ratio of the target area falls outside this range, the area is not considered to be a circular light and can be filtered out, as in the sketch below.
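The aspect-ratio filter could look like the following sketch (Python with OpenCV, an assumption), using the 0.7-1.4 range mentioned above.

```python
import cv2
import numpy as np


def filter_by_aspect_ratio(mask: np.ndarray, ratio_range=(0.7, 1.4)) -> list:
    """Keep contours whose bounding-box aspect ratio is roughly 1:1."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for contour in contours:
        _, _, w, h = cv2.boundingRect(contour)
        if h == 0:
            continue
        aspect = w / h
        # A circular lamp should be close to 1:1; anything far from that is dropped.
        if ratio_range[0] <= aspect <= ratio_range[1]:
            kept.append(contour)
    return kept
```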
After the target area is filtered, the target area with the shape characteristics conforming to the circular traffic light is further screened by adopting a Hu characteristic matching method so as to extract the circular target area. Referring to fig. 2, fig. 2 is a schematic diagram illustrating Hu feature matching according to an exemplary embodiment. The matching process is as follows:
first, the Hu characteristics of the circular traffic light template are calculated, and as shown in fig. 2, the graph numbered (a) is the circular traffic light template.
Then, for each target area whose shape characteristics conform to those of a circular traffic light, its minimum bounding rectangle is determined: the upper, lower, left and right boundary values of the area are calculated and used as the frame boundaries, giving the minimum circumscribed rectangle of the area. As shown in fig. 2, the graphs numbered (b) and (c) are the minimum circumscribed rectangles of target areas whose shape features conform to a circular traffic light.
Then, the Hu features of the minimum circumscribed rectangle are calculated and matched against the Hu features of the circular traffic light template, and whether the target area is circular is determined according to the matching degree, as in the sketch below. Referring to fig. 3, fig. 3 is a schematic diagram of a circle determined according to the Hu feature matching method.
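One way to realize this matching step is OpenCV's Hu-moment-based shape comparison, sketched below; this is an assumption, since the disclosure does not name a matching function, and the matching threshold and template size are illustrative.

```python
import cv2
import numpy as np


def make_circle_template(size: int = 64) -> np.ndarray:
    """Binary image of a filled circle, standing in for the circular lamp template."""
    template = np.zeros((size, size), dtype=np.uint8)
    cv2.circle(template, (size // 2, size // 2), size // 2 - 2, 255, thickness=-1)
    return template


def is_circular(candidate_mask: np.ndarray, max_distance: float = 0.1) -> bool:
    """Compare Hu features of a candidate region with those of the circle template.

    candidate_mask is the binary content of the candidate's minimum bounding
    rectangle; cv2.matchShapes compares Hu-moment invariants, smaller is closer.
    """
    distance = cv2.matchShapes(make_circle_template(), candidate_mask,
                               cv2.CONTOURS_MATCH_I1, 0.0)
    return distance <= max_distance   # threshold is an illustrative choice
```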
The above is a process of determining the circular target area by performing step S11 to process the color information in the image with the depth information acquired by the 3D camera. The following describes a process of executing step S12, that is, a process of processing depth information in an image with depth information acquired by a 3D camera, and further determining a target depth image.
When images are collected with the 3D camera, depth information is obtained at the same time; that is, every pixel point in the image carries distance information, and screening according to a preset depth threshold range can eliminate some interference areas. One possible setting of the preset depth threshold range is as follows: according to the GB 14886-2006 specification for the setting and installation of road traffic signal lamps, target regions that are too close or too far in the image are interference regions, while regions in a suitable range (for example, 50-200 m) constitute the target depth image to be extracted. Optionally, pixel points within the preset depth threshold range may be set to white and pixel points outside it to black.
Alternatively, to make it easier to distinguish the different objects photographed by the 3D camera, the preset range may be divided into levels. Referring to fig. 4, fig. 4 is a schematic diagram illustrating extraction of a target depth image according to an exemplary embodiment. Illustratively, 5-100 m is selected as the preset depth threshold range: pixel points with depth values within 5-100 m are set to white, and pixel points with depth values outside 5-100 m are set to black. Depths at the same level are assigned the same gray level (in FIG. 4 all in-range areas are drawn in white, and numbers are used to distinguish areas with different depth values); for example, 3 and 4 in FIG. 4 represent the front and rear cars. The level spacing can be set according to the length of a vehicle, so that comparing fig. 3 and fig. 4 can eliminate the interference caused by vehicle lamps.
Optionally, considering that some fine protrusions (such as the connection between the traffic light background frame and the traffic light pole) exist in the extracted target depth image, there is noise interference around the traffic light background frame. To eliminate this interference and smooth the outline of the traffic light background frame, a morphological opening operation can be applied to the target depth image to remove the fine protrusions. In fig. 4, the region marked X shows the traffic light background frame after the morphological opening operation, separated from the traffic light pole, with the pole part removed; a corresponding sketch follows.
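The depth-side processing, thresholding the depth map, grouping it into levels, and opening away fine protrusions, could be sketched as follows (Python with OpenCV and NumPy, an assumption; the depth map is assumed to be in meters, and the level width and kernel size are illustrative).

```python
import cv2
import numpy as np


def extract_target_depth(depth_m: np.ndarray,
                         depth_range=(5.0, 100.0),
                         level_width_m: float = 5.0) -> np.ndarray:
    """Keep pixels inside the preset depth range, grouped into depth levels.

    Pixels outside the range become 0 (black); pixels inside are labeled by
    depth level so that different objects can be told apart, as in FIG. 4.
    """
    near, far = depth_range
    in_range = (depth_m >= near) & (depth_m <= far)
    levels = np.zeros(depth_m.shape, dtype=np.uint8)
    # Level 1, 2, 3, ... for successive distance bins; 0 means outside the range.
    levels[in_range] = ((depth_m[in_range] - near) // level_width_m).astype(np.uint8) + 1
    return levels


def smooth_background_frame(level_mask: np.ndarray) -> np.ndarray:
    """Morphological opening to remove fine protrusions such as the pole joint."""
    binary = (level_mask > 0).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))  # illustrative size
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```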
After the circular target region is determined and the target depth image is determined, steps S13 and S14 are performed.
Wherein, step S13 includes: comparing the circular target area with the target depth image to extract a background area containing the circular target area from the target depth image; and positioning a rectangular background frame from the background area according to the rectangularity of the background area.
The circular target area is mapped onto the target depth image. A region of the target depth image that has no corresponding circular target area is considered an interference object (such as a road sign, a car with its lights off, and the like), and the background area is obtained after eliminating such interference objects. As shown in fig. 4, the regions labeled 5 and 6 in fig. 4 may be excluded.
The rectangularity R of the background area is then calculated; the value of R usually lies between 0 and 1 and reaches its maximum of 1 when the object is rectangular. A rectangularity threshold (e.g., 0.9) may be set: when the rectangularity of a background area is greater than the set threshold, that background area is considered a rectangular background frame. Regions that do not meet the rectangularity threshold, such as the region labeled 3 in FIG. 4, may be excluded (see the sketch below).
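A sketch of the rectangularity test (Python with OpenCV, an assumption): R is taken here as the ratio of the region's area to the area of its minimum-area bounding rectangle, which approaches 1 for rectangular regions.

```python
import cv2
import numpy as np


def rectangularity(contour: np.ndarray) -> float:
    """Ratio of contour area to the area of its minimum-area bounding rectangle."""
    region_area = cv2.contourArea(contour)
    (_, _), (w, h), _ = cv2.minAreaRect(contour)
    rect_area = w * h
    return 0.0 if rect_area == 0 else region_area / rect_area


def find_rectangular_frames(background_mask: np.ndarray,
                            threshold: float = 0.9) -> list:
    """Keep background regions whose rectangularity exceeds the threshold."""
    contours, _ = cv2.findContours(background_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if rectangularity(c) > threshold]
```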
After the rectangular background frame is located, step S14 may be performed. Step S14 includes: determining the ratio of the area of the rectangular background frame to the area of a circular target area contained in the rectangular background frame; if the ratio accords with the preset range, the rectangular background frame is a circular traffic light background frame, and a circular target area contained in the rectangular background frame is a circular traffic light.
On the one hand, the area A_MER of the minimum bounding rectangle of a circular target region is calculated; on the other hand, the area A_O of the rectangular background frame is calculated. The ratio K of the two is then computed as:

K = A_O / A_MER
Considering that under normal conditions there are 3 or 4 circular traffic lights in a circular traffic light background frame, K should be between 3 and 4.5. This makes it possible to exclude street lamps (such as the region labeled 1 in fig. 4) and cars (such as the regions labeled 4 and 5 in fig. 3, whose areas are small relative to the area of the region labeled 4 in fig. 4 and which therefore cannot be circular traffic lights), and so on.
Therefore, the preset range can be set to 3-4.5: if the ratio lies within this range, the rectangular background frame is a circular traffic light background frame, and the circular target area contained in it is a circular traffic light, as in the sketch below. In this way, both the circular traffic light background frame and the circular traffic light itself are located.
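The area-ratio check could be sketched as follows (Python with OpenCV, an assumption), computing K = A_O / A_MER and accepting candidates inside the 3-4.5 range.

```python
import cv2
import numpy as np


def is_traffic_light_frame(frame_contour: np.ndarray,
                           lamp_contour: np.ndarray,
                           k_range=(3.0, 4.5)) -> bool:
    """Accept the frame when K = A_O / A_MER falls inside the preset range.

    A_O   : area of the rectangular background frame.
    A_MER : area of the minimum bounding rectangle of the circular target area.
    """
    a_o = cv2.contourArea(frame_contour)
    _, _, w, h = cv2.boundingRect(lamp_contour)
    a_mer = float(w * h)
    if a_mer == 0:
        return False
    k = a_o / a_mer
    return k_range[0] <= k <= k_range[1]
```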
After the round traffic light is positioned, the light color of the round traffic light can be determined by combining the result of color segmentation, and then the type of the round traffic light is determined.
The present disclosure also provides a device for recognizing a circular traffic light. Referring to fig. 5, fig. 5 is a schematic diagram illustrating an apparatus for identifying a circular traffic light according to an exemplary embodiment. As shown in fig. 5, the apparatus 500 includes:
a circular target region segmentation module 501, configured to segment a circular target region with any color characteristic of red, green, or yellow from an image with depth information acquired by a 3D camera;
a target depth image extraction module 502, configured to extract a target depth image that meets a preset depth threshold range from an image with depth information acquired by a 3D camera;
a rectangular background frame positioning module 503, configured to compare the circular target area with the target depth image to locate a rectangular background frame from the target depth image;
a circular traffic light determining module 504, configured to determine, according to a size relationship between the area of the rectangular background frame and the area of a circular target region included in the rectangular background frame, that the circular target region included in the rectangular background frame is a circular traffic light;
a circular traffic light type determination module 505, configured to determine the type of the circular traffic light in combination with the color characteristics of the circular traffic light.
Optionally, the circular traffic light determination module comprises:
the area ratio determining submodule is used for determining the ratio of the area of the rectangular background frame to the area of a circular target area contained in the rectangular background frame;
and the round traffic light determination submodule is used for determining that the rectangular background frame is a round traffic light background frame if the ratio accords with the preset range, and the round target area contained in the rectangular background frame is a round traffic light.
Optionally, the circular target region segmentation module includes: the target area determining submodule is used for segmenting a target area which is smaller than a preset threshold and has any color characteristic of red, green or yellow from an image with depth information acquired by the 3D camera;
the filtering submodule is used for filtering out a non-circular area according to the length-width ratio of the target area;
the judging submodule is used for judging whether the filtered Hu characteristics of the target area are matched with the Hu characteristics of the circular traffic light template or not;
and the first extraction submodule is used for extracting the successfully matched region from the filtered target region.
Optionally, the apparatus further comprises: the histogram equalization module is used for performing histogram equalization on the image with the depth information acquired by the 3D camera;
the noise reduction module is used for carrying out noise reduction processing on the image subjected to histogram equalization;
the morphological processing module is used for carrying out morphological processing on the image subjected to the noise reduction processing;
the target area determination submodule includes:
the segmentation submodule is used for carrying out color segmentation on the image with the depth information acquired by the 3D camera;
and the comparison submodule is used for comparing the image after the morphological processing with the image after the color segmentation so as to obtain the target area.
Optionally, the rectangular background frame positioning module includes:
a background region extraction sub-module, configured to compare the circular target region with the target depth image to extract a background region including the circular target region from the target depth image;
and the rectangular background frame positioning sub-module is used for positioning the rectangular background frame from the background area according to the rectangularity of the background area.
With regard to the apparatus in the above embodiments, the specific manner in which each module and unit performs operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
In addition, the present disclosure also provides a vehicle, which may include a 3D camera for collecting images with depth information, and an apparatus for identifying a circular traffic light provided according to the present disclosure.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that, in the foregoing embodiments, various features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, various combinations that are possible in the present disclosure are not described again.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (9)

1. A method of identifying a circular traffic light, the method comprising:
a circular target area with any color characteristic of red, green or yellow is segmented from an image with depth information acquired by a 3D camera;
extracting a target depth image meeting a preset depth threshold range from an image with depth information acquired by a 3D camera;
comparing the circular target area with the target depth image to locate a rectangular background frame from the target depth image;
determining that the circular target area contained in the rectangular background frame is a circular traffic light according to the size relation between the area of the rectangular background frame and the area of the circular target area contained in the rectangular background frame;
determining the type of the round traffic light by combining the color characteristics of the round traffic light;
wherein comparing the circular target region and the target depth image to locate a rectangular background frame from the target depth image comprises:
comparing the circular target area with the target depth image to extract a background area containing the circular target area from the target depth image;
and positioning a rectangular background frame from the background area according to the rectangularity of the background area.
2. The method of claim 1, wherein determining that the circular target area contained in the rectangular background frame is a circular traffic light according to a size relationship between the area of the rectangular background frame and the area of the circular target area contained in the rectangular background frame comprises:
determining the ratio of the area of the rectangular background frame to the area of a circular target area contained in the rectangular background frame;
if the ratio accords with the preset range, the rectangular background frame is a circular traffic light background frame, and a circular target area contained in the rectangular background frame is a circular traffic light.
3. The method of claim 1, wherein segmenting a circular target region having any one of red, green, or yellow color characteristics from an image with depth information acquired by a 3D camera comprises:
segmenting a target area with any color characteristic of red, green or yellow from an image with depth information acquired by a 3D camera;
filtering out a non-circular area according to the length-width ratio of the target area;
judging whether the filtered Hu characteristics of the target area are matched with the Hu characteristics of the circular traffic light template or not;
and extracting a successfully matched region from the filtered target region.
4. The method of claim 3, further comprising:
carrying out histogram equalization on an image with depth information acquired by a 3D camera;
carrying out noise reduction processing on the image subjected to histogram equalization;
performing morphological processing on the image subjected to noise reduction processing;
the method for segmenting the target area with any color characteristic of red, green or yellow from the image with the depth information acquired by the 3D camera comprises the following steps:
performing color segmentation on the image with the depth information acquired by the 3D camera;
and comparing the image after the morphological processing with the image after the color segmentation to obtain the target area.
5. An apparatus for identifying a circular traffic light, the apparatus comprising:
the circular target area segmentation module is used for segmenting a circular target area with any color characteristic of red, green or yellow from an image with depth information acquired by the 3D camera;
the target depth image extraction module is used for extracting a target depth image meeting a preset depth threshold range from an image with depth information acquired by the 3D camera;
the rectangular background frame positioning module is used for comparing the circular target area with the target depth image so as to position a rectangular background frame from the target depth image;
the round traffic light determining module is used for determining that the round target area contained in the rectangular background frame is a round traffic light according to the size relation between the area of the rectangular background frame and the area of the round target area contained in the rectangular background frame;
the circular traffic light type determining module is used for determining the type of the circular traffic light by combining the color characteristics of the circular traffic light;
the rectangular background frame positioning module comprises:
a background region extraction sub-module, configured to compare the circular target region with the target depth image to extract a background region including the circular target region from the target depth image;
and the rectangular background frame positioning sub-module is used for positioning the rectangular background frame from the background area according to the rectangularity of the background area.
6. The apparatus of claim 5, wherein the circular traffic light determination module comprises:
the area ratio determining submodule is used for determining the ratio of the area of the rectangular background frame to the area of a circular target area contained in the rectangular background frame;
and the round traffic light determination submodule is used for determining that the rectangular background frame is a round traffic light background frame if the ratio accords with the preset range, and the round target area contained in the rectangular background frame is a round traffic light.
7. The apparatus of claim 5, wherein the circular target region segmentation module comprises:
the target area determining submodule is used for segmenting a target area which is smaller than a preset threshold and has any color characteristic of red, green or yellow from an image with depth information acquired by the 3D camera;
the filtering submodule is used for filtering out a non-circular area according to the length-width ratio of the target area;
the judging submodule is used for judging whether the filtered Hu characteristics of the target area are matched with the Hu characteristics of the circular traffic light template or not;
and the first extraction submodule is used for extracting the successfully matched region from the filtered target region.
8. The apparatus of claim 7, further comprising:
the histogram equalization module is used for performing histogram equalization on the image with the depth information acquired by the 3D camera;
the noise reduction module is used for carrying out noise reduction processing on the image subjected to histogram equalization;
the morphological processing module is used for carrying out morphological processing on the image subjected to the noise reduction processing;
the target area determination submodule includes:
the segmentation submodule is used for carrying out color segmentation on the image with the depth information acquired by the 3D camera;
and the comparison submodule is used for comparing the image after the morphological processing with the image after the color segmentation so as to obtain the target area.
9. A vehicle, characterized in that the vehicle comprises:
the 3D camera is used for acquiring an image with depth information; and
the device for identifying a circular traffic light according to any one of claims 5-8.
CN201610874308.8A | 2016-09-30 | 2016-09-30 | Method, device and vehicle for identifying circular traffic lights | Active | CN107886033B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610874308.8A (CN107886033B) | 2016-09-30 | 2016-09-30 | Method, device and vehicle for identifying circular traffic lights

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201610874308.8A (CN107886033B) | 2016-09-30 | 2016-09-30 | Method, device and vehicle for identifying circular traffic lights

Publications (2)

Publication Number | Publication Date
CN107886033A (en) | 2018-04-06
CN107886033B (en) | 2021-04-20

Family

ID=61769598

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201610874308.8A (CN107886033B, Active) | Method, device and vehicle for identifying circular traffic lights | 2016-09-30 | 2016-09-30

Country Status (1)

Country | Link
CN (1) | CN107886033B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111445511B (en)* | 2020-03-24 | 2022-08-05 | 杭州东信北邮信息技术有限公司 | Method for detecting circle in image
CN113761967B (en)* | 2020-06-01 | 2024-11-29 | 中移(苏州)软件技术有限公司 | Identification method and device
CN111967370B (en)* | 2020-08-12 | 2021-12-07 | 广州小鹏自动驾驶科技有限公司 | Traffic light identification method and device
CN112201117B (en)* | 2020-09-29 | 2022-08-02 | 深圳市优必选科技股份有限公司 | Logic board identification method and device and terminal equipment
CN112528794A (en)* | 2020-12-03 | 2021-03-19 | 北京百度网讯科技有限公司 | Signal lamp fault identification method and device and road side equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101395645A (en)* | 2006-03-06 | 2009-03-25 | 丰田自动车株式会社 | Image processing system and method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20110229023A1 (en)* | 2002-11-01 | 2011-09-22 | Tenebraex Corporation | Technique for enabling color blind persons to distinguish between various colors
CN102176287B (en)* | 2011-02-28 | 2013-11-20 | 无锡中星微电子有限公司 | Traffic signal lamp identifying system and method
CN102354457B (en)* | 2011-10-24 | 2013-10-16 | 复旦大学 | General Hough transformation-based method for detecting position of traffic signal lamp
CN103177256B (en)* | 2013-04-02 | 2016-12-28 | 上海理工大学 | Display state of traffic signal lamp recognition methods
CN103489324B (en)* | 2013-09-22 | 2015-09-09 | 北京联合大学 | A real-time dynamic traffic light detection and recognition method based on unmanned driving
CN104050447A (en)* | 2014-06-05 | 2014-09-17 | 奇瑞汽车股份有限公司 | Traffic light identification method and device
CN104021378B (en)* | 2014-06-07 | 2017-06-30 | 北京联合大学 | Traffic lights real-time identification method based on space time correlation Yu priori
CN104766046B (en)* | 2015-02-06 | 2018-02-16 | 哈尔滨工业大学深圳研究生院 | One kind is detected using traffic mark color and shape facility and recognition methods
CN105913041B (en)* | 2016-04-27 | 2019-05-24 | 浙江工业大学 | Signal lamp identification method based on pre-calibration

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101395645A (en)* | 2006-03-06 | 2009-03-25 | 丰田自动车株式会社 | Image processing system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于图像处理的交通信号灯识别方法";武莹等;《交通信息与安全》;20111231;第29卷(第3期);第51-53页第1-4节*

Also Published As

Publication number | Publication date
CN107886033A (en) | 2018-04-06

Similar Documents

Publication | Title
CN107886033B (en) | Method, device and vehicle for identifying circular traffic lights
CN107506760B (en) | Traffic signal detection method and system based on GPS positioning and visual image processing
KR101409340B1 (en) | Method for traffic sign recognition and system thereof
EP2575077B1 (en) | Road sign detecting method and road sign detecting apparatus
CN104769652B (en) | Method and system for detecting traffic lights
Gomez et al. | Traffic lights detection and state estimation using hidden Markov models
CN104670085B (en) | Track detachment alarm system
CN109215364B (en) | Traffic signal recognition method, system, device and storage medium
KR101799778B1 (en) | Method and apparatus for confirmation of relevant white inner circle in environment of circular traffic sign recognition
JP2011216051A (en) | Program and device for discriminating traffic light
CN107891808A (en) | Driving based reminding method, device and vehicle
JP2018063680A (en) | Traffic signal recognition method and traffic signal recognition apparatus
Wu et al. | Raindrop detection and removal using salient visual features
CN106709412B (en) | Traffic sign detection method and device
CN113989771A (en) | A Traffic Signal Recognition Method Based on Digital Image Processing
CN111046741A (en) | Method and device for identifying lane line
CN107886034A (en) | Driving based reminding method, device and vehicle
CN106778736A (en) | The licence plate recognition method and its system of a kind of robust
CN107886035B (en) | Method, device and vehicle for recognizing arrow traffic lights
CN108446668A (en) | Traffic signal light detection and recognition method and system based on unmanned driving platform
JP7264428B2 (en) | Road sign recognition device and its program
CN112598674B (en) | Image processing method and device for vehicle and vehicle
CN107992788B (en) | Method, device and vehicle for identifying traffic lights
CN109800693B (en) | A night-time vehicle detection method based on color channel mixing features
CN104778454A (en) | Night vehicle tail lamp extraction method based on descending luminance verification

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
