CN104517103A - Traffic sign classification method based on deep neural network - Google Patents

Traffic sign classification method based on deep neural network

Info

Publication number
CN104517103A
CN104517103A (application CN201410841539.XA)
Authority
CN
China
Prior art keywords
neural network
layer
traffic sign
method based
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410841539.XA
Other languages
Chinese (zh)
Inventor
贺庆
冷斌
官冠
胡欢
蒋东国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Institute of Advanced Technology of CAS
Original Assignee
Guangzhou Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Institute of Advanced Technology of CAS
Priority to CN201410841539.XA
Publication of CN104517103A
Legal status: Pending

Abstract

Translated from Chinese

The invention discloses a traffic sign classification method based on a deep neural network, comprising the following steps: A. a moving-object detection method based on optical flow analyses the incoming video and, when a moving object is detected, extracts a region of interest; B. the extracted region of interest is partitioned with blocks of a fixed size; C. the partitioned images are rescaled and converted into images of the same size; D. the converted images are used as input and classified with a convolutional neural network. The method extracts a region of interest from the motion-detected image, partitions it into blocks, converts the resulting images to a common size and processes them with a convolutional neural network. This avoids the problems caused by manually assuming class-conditional density functions, greatly speeds up testing and improves accuracy. As a traffic sign classification method based on a deep neural network, the invention can be widely applied in the traffic field.

Description

Translated from Chinese
A Traffic Sign Classification Method Based on a Deep Neural Network

Technical Field

The present invention relates to the traffic field, and in particular to a traffic sign classification method based on a deep neural network.

Background

With the progress of urbanization and the spread of the automobile, the number of motor vehicles has grown sharply, traffic congestion has worsened, traffic accidents occur frequently, and road safety and transport efficiency have become increasingly pressing problems. Driver-support systems based on computer vision are one of the important measures for addressing traffic safety and transport efficiency, and they are gradually being adopted in intelligent transportation systems. Research in this area falls broadly into three branches: road recognition, collision recognition and traffic sign recognition. Road recognition and collision recognition were studied earlier and have produced many good results, whereas traffic sign recognition has received less attention. Traffic signs carry much important traffic information, such as changes in the road conditions ahead, speed limits and restrictions on driving behaviour; providing this information to the driver at the right time helps the driver react promptly, ensures driving safety and avoids accidents, which makes the problem highly significant.

From the national standards for traffic signs, the following prior knowledge can be obtained. Traffic signs can be classified by colour: they generally fall into categories such as warning, prohibition, mandatory and guide signs, and each category has its own colours. The shape and size of traffic signs, as well as the characters, numbers and geometric patterns they contain, are all specified in the standards. Traffic signs are usually installed on the right side of the road, 2 to 4.5 m from the roadside. Exploiting this knowledge reduces the search space and greatly accelerates traffic sign recognition.

Difficulties in traffic sign recognition: traffic sign recognition captures images of traffic signs in outdoor natural scenes with a camera mounted on the vehicle and processes them on a computer. It is more challenging than object recognition in controlled, non-natural scenes, because many factors in natural scenes affect recognition quality and efficiency: (1) lighting conditions outdoors vary and cannot be controlled; (2) motion and vibration blur the captured images of the signs; (3) signs installed outdoors are degraded by weather, graffiti and dust; (4) although international standards exist for traffic signs, each country implements its own national standard, so an international standard cannot serve as the sample library for classification; (5) traffic sign recognition must work in a real-time environment.

In recent years, many research institutions and universities in China have entered the field of traffic sign recognition and have achieved notable results. For example:

1. A SURF-based traffic sign recognition method and system (CN103544484A) proposed by Yang Haidong et al. of Guangdong University of Technology, which improves the efficiency of traffic sign recognition;

2. An outdoor traffic sign recognition method for low-illumination scenes (CN102881160A) proposed by Cai Nian, Liang Wenzhao et al. of Guangdong University of Technology, which provides a robust, high-accuracy outdoor recognition method;

3. A traffic sign recognition method (CN102799859A) proposed by Yuan Xue, Zhang Hui et al. of Beijing Jiaotong University, which retains the invariance of SIFT features to image scale change and rotation while making the extracted features easier to discriminate by colour and spatial position, and is highly effective for signs with rich colours and varied spatial layouts;

4. A traffic sign recognition method based on sparse representation and dictionary learning (CN102024152A) proposed by Wang Donghui, Deng Xiao et al. of Zhejiang University, which uses sparse representation and probabilistic methods to classify traffic sign images and achieves a high recognition rate;

5. A multi-feature hierarchical traffic sign recognition method (CN103390167A) proposed by Sun Rui, Wang Jizhen et al. of Chery Automobile Co., Ltd., which uses colour-based detection to address the low accuracy and poor real-time performance of traffic sign recognition.

In short, traffic sign recognition in the prior art generally comprises two modules, detection and classification. The detection stage typically uses the colour or shape features of traffic signs to locate regions that may contain a sign and then normalises the size of the regions of interest; the classification stage then verifies the validity of each candidate region and identifies the meaning of the sign.

Detection methods fall into two categories: colour-based and shape-based. Colour-based detection: colour information is invariant to size and viewing angle and is highly separable, so it is very important for detecting traffic signs and is used in almost every traffic sign recognition system. Colour-based detection is the most basic approach; it segments the typical colours of traffic signs in the captured image to locate regions of interest. It can be further divided into three classes:

(1) Colour threshold segmentation. In this class of algorithm the choice of colour space is important. The most intuitive choice is the RGB space, segmenting directly with preset thresholds, as in the sketch below.
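As an illustration of this class of method only, the following minimal sketch segments red-dominant pixels directly in RGB space with fixed thresholds; the helper name red_sign_mask and the threshold values are assumptions for illustration, not parameters taken from the patent.

```python
import cv2
import numpy as np

def red_sign_mask(bgr_image, min_red=120, max_other=90):
    """Crude RGB-threshold segmentation for red-rimmed signs.

    Thresholds are illustrative; a practical system would tune them for its
    camera and lighting, or work in a colour space such as HSV instead.
    """
    b, g, r = cv2.split(bgr_image)
    mask = (r > min_red) & (g < max_other) & (b < max_other)
    return mask.astype(np.uint8) * 255  # binary mask of candidate sign pixels
```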

(2) Methods based on neural network learning. To overcome the nonlinearity of the colour-space transformation and the influence of noise, a learning-based approach can be used. Because these methods train offline and detect online, they run in real time and generalise to some extent, which reduces the influence of noise. Their drawback is that the architecture of the network and the number of hidden nodes and layers depend on the representativeness of the training set, and building a database that covers all conditions is not easy.

(3) Methods based on visual models. To cope with varying visual conditions, traffic signs can be detected with a visual model, an approach that has been applied in many projects. Such model-based methods take human visual characteristics and environmental conditions into account and work to some extent, but their parameters must be tuned to the environment, which is complicated, and they pay little attention to occlusion or defaced signs. Shape-based methods: although colour-based detection focuses directly on candidate regions, illumination and weather changes mean that colour alone cannot locate traffic signs accurately. Shape-based methods built on image gradients, developed in robotics scene analysis, 3D object recognition and part localisation in CAD databases, are insensitive to illumination and have therefore attracted attention in traffic sign detection research. Combining colour-based and shape-based methods is the most suitable strategy for traffic sign detection, and most current shape-based detectors are built on top of colour-based detection. For traffic sign recognition, shape-based methods divide into edge-contour methods and template-matching methods. Edge-contour methods are the most basic; many mature edge extraction techniques are available, and analysis is then performed on the extracted edges. The drawback of the above methods is that it is difficult to obtain both high classification accuracy and fast detection at the same time.

Summary of the Invention

To solve the above technical problems, the object of the present invention is to provide a traffic sign classification method based on a deep neural network that is both accurate and fast.

The technical solution adopted by the present invention is a traffic sign classification method based on a deep neural network, comprising the following steps:

A. A moving-object detection method based on optical flow analyses the incoming video and, when a moving object is detected, extracts a region of interest;

B. The extracted region of interest is partitioned with blocks of a fixed size;

C. The partitioned images are rescaled and converted into images of the same size;

D. The converted images are used as input and classified with a convolutional neural network.

Further, step B specifically comprises:

B1. Partitioning the extracted region of interest with a block of fixed size to obtain block images;

B2. Shifting the fixed-size block by one pixel and partitioning the extracted region of interest again to obtain further block images;

B3. Repeating step B2 to obtain multiple block images.

Further, the fixed-size block in step B has size N×N, where N is between 50 and 70.

Further, the size of the converted images in step C is 32×32.

Further, the convolutional neural network in step D comprises 7 layers, in order: a first convolutional layer, a first downsampling layer, a second convolutional layer, a second downsampling layer, a third convolutional layer, a feature vector layer and an output layer.

Further, the first convolutional layer comprises 6 feature maps of size 28×28, the first downsampling layer comprises 6 feature maps of size 14×14, the second convolutional layer comprises 16 feature maps of size 10×10, the second downsampling layer comprises 16 feature maps of size 5×5, and the third convolutional layer comprises 300 neurons.

Further, the output layer comprises 43 labels, and the 300 neurons of the third convolutional layer are fully connected to every label of the output layer.

The beneficial effects of the present invention are as follows: the method extracts a region of interest from the motion-detected image, partitions it into blocks, converts the resulting images to a common size and processes them with a convolutional neural network. This avoids the problems caused by manually assuming class-conditional density functions, greatly speeds up testing and improves accuracy.

Brief Description of the Drawings

Fig. 1 is a flow chart of the steps of the method of the present invention;

Fig. 2 is a schematic diagram of the layers of the neural network in the method of the present invention;

Fig. 3 is a schematic diagram of the convolution process in the method of the present invention.

Detailed Description

Specific embodiments of the present invention are further described below with reference to the accompanying drawings.

Referring to Fig. 1, a traffic sign classification method based on a deep neural network comprises the following steps:

A. A moving-object detection method based on optical flow analyses the incoming video and, when a moving object is detected, extracts a region of interest (a sketch of this step follows the list below);

B. The extracted region of interest is partitioned with blocks of a fixed size;

C. The partitioned images are rescaled and converted into images of the same size;

D. The converted images are used as input and classified with a convolutional neural network.
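The patent gives no implementation of step A, so the following is only a rough sketch of one way to realise it with dense optical flow in OpenCV; the function name detect_moving_roi, the Farneback parameters and both thresholds are assumptions.

```python
import cv2
import numpy as np

def detect_moving_roi(prev_gray, cur_gray, mag_thresh=2.0, min_pixels=500):
    """Step A (sketch): compute dense optical flow between two consecutive
    grayscale frames; if enough pixels move, return the bounding box of the
    moving region as the region of interest, otherwise None."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = mag > mag_thresh
    if moving.sum() < min_pixels:        # too little motion: no moving object
        return None
    ys, xs = np.nonzero(moving)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```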

Convolutional neural networks (CNNs) are a type of artificial neural network and have become a research focus in speech analysis and image recognition. Their weight-sharing structure makes them more similar to biological neural networks, reduces the complexity of the network model and reduces the number of weights. The network structure is shown in Fig. 2.

Further, as a preferred embodiment, step B specifically comprises:

B1. Partitioning the extracted region of interest with a block of fixed size to obtain block images;

B2. Shifting the fixed-size block by one pixel and partitioning the extracted region of interest again to obtain further block images;

B3. Repeating step B2 to obtain multiple block images.

Further, as a preferred embodiment, the fixed-size block in step B has size N×N, where N is between 50 and 70.

Further, as a preferred embodiment, the size of the converted images in step C is 32×32 (see the sketch below).
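A minimal sketch of steps B and C under these preferred values (N = 60 within the stated 50 to 70 range, one-pixel shifts, 32×32 output); the generator name blocks_from_roi and the use of OpenCV for resizing are assumptions.

```python
import cv2

def blocks_from_roi(roi, block=60, stride=1, out_size=32):
    """Slide an N x N window over the region of interest, shifting one pixel
    at a time (steps B1-B3), and rescale every block to out_size x out_size
    (step C) so it can be fed to the convolutional network."""
    h, w = roi.shape[:2]
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            patch = roi[y:y + block, x:x + block]
            yield cv2.resize(patch, (out_size, out_size))
```

With a one-pixel stride the number of blocks grows quadratically with the size of the region of interest, so a coarser stride is a common practical compromise.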

Referring to Fig. 2, further as a preferred embodiment, the convolutional neural network in step D comprises 7 layers, in order: the first convolutional layer C1, the first downsampling layer S2, the second convolutional layer C3, the second downsampling layer S4, the third convolutional layer C5, the feature vector layer F6 (not shown in Fig. 2) and the output layer.

Referring to Fig. 3, the convolution process is as follows: a trainable filter fx convolves an input image (in the first stage the input image itself, in later stages a convolutional feature map), and a bias bx is added to obtain the convolutional layer Cx. The subsampling process is as follows: each 2×2 neighbourhood of four pixels is summed into one value, weighted by a scalar Wx+1, a bias bx+1 is added, and the result is passed through a sigmoid activation function, producing a feature map Sx+1 roughly one quarter of the size. The mapping from one plane to the next can therefore be regarded as a convolution operation, and the downsampling layer can be regarded as a blurring filter that performs secondary feature extraction. The spatial resolution decreases from one hidden layer to the next while the number of planes per layer increases, which allows more feature information to be detected.

A convolution kernel of fixed size perceives each neuron (i.e. each pixel) of the input image; after convolution, feature maps are produced in layer C1. Each group of four pixels in a feature map is then summed, weighted and biased, and passed through a sigmoid function to obtain the feature maps of layer S2; these feature maps are convolved again to obtain layer C3. The hierarchy then produces S4 from C3 in the same way that S2 was produced from C1. Every feature map of S4 is connected to every neuron of convolutional layer C5, which helps prevent overfitting. Finally, the values are rasterised in the feature vector layer F6 and concatenated into a vector that is fed to a conventional neural network to obtain the output.

In general, the C layers are feature-extraction (convolutional) layers: a convolution kernel consisting of weights perceives each feature map of the previous layer, extracting image features and producing the feature maps of that convolutional layer. The S layers are downsampling layers: each computational layer of the network consists of several feature maps, each feature map is a plane, and all neurons in a plane share the same weights. The feature-mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, giving the feature maps shift invariance. In particular, the convolution kernels used within each layer are identical, which achieves weight sharing and greatly reduces the complexity of the whole network.
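The subsampling operation described above (sum each 2×2 neighbourhood, multiply by one trainable coefficient per feature map, add a trainable bias, apply a sigmoid) can be sketched as a small module; the class name SumPoolSubsample is an assumption, and PyTorch is only a convenient stand-in for whatever framework an implementation might use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SumPoolSubsample(nn.Module):
    """LeNet-style S layer: sum of each non-overlapping 2x2 neighbourhood,
    scaled by a trainable coefficient and shifted by a trainable bias
    (one of each per feature map), followed by a sigmoid."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        summed = F.avg_pool2d(x, kernel_size=2) * 4.0  # average * 4 = sum of the 2x2 window
        return torch.sigmoid(summed * self.weight + self.bias)
```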

The convolutional neural network of the present invention has 7 layers in total (not counting the input layer). Every layer contains trainable parameters (connection weights), and every layer has multiple feature maps; each feature map extracts one kind of feature from its input through a convolution kernel, and each feature map contains multiple neurons. In the present invention, the input image is set to a size of 32×32.

Layer C1 is a convolutional layer composed of 6 feature maps. Each neuron of a feature map is connected to a 5×5 neighbourhood of the input. The feature maps are 28×28 in size; layer C1 has (28×28+1)×6 = 4710 trainable parameters (weights and biases) and 5×5×6×32×32 = 153600 connections with the input layer.

Layer S2 is a downsampling layer with 6 feature maps of size 14×14. Each unit of a feature map is connected to a 2×2 neighbourhood of the corresponding feature map in C1. The 4 inputs of each unit of S2 are summed, multiplied by a trainable coefficient and added to a trainable bias, and the result is passed through a sigmoid function. The trainable coefficient and bias control the degree of nonlinearity of the sigmoid. The 2×2 receptive fields do not overlap, so each feature map of S2 is 1/4 the size of the corresponding feature map of C1 (1/2 in each of the row and column directions). Layer S2 has (14×14+1)×6 = 1020 trainable parameters and 6×28×28×5×5 = 117600 connections with C1.

Layer C3 is also a convolutional layer. It convolves layer S2 with 5×5 kernels, giving feature maps of only 10×10 neurons; each feature map corresponds to one kernel, so there are 16 different kernels. Note that each feature map of C3 is connected to all 6, or to several, of the feature maps of S2, meaning that the feature maps of this layer are different combinations of the feature maps extracted by the previous layer.

Layer S4 is a downsampling layer composed of 16 feature maps of size 5×5. Each unit of a feature map is connected to a 2×2 neighbourhood of the corresponding feature map of C3, in the same way as the connection between C1 and S2. Layer S4 has 16×5×5+16 = 416 trainable parameters and 10×10×5×5×16 = 65000 connections with C3.

Finally, layer S4 is fully connected to a convolutional layer composed of individual neurons (100 neurons are used in this experiment); every feature map of S4 is fully connected to every neuron of this convolutional layer. The 300 neurons of convolutional layer C5 are then fully connected to every label of the output layer; the purpose of adding a convolutional layer here is to prevent overfitting. The output Hw,b(X) is finally obtained from the output layer.

Further, as a preferred embodiment, the first convolutional layer comprises 6 feature maps of size 28×28, the first downsampling layer comprises 6 feature maps of size 14×14, the second convolutional layer comprises 16 feature maps of size 10×10, the second downsampling layer comprises 16 feature maps of size 5×5, and the third convolutional layer comprises 300 neurons.

Further, as a preferred embodiment, the output layer comprises 43 labels, and the 300 neurons of the third convolutional layer are fully connected to every label of the output layer.
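Putting the preferred layer sizes together gives a LeNet-style network. The sketch below reuses the SumPoolSubsample module from the earlier sketch; the single grayscale input channel, the absence of extra activations on the C layers, and the treatment of the feature vector layer F6 as a plain flattening of C5's 300 outputs are all assumptions, since the patent specifies none of them.

```python
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    """Sketch of the 7-layer network: C1 (6@28x28), S2 (6@14x14),
    C3 (16@10x10), S4 (16@5x5), C5 (300 neurons), F6, output (43 labels)."""
    def __init__(self, num_classes=43):
        super().__init__()
        self.c1 = nn.Conv2d(1, 6, kernel_size=5)      # 32x32 input -> 6@28x28
        self.s2 = SumPoolSubsample(6)                 # -> 6@14x14
        self.c3 = nn.Conv2d(6, 16, kernel_size=5)     # -> 16@10x10
        self.s4 = SumPoolSubsample(16)                # -> 16@5x5
        self.c5 = nn.Conv2d(16, 300, kernel_size=5)   # -> 300@1x1, fully connected to S4
        self.out = nn.Linear(300, num_classes)        # 300 neurons -> 43 labels

    def forward(self, x):                             # x: (batch, 1, 32, 32)
        x = self.s2(self.c1(x))                       # C1 then S2 (sum-pool + sigmoid)
        x = self.s4(self.c3(x))                       # C3 then S4
        x = self.c5(x)                                # C5
        f6 = x.flatten(1)                             # feature vector layer F6
        return self.out(f6)                           # output layer
```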

The convolutional neural network of the present invention comprises two main parts: a training process and a testing process.

The mainstream use of neural networks for pattern recognition is supervised learning; unsupervised learning is used more for cluster analysis. In supervised pattern recognition, because the class of every sample is known, the sample space is not partitioned according to the natural distribution of the samples but according to the spatial distribution of samples within a class and the separation between classes: an appropriate partition of the space, or a classification boundary, is sought so that samples of different classes lie in different regions. This requires a long and complex learning process that continually adjusts the position of the classification boundary so that as few samples as possible fall into regions of another class.

A convolutional network is essentially an input-to-output mapping. It can learn a large number of mappings between inputs and outputs without any precise mathematical expression relating them; once trained on known patterns, the network can map input-output pairs. The convolutional network is trained with supervision, so its sample set consists of vector pairs of the form (input vector, ideal output vector). All of these pairs should come from the actual operation of the system that the network is to simulate, and they can be collected from a running system. Before training starts, all weights should be initialised with different small random numbers, for example random numbers distributed in [0, 1]. "Small" ensures that the network does not saturate because of overly large weights, which would cause training to fail; "different" ensures that the network can learn at all. In fact, if the weight matrices were initialised with identical values, the resulting symmetry would make the kernels of every layer identical and the network would be unable to learn.
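One simple way to follow this initialisation advice in the sketched network; the uniform range [0, 0.05] is an assumption standing in for "small random numbers".

```python
import torch.nn as nn

def init_small_random(module):
    """Give every weight a small, distinct random value and zero the biases."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.uniform_(module.weight, 0.0, 0.05)
        nn.init.zeros_(module.bias)

# model = TrafficSignCNN()
# model.apply(init_small_random)
```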

The training algorithm is similar to the traditional BP algorithm. It consists of 4 main steps, divided into two stages:

Stage one, forward propagation:

a) Take a sample (X, Yp) from the sample set and feed X into the network;

b) Compute the corresponding actual output Op.

In this stage, information is transformed step by step from the input layer to the output layer; this is also the process the network performs in normal operation after training. In this process the network computes (in effect, the input is multiplied by the kernel of each layer in turn to obtain the final output): Op = Fn(…(F2(F1(Xp·W(1))·W(2))…)·W(n)).

Stage two, backward propagation:

a) Compute the cost function J(W, b) = (1/2)·||Op − Yp||²;

b) Back-propagate and adjust the weight matrices so as to minimise the error.
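A minimal training-loop sketch of the two stages, using the squared-error cost J(W, b) = (1/2)·||Op − Yp||² given above; plain SGD, the learning rate and one-hot target vectors over the 43 labels are assumptions, not details taken from the patent.

```python
import torch

def train_epoch(model, loader, lr=0.01):
    """One epoch of forward propagation followed by back-propagation that
    minimises J(W, b) = 1/2 * ||O_p - Y_p||^2 over (X_p, Y_p) pairs."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for x_p, y_p in loader:              # y_p: one-hot vector over the 43 labels
        optimizer.zero_grad()
        o_p = model(x_p)                 # stage one: forward propagation
        loss = 0.5 * ((o_p - y_p) ** 2).sum(dim=1).mean()
        loss.backward()                  # stage two: back-propagate the error
        optimizer.step()                 # adjust the weight matrices
```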

The training process of the present invention begins with sample collection. The present invention collects 300,000 samples: 50,000 images of speed-limit signs, 50,000 images of other prohibition signs, 50,000 images of end-of-prohibition signs, 50,000 images of mandatory signs, 50,000 images of other signs and 50,000 images of danger signs. These 300,000 images are then classified by the convolutional neural network to obtain the label results, comprising the classes speed-limit signs, other prohibition signs, end-of-prohibition signs, mandatory signs, danger signs and other signs, for a total of 43 labels.

The testing process is used to verify that the accuracy and speed of the neural network are adequate for traffic sign classification. It comprises: reading the video images, performing moving-object detection, partitioning the images into blocks, classifying them with the classifier and obtaining the detection results.
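The test-time pipeline can be sketched by stringing the earlier hypothetical helpers together (detect_moving_roi, blocks_from_roi, TrafficSignCNN); reading the video with OpenCV and classifying every block of every region of interest is a simplification of whatever scheduling a real-time system would need.

```python
import cv2
import torch

def classify_video(path, model):
    """Read a video, detect motion, block the ROI, resize and classify."""
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    labels = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        box = detect_moving_roi(prev_gray, gray)
        if box is not None:
            x0, y0, x1, y1 = box
            for patch in blocks_from_roi(gray[y0:y1, x0:x1]):
                inp = torch.from_numpy(patch).float().div(255).view(1, 1, 32, 32)
                labels.append(int(model(inp).argmax(dim=1)))
        prev_gray = gray
    cap.release()
    return labels
```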

The above is a detailed description of preferred embodiments of the present invention, but the invention is not limited to these embodiments. Those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the invention, and such equivalent modifications or substitutions fall within the scope defined by the claims of the present application.

Claims (7)

Translated from Chinese
1. A traffic sign classification method based on a deep neural network, characterised by comprising the following steps:
A. a moving-object detection method based on optical flow detects the incoming video and, when a moving object is detected, extracts a region of interest;
B. the extracted region of interest is partitioned with blocks of a fixed size;
C. the partitioned images are rescaled and converted into images of the same size;
D. the converted images are used as input and classified with a convolutional neural network.

2. The traffic sign classification method based on a deep neural network according to claim 1, characterised in that step B specifically comprises:
B1. partitioning the extracted region of interest with a block of fixed size to obtain block images;
B2. shifting the fixed-size block by one pixel and partitioning the extracted region of interest again to obtain further block images;
B3. repeating step B2 to obtain multiple block images.

3. The traffic sign classification method based on a deep neural network according to claim 1 or 2, characterised in that the fixed-size block in step B has size N×N, where N is between 50 and 70.

4. The traffic sign classification method based on a deep neural network according to claim 1, characterised in that the size of the converted images in step C is 32×32.

5. The traffic sign classification method based on a deep neural network according to claim 1, characterised in that the convolutional neural network in step D comprises 7 layers, in order: a first convolutional layer, a first downsampling layer, a second convolutional layer, a second downsampling layer, a third convolutional layer, a feature vector layer and an output layer.

6. The traffic sign classification method based on a deep neural network according to claim 5, characterised in that the first convolutional layer comprises 6 feature maps of size 28×28, the first downsampling layer comprises 6 feature maps of size 14×14, the second convolutional layer comprises 16 feature maps of size 10×10, the second downsampling layer comprises 16 feature maps of size 5×5, and the third convolutional layer comprises 300 neurons.

7. The traffic sign classification method based on a deep neural network according to claim 6, characterised in that the output layer comprises 43 labels, and the 300 neurons of the third convolutional layer are fully connected to every label of the output layer.
CN201410841539.XA · 2014-12-26 · 2014-12-26 · Traffic sign classification method based on deep neural network · Pending · CN104517103A (en)

Priority Applications (1)

Application number: CN201410841539.XA (CN104517103A) · Priority date: 2014-12-26 · Filing date: 2014-12-26 · Title: Traffic sign classification method based on deep neural network

Applications Claiming Priority (1)

Application number: CN201410841539.XA (CN104517103A) · Priority date: 2014-12-26 · Filing date: 2014-12-26 · Title: Traffic sign classification method based on deep neural network

Publications (1)

Publication number: CN104517103A · Publication date: 2015-04-15

Family

ID=52792377

Family Applications (1)

Application number: CN201410841539.XA (CN104517103A, pending) · Priority date: 2014-12-26 · Filing date: 2014-12-26

Country Status (1)

Country: CN · Document: CN104517103A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
EP2026313A1 (en)* · 2007-08-17 · 2009-02-18 · MAGNETI MARELLI SISTEMI ELETTRONICI S.p.A. · A method and a system for the recognition of traffic signs with supplementary panels
CN102024152A (en)* · 2010-12-14 · 2011-04-20 · Zhejiang University · Method for recognizing traffic signs based on sparse expression and dictionary study
CN102881160A (en)* · 2012-07-18 · 2013-01-16 · Guangdong University of Technology · Outdoor traffic sign identification method under low-illumination scene
CN103544484A (en)* · 2013-10-30 · 2014-01-29 · Guangdong University of Technology · Traffic sign identification method and system based on SURF
CN104244113A (en)* · 2014-10-08 · 2014-12-24 · Institute of Automation, Chinese Academy of Sciences · Method for generating video abstract on basis of deep learning technology

Non-Patent Citations (1)

Yang Fei: "Design of a Traffic Sign Recognition Method" (交通标志识别方法设计), Microcomputer Information (微计算机信息) *

Legal Events

Code · Title
C06 · Publication
PB01 · Publication
C10 · Entry into substantive examination
SE01 · Entry into force of request for substantive examination
RJ01 · Rejection of invention patent application after publication (application publication date: 2015-04-15)
