CN110837838A - End-to-end frame number identification system and method based on deep learning - Google Patents

End-to-end frame number identification system and method based on deep learning

Info

Publication number
CN110837838A
Authority
CN
China
Prior art keywords
character
frame number
recognition
image
end frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911075932.1A
Other languages
Chinese (zh)
Other versions
CN110837838B (en)
Inventor
张发恩
范峻铭
黄家水
唐永亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Innovation Qizhi (chongqing) Technology Co Ltd
Original Assignee
Innovation Qizhi (chongqing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innovation Qizhi (chongqing) Technology Co Ltd
Priority to CN201911075932.1A
Publication of CN110837838A
Application granted
Publication of CN110837838B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses an end-to-end frame number identification system based on deep learning, which comprises: an image input module for inputting an image containing the whole frame number character string; an image feature extraction module, connected with the image input module, for extracting the image features corresponding to the image to obtain a feature map corresponding to the image and converting the feature map into a corresponding feature vector; and a character recognition module, connected with the image feature extraction module, for performing character type recognition on each frame number character in the frame number character string in the image according to the feature vector, finally yielding a character recognition result for the frame number character string.

Description

End-to-end frame number identification system and method based on deep learning
Technical Field
The invention relates to an automatic frame number identification system, in particular to an end-to-end frame number identification system and an end-to-end frame number identification method based on deep learning.
Background
The frame number is the unique identification code of a vehicle and is typically formed by a combination of 17 digits and letters. At present, when vehicle dealers take stock of their inventory, frame numbers are generally identified and recorded manually, but manual identification is extremely inefficient, so a system capable of automatically identifying and extracting the frame number is needed to solve this problem.
Disclosure of Invention
The invention aims to provide an end-to-end frame number identification system based on deep learning to solve the technical problem.
In order to achieve the purpose, the invention adopts the following technical scheme:
the end-to-end frame number identification system based on deep learning is provided and used for automatically identifying the frame number, and comprises the following components:
the image input module is used for inputting an image containing the whole frame number character string;
the image characteristic extraction module is connected with the image input module and used for extracting the image characteristics corresponding to the image to obtain a characteristic diagram corresponding to the image and converting the characteristic diagram into a corresponding characteristic vector;
and the character recognition module is connected with the image feature extraction module and used for carrying out corresponding character type recognition on each frame number character in the frame number character string in the image according to the feature vector and based on a preset character recognition model, and finally recognizing to obtain a character recognition result of the frame number character string.
As a preferable scheme of the present invention, the end-to-end vehicle frame number recognition system performs convolution recognition on the image through a convolution neural network to obtain the feature map corresponding to the image.
As a preferable aspect of the present invention, the end-to-end frame number recognition system converts the feature map corresponding to the image into the feature vector through the convolutional neural network.
As a preferable scheme of the present invention, the character recognition module includes a plurality of character recognition units, each of the character recognition units is respectively used for recognizing a character type corresponding to the frame number character at one of the designated character positions in the frame number character string,
each character recognition unit specifically comprises:
the character feature positioning subunit is used for positioning the component features corresponding to the frame number characters on the appointed character positions in the feature vectors and obtaining a positioning result;
the prediction vector generation subunit is connected with the character feature positioning subunit and used for converting the feature vector into a corresponding prediction vector according to the positioning result;
the prediction probability calculating subunit is connected with the prediction vector generating subunit and is used for calculating component values corresponding to all the components in the prediction vector based on the character recognition model;
and the character type identification subunit is connected with the prediction probability calculation subunit and is used for identifying the character type corresponding to the component corresponding to the maximum component value in the prediction vector based on the character identification model, taking the identified character type as the character type corresponding to the frame number character on the specified character position, and outputting the character type identification result of the frame number character on the specified character position.
As a preferable aspect of the present invention, the number of the character recognition units is 17, and each of the character recognition units is respectively configured to recognize the character type corresponding to the frame number character at one of the designated character positions in the frame number character string.
As a preferred scheme of the present invention, the end-to-end vehicle frame number recognition system further includes a character recognition model training module, connected to the character recognition module, for training and forming the character recognition model according to the character recognition result.
The invention also provides an end-to-end frame number identification method based on deep learning, which is realized by applying the end-to-end frame number identification system and comprises the following steps:
step S1, inputting an image containing a whole frame number character string by the end-to-end frame number identification system;
step S2, the end-to-end frame number recognition system extracts the image characteristics corresponding to the image and obtains a characteristic diagram corresponding to the image;
step S3, the end-to-end frame number recognition system converts the characteristic diagram into a corresponding characteristic vector;
and step S4, the end-to-end frame number recognition system simultaneously performs corresponding character type recognition on each character in the frame number character string in the image according to the feature vector and based on a preset character recognition model, and finally obtains a character recognition result of the frame number character string through recognition.
As a preferable scheme of the present invention, in step S4, the process of identifying the character type corresponding to each character in the frame number character string by the end-to-end frame number identification system specifically includes the following steps:
step S41, the end-to-end frame number recognition system locates the components corresponding to the frame number characters on each designated character position in the frame number character string in the feature vector based on the preset character recognition model, and obtains a plurality of locating results of the frame number characters related to each designated character position;
step S42, the end-to-end frame number identification system converts the same feature vector into a plurality of corresponding prediction vectors according to each positioning result;
step S43, the end-to-end frame number identification system calculates component values corresponding to the components in the prediction vectors based on the preset character identification model;
step S44, the end-to-end frame number recognition system recognizes, based on the preset character recognition model, a character type corresponding to the component corresponding to the maximum component value in each of the prediction vectors, uses the recognized character type as a character type corresponding to the frame number character on the corresponding designated character position, and finally obtains a character recognition result for the frame number character string by recognition.
The end-to-end frame number recognition system provided by the invention can automatically recognize the frame number characters of the input image containing the frame number character string, has quick and efficient recognition process and high recognition accuracy, and solves the technical problems of low recognition efficiency, easy omission and wrong detection of the traditional manual recognition mode.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic structural diagram of an end-to-end frame number identification system based on deep learning provided by the invention;
FIG. 2 is a schematic structural diagram of a character recognition module in the deep learning-based end-to-end frame number recognition system provided by the invention;
FIG. 3 is a schematic structural diagram of a character recognition unit in a character recognition module in the deep learning-based end-to-end frame number recognition system provided by the invention;
FIG. 4 is a diagram of the method steps for implementing end-to-end identification of frame numbers using the end-to-end frame number identification system provided by the present invention;
FIG. 5 is a diagram of the steps of a preferred recognition method for character type recognition of the frame number characters by the end-to-end frame number recognition system provided by the present invention;
FIG. 6 is a network architecture diagram of a convolutional neural network used by the end-to-end frame number recognition system provided by the present invention to extract the feature map of the image containing the frame number character string;
FIG. 7 is a network architecture diagram of a convolutional neural network employed by the end-to-end frame number recognition system provided by the present invention to recognize the character type corresponding to the character on the designated character position in the frame number character string;
fig. 8 is a diagram of the recognition result of the end-to-end frame number recognition system provided by the present invention for recognizing the frame number character string.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for the purpose of illustration only, are shown in schematic rather than actual form, and are not to be construed as limiting the present patent; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if the terms "upper", "lower", "left", "right", "inner", "outer", etc. are used for indicating the orientation or positional relationship based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not indicated or implied that the referred device or element must have a specific orientation, be constructed in a specific orientation and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes and are not to be construed as limitations of the present patent, and the specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" or the like, if appearing to indicate a connection relationship between the components, is to be understood broadly, for example, as being fixed or detachable or integral; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or may be connected through one or more other components or may be in an interactive relationship with one another. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Referring to fig. 1, an end-to-end frame number recognition system based on deep learning according to an embodiment of the present invention is used for automatically recognizing a frame number, and the frame number recognition system includes:
the image input module 1 is used for inputting an image containing a whole frame number character string;
the image feature extraction module 2 is connected with the image input module 1 and is used for extracting the image features corresponding to the image to obtain a feature map corresponding to the image and converting the feature map into a corresponding feature vector;
and the character recognition module 3 is connected with the image feature extraction module 2 and is used for performing character type recognition on each frame number character in the frame number character string in the image according to the feature vector and based on a preset character recognition model, finally yielding a character recognition result of the frame number character string.
In this technical scheme, the end-to-end frame number recognition system performs convolution on the image through a convolutional neural network to obtain the feature map corresponding to the input image. Referring to fig. 6, the convolutional neural network preferably uses an existing VGGNet or ResNet network architecture to extract the image features. The network architecture includes convolutional layers, ReLU layers, and batch normalization layers. The input to the convolutional neural network is an image containing the entire frame number character string, with size 3 × 448. The input image first passes through a convolutional layer with a 3 × 3 kernel, which outputs a 64 × 448 feature map; four further stages of convolutional feature extraction then output feature maps of sizes 64 × 224, 128 × 112, 256 × 64, and 512 × 32 in turn. Finally, the 512 × 32 feature map is compressed into a 1024-dimensional feature vector that encodes the position and shape features of the frame number character string in the input image; this feature vector is then fed into the subsequent character recognition network to perform character recognition of the frame number character string.
It should be noted that the convolutional neural network image feature extraction method adopted by the end-to-end frame number recognition system is an image feature extraction method existing in the prior art, and the image feature extraction method is not within the scope of the claimed invention, so the specific process of extracting the feature map of the input image by the end-to-end frame number recognition system is not described herein.
In the technical scheme, the end-to-end vehicle frame number identification system also converts the characteristic diagram corresponding to the image into the characteristic vector through the convolutional neural network. The method of converting the feature map into the feature vector using the convolutional neural network is a method existing in the prior art, and the method is not within the scope of the claimed invention, so the detailed conversion process thereof is not described herein.
Referring to fig. 2, the character recognition module 3 includes a plurality of character recognition units 31, each character recognition unit 31 being used for recognizing the character type corresponding to the frame number character at one designated character position in the frame number character string.
Referring to fig. 3, each character recognition unit 31 specifically includes:
the character feature positioning subunit 311, configured to locate, in the feature vector, the component features corresponding to the frame number character at the designated character position, and obtain a positioning result;
the prediction vector generation subunit 312, connected to the character feature positioning subunit 311, configured to convert the feature vector into a corresponding prediction vector according to the positioning result;
the prediction probability calculation subunit 313, connected to the prediction vector generation subunit 312, configured to calculate, based on the character recognition model, the prediction probabilities (component values) corresponding to the components in the prediction vector;
and the character type identification subunit 314, connected to the prediction probability calculation subunit 313, configured to identify, based on the character recognition model, the character type corresponding to the component with the largest component value in the prediction vector, take the identified character type as the character type corresponding to the frame number character at the designated character position, and output the character type recognition result for the frame number character at that position.
It should be emphasized here that each character recognition unit recognizes the frame number character at only one designated character position in the frame number character string. For example, the first character recognition unit recognizes the frame number character at the first character position in the frame number character string, the second character recognition unit recognizes the frame number character at the second character position, and so on.
Since the frame number is generally composed of 17 letters, numbers, or a combination of both, the number of character recognition units 31 is preferably 17, each character recognition unit 31 being used for recognizing the character type corresponding to the frame number character at one designated character position in the frame number character string.
The characters cover 36 character types: the 26 English letters and the 10 digits (0 to 9).
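As an illustration, the 36 character classes can be enumerated as a lookup table. This is a minimal sketch: the class ordering (letters first, then digits) and the digit range 0–9 are assumptions, since the text only fixes the total of 36 classes.

```python
import string

# Hypothetical class ordering: 26 uppercase letters followed by the 10 digits.
CHARSET = string.ascii_uppercase + string.digits
CLASS_INDEX = {ch: i for i, ch in enumerate(CHARSET)}  # character -> class index

assert len(CHARSET) == 36     # 26 letters + 10 digits
assert CLASS_INDEX["A"] == 0  # first letter is class 0 under this ordering
assert CLASS_INDEX["0"] == 26 # digits follow the letters
```

Each character recognition unit then outputs one score per entry of this table.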
In the above technical solution, the process of recognizing the character type of the frame number character by the end-to-end frame number recognition system is detailed as follows:
fig. 7 shows the network architecture of the convolutional neural network adopted by the end-to-end frame number recognition system provided by the present invention to recognize the character type corresponding to the frame number character at a designated character position in the frame number character string. Referring to fig. 7, the network architecture consists of a first fully connected layer, a second fully connected layer and a ReLU layer.
The 1024-dimensional feature vector output by the system undergoes feature extraction by the first fully connected layer 100, the second fully connected layer 200 and the ReLU layer 300, after which a 36-dimensional prediction vector is output. The 36 components of the 36-dimensional prediction vector represent the 26 English letters and the 10 digits, respectively; the component value of each component represents the prediction probability that the character is the corresponding English letter or digit.
Specifically, the 1024-dimensional feature vector output by the system is fed simultaneously into the 17 character recognition units. Each character recognition unit then extracts the component features of the frame number character at the designated character position it attends to, so as to produce the prediction vector (the 36-dimensional vector above) corresponding to that character. The component values of the prediction vector are calculated according to the preset character recognition model (that is, the prediction probability that the character is each corresponding English letter or digit), and the component with the largest component value is taken as the prediction result, whose character type is output. For example, if that character type is the character "A", the unit finally outputs that the frame number character at its designated character position is "A".
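The per-position decoding just described (compute component values, then take the character with the largest one) can be sketched as follows. The softmax normalization and the class ordering are assumptions for illustration; the text only states that component values represent prediction probabilities.

```python
import math

# Hypothetical class ordering; the description only counts 26 letters + 10 digits.
CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def softmax(logits):
    """Turn raw component scores into prediction probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode_position(prediction_vector):
    """Output the character whose component value is largest."""
    probs = softmax(prediction_vector)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CHARSET[best]

# A prediction vector peaked at index 0 decodes to the first class, 'A'.
logits = [0.0] * 36
logits[0] = 5.0
assert decode_position(logits) == "A"
```

One such decode runs independently in each of the 17 character recognition units.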
It should be noted that each character recognition unit may locate, according to a preset character recognition model, component features of a frame number character to be focused on and associated with a designated character position in a 1024-dimensional feature vector, and ignore other feature parts in the 1024-dimensional feature vector.
Preferably, the end-to-end frame number recognition system provided by the invention further comprises a character recognitionmodel training module 4 connected with thecharacter recognition module 3 and used for training and forming the character recognition model according to the character recognition result.
The loss function used to train the character recognition model is the cross-entropy loss, calculated by the following formula:

L = -\sum_{c=1}^{M} y_c \log(p_c)

where L denotes the loss function; M denotes the number of character classes a frame number character may belong to (the 26 English letters and the 10 digits); y_c is an indicator variable that is 1 if the character class c predicted by the system matches the true character class and 0 otherwise; and p_c is the prediction probability that the training sample belongs to character class c.
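A minimal executable sketch of this cross-entropy loss, assuming y is a one-hot indicator over the 36 character classes (the small epsilon guards against log(0) and is an implementation detail, not part of the formula):

```python
import math

def cross_entropy(y, p, eps=1e-12):
    """L = -sum_c y_c * log(p_c) over the M = 36 character classes.

    y: one-hot indicator list (y_c = 1 for the true class, else 0)
    p: predicted probability per class
    """
    return -sum(yc * math.log(pc + eps) for yc, pc in zip(y, p))

# A confident, correct prediction gives near-zero loss;
# predicting the true class with probability 0.5 gives loss log(2).
y = [0.0] * 36; y[0] = 1.0
p = [0.0] * 36; p[0] = 0.5
assert abs(cross_entropy(y, p) - math.log(2)) < 1e-6
```

The total training loss would sum this per-position loss over the 17 character recognition units.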
The character recognition model is optimized by an Adam optimization method in the prior art.
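For reference, a single Adam update for one scalar parameter can be sketched as follows. The hyperparameters (learning rate and moment decay values) are the commonly used defaults and are assumptions; the text does not specify them.

```python
import math

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter w with gradient g at step t >= 1."""
    m = b1 * m + (1 - b1) * g          # biased first-moment estimate
    v = b2 * v + (1 - b2) * g * g      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)          # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# The first step with gradient 1.0 moves w by roughly -lr.
w, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
assert abs(w + 1e-3) < 1e-6
```

In practice each weight of the character recognition model gets its own (m, v) state.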
The invention also provides an end-to-end frame number identification method based on deep learning, which is realized by applying the end-to-end frame number identification system, and please refer to fig. 4 and 8, and comprises the following steps:
step S1, inputting an image containing a whole frame number character string by the end-to-end frame number identification system;
step S2, the end-to-end frame number recognition system extracts the image characteristics corresponding to the image and obtains a characteristic diagram corresponding to the image;
step S3, the end-to-end frame number recognition system converts the characteristic diagram into a corresponding characteristic vector;
and step S4, the end-to-end frame number recognition system simultaneously carries out corresponding character type recognition on each character in the frame number character string in the image according to the characteristic vector and based on a preset character recognition model, and finally obtains a character recognition result of the frame number character string through recognition.
Referring to fig. 5, in step S4, the process of the end-to-end frame number recognition system recognizing the character type corresponding to each character in the frame number character string specifically includes the following steps:
step S41, the end-to-end frame number recognition system locates the components corresponding to the frame number characters on each appointed character position in the frame number character string in the feature vector based on the preset character recognition model, and obtains a plurality of locating results of the frame number characters related to each appointed character position;
step S42, the end-to-end frame number recognition system converts the same feature vector into a plurality of corresponding prediction vectors according to each positioning result;
step S43, the end-to-end frame number recognition system calculates component values corresponding to components in the prediction vectors based on a preset character recognition model;
and step S44, the end-to-end frame number recognition system recognizes the character type corresponding to the component corresponding to the maximum component value in each prediction vector based on a preset character recognition model, uses the recognized character type as the character type corresponding to the frame number character on the corresponding designated character position in the frame number character string, and finally recognizes to obtain the character recognition result of the frame number character string.
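Steps S41 through S44 can be sketched end to end with stand-in per-position heads. The heads below are hypothetical stubs (the real character recognition units are trained networks operating on a genuine 1024-dimensional feature vector), and the 17-character target string is made up purely for the demonstration.

```python
# Hypothetical class ordering: 26 letters then 10 digits.
CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def argmax(vec):
    return max(range(len(vec)), key=vec.__getitem__)

def make_head(target_index):
    """Stand-in for one character recognition unit: returns a 36-dim
    prediction vector peaked at `target_index`, regardless of the input."""
    def head(feature_vector):
        return [1.0 if i == target_index else 0.0 for i in range(36)]
    return head

def recognize(feature_vector, heads):
    # S41/S42: each head maps the shared feature vector to its own
    # prediction vector; S43/S44: take the largest component per position.
    return "".join(CHARSET[argmax(h(feature_vector))] for h in heads)

# 17 dummy heads wired to spell a made-up 17-character string.
target = "LSGJA52U8HZ0913FX"
heads = [make_head(CHARSET.index(c)) for c in target]
assert recognize([0.0] * 1024, heads) == target
```

Because every head is bound to one fixed position, the output string is assembled in order, matching the no-disorder property described below the method steps.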
In step S41 of the above technical solution, the positioning method by which the system locates, in the feature vector, the components corresponding to the frame number character at each designated character position is learned by the pre-trained character recognition model. This recognition-based positioning method exists in the prior art and is not within the scope of the present invention, so the specific process by which the system positions the components based on the convolutional neural network is not described here.
In step S42, the prediction vector is a 36-dimensional vector whose 36 components indicate the possible character categories of the frame number character at the designated character position (the 26 English letters and the 10 digits), and the component value of each of the 36 components is the prediction probability of the corresponding character category.
In step S42, the method for the system to locate the position of the character feature corresponding to the frame number character of the designated character position in the 1024-dimensional feature vector is the conventional locating method, and the locating method is not within the scope of the claimed invention, and therefore, will not be described herein.
In step S43, the method for calculating the component values corresponding to the components in the 36-dimensional feature vector by the system is also the method existing in the prior art, and the above convolutional neural network is preferably used for calculation, and the specific calculation process is not described herein.
It should be emphasized that each character recognition unit recognizes the character type only for its designated character position in the frame number character string: the first character recognition unit recognizes only the character type of the frame number character at the first designated character position, the second unit only that at the second designated character position, and so on. As a result, the characters in the recognition result produced by the system are arranged in their original order and no disorder occurs.
In conclusion, the end-to-end frame number recognition system provided by the invention can automatically recognize the frame number characters of the input image containing the frame number character string, has a quick and efficient recognition process and high recognition accuracy, and solves the technical problems of low recognition efficiency, easy omission and wrong detection of the traditional manual recognition mode.
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and the technical principles applied thereto. It will be understood by those skilled in the art that various modifications, equivalents, changes, and the like can be made to the present invention. However, such variations are within the scope of the invention as long as they do not depart from the spirit of the invention. In addition, certain terms used in the specification and claims of the present application are not limiting, but are used merely for convenience of description.

Claims (8)

1. An end-to-end frame number recognition system based on deep learning is used for carrying out automatic recognition on frame numbers and is characterized by comprising the following components:
the image input module is used for inputting an image containing the whole frame number character string;
the image characteristic extraction module is connected with the image input module and used for extracting the image characteristics corresponding to the image to obtain a characteristic diagram corresponding to the image and converting the characteristic diagram into a corresponding characteristic vector;
and the character recognition module is connected with the image feature extraction module and used for carrying out corresponding character type recognition on each frame number character in the frame number character string in the image according to the feature vector and based on a preset character recognition model, and finally recognizing to obtain a character recognition result of the frame number character string.
2. The end-to-end frame number identification system of claim 1, wherein the end-to-end frame number identification system performs convolution on the image through a convolutional neural network to obtain the feature map corresponding to the image.
3. The end-to-end frame number identification system of claim 2, wherein the end-to-end frame number identification system converts the feature map corresponding to the image into the feature vector through the convolutional neural network.
4. The end-to-end frame number recognition system of claim 1, wherein the character recognition module comprises a plurality of character recognition units, each character recognition unit being used for recognizing the character type corresponding to the frame number character at a designated character position in the frame number character string,
each character recognition unit specifically comprising:
the character feature positioning subunit is used for positioning the component features corresponding to the frame number characters on the appointed character positions in the feature vectors and obtaining a positioning result;
the prediction vector generation subunit is connected with the character feature positioning subunit and used for converting the feature vector into a corresponding prediction vector according to the positioning result;
the prediction probability calculating subunit is connected with the prediction vector generating subunit and is used for calculating component values corresponding to all the components in the prediction vector based on the character recognition model;
and the character type identification subunit is connected with the prediction probability calculation subunit and is used for identifying the character type corresponding to the component corresponding to the maximum component value in the prediction vector based on the character identification model, taking the identified character type as the character type corresponding to the frame number character on the specified character position, and outputting the character type identification result of the frame number character on the specified character position.
5. The end-to-end frame number identification system of claim 4, wherein the number of the character recognition units is 17, each character recognition unit being used for recognizing the character type corresponding to the frame number character at one designated character position in the 17-bit frame number character string.
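Claims 4 and 5 together specify 17 parallel recognition units, each converting the shared feature vector into a per-position prediction vector and selecting the character class of the largest component value. A hedged sketch of one such unit follows, assuming a toy linear mapping as the "unit" and a 33-symbol alphabet (vehicle identification numbers conventionally use digits and letters excluding I, O, and Q); the weight layout is illustrative, not the patented model:

```python
# Sketch of claims 4-5 (assumptions: toy linear unit, conventional VIN
# alphabet). The unit maps the shared feature vector to a prediction
# vector over the character classes; the largest component selects the
# character type for that designated position.
import math

VIN_ALPHABET = "0123456789ABCDEFGHJKLMNPRSTUVWXYZ"  # I, O, Q excluded by convention

def softmax(vector):
    """Turn raw prediction components into normalized component values."""
    m = max(vector)
    exps = [math.exp(v - m) for v in vector]
    total = sum(exps)
    return [e / total for e in exps]

def recognize_position(feature_vector, weights):
    """One character recognition unit: feature vector -> prediction vector -> class."""
    prediction = [sum(w * f for w, f in zip(row, feature_vector)) for row in weights]
    values = softmax(prediction)                          # component values
    best = max(range(len(values)), key=values.__getitem__)  # largest component
    return VIN_ALPHABET[best]
```

Running 17 such units against the same feature vector, one per designated character position, yields the full 17-character result described in claim 5.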
6. The end-to-end frame number recognition system of claim 1, further comprising a character recognition model training module coupled to the character recognition module for training the character recognition model based on the character recognition results.
7. An end-to-end frame number identification method based on deep learning, implemented by applying the end-to-end frame number identification system as claimed in any one of claims 1 to 6, characterized by comprising the following steps:
step S1, inputting an image containing a whole frame number character string by the end-to-end frame number identification system;
step S2, the end-to-end frame number recognition system extracts the image features corresponding to the image and obtains a feature map corresponding to the image;
step S3, the end-to-end frame number recognition system converts the feature map into a corresponding feature vector;
and step S4, the end-to-end frame number recognition system simultaneously performs character type recognition on each character in the frame number character string in the image according to the feature vector and based on a preset character recognition model, finally obtaining by recognition a character recognition result of the frame number character string.
8. The end-to-end frame number identification method according to claim 7, wherein in step S4, the process of the end-to-end frame number identification system identifying the character type corresponding to each of the characters in the frame number character string specifically includes the following steps:
step S41, the end-to-end frame number recognition system locates, in the feature vector and based on the preset character recognition model, the components corresponding to the frame number character at each designated character position in the frame number character string, and obtains a plurality of positioning results, one for the frame number character at each designated character position;
step S42, the end-to-end frame number identification system converts the same feature vector into a plurality of corresponding prediction vectors according to each positioning result;
step S43, the end-to-end frame number identification system calculates component values corresponding to the components in the prediction vectors based on the preset character identification model;
step S44, the end-to-end frame number recognition system recognizes, based on the preset character recognition model, a character type corresponding to the component corresponding to the maximum component value in each of the prediction vectors, uses the recognized character type as a character type corresponding to the frame number character on the corresponding designated character position, and finally obtains a character recognition result for the frame number character string by recognition.
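Steps S41 to S44 can be read as one decoding loop over the 17 designated character positions; the sketch below labels each step in comments. Here `predict_fns` is a hypothetical list of per-position prediction functions standing in for the trained character recognition model, and the alphabet is passed in explicitly:

```python
# Sketch of steps S41-S44 (assumption: `predict_fns` is a hypothetical
# list of 17 per-position functions standing in for the trained model).
def decode_frame_number(feature_vector, predict_fns, alphabet):
    chars = []
    for predict in predict_fns:               # S41: one unit per designated position
        prediction_vector = predict(feature_vector)  # S42: same feature vector -> prediction vector
        values = prediction_vector                   # S43: component values of the prediction vector
        best = max(range(len(values)), key=values.__getitem__)  # S44: largest component
        chars.append(alphabet[best])                 # S44: component index -> character type
    return "".join(chars)
```

Because every unit reads the same feature vector, the 17 predictions are independent and could run in parallel, which matches the "simultaneously performs" wording of step S4.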
CN201911075932.1A | 2019-11-06 | 2019-11-06 | End-to-end vehicle frame number identification system and identification method based on deep learning | Active | Granted as CN110837838B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911075932.1A | 2019-11-06 | 2019-11-06 | End-to-end vehicle frame number identification system and identification method based on deep learning

Publications (2)

Publication Number | Publication Date
CN110837838A (en) | 2020-02-25
CN110837838B (en) | 2023-07-11

Family

ID=69576170

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911075932.1A (Active, granted as CN110837838B) | 2019-11-06 | 2019-11-06 | End-to-end vehicle frame number identification system and identification method based on deep learning

Country Status (1)

Country | Link
CN (1) | CN110837838B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112215221A (en)* | 2020-09-22 | 2021-01-12 | 国交空间信息技术(北京)有限公司 | A method for automatic identification of frame numbers
CN114170431A (en)* | 2021-11-05 | 2022-03-11 | 多伦科技股份有限公司 | Complex-scene vehicle frame number identification method and device based on edge features

Citations (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20090125510A1 (en)* | 2006-07-31 | 2009-05-14 | Jamey Graham | Dynamic presentation of targeted information in a mixed media reality recognition system
CN102509112A (en)* | 2011-11-02 | 2012-06-20 | 珠海逸迩科技有限公司 | Number plate identification method and identification system thereof
US20140270385A1 (en)* | 2013-03-15 | 2014-09-18 | Mitek Systems, Inc | Methods for mobile image capture of vehicle identification numbers in a non-document
CN105760891A (en)* | 2016-03-02 | 2016-07-13 | 上海源庐加佳信息科技有限公司 | Chinese character verification code recognition method
CN105894045A (en)* | 2016-05-06 | 2016-08-24 | 电子科技大学 | Vehicle type recognition method with deep network model based on spatial pyramid pooling
CN107423732A (en)* | 2017-07-26 | 2017-12-01 | 大连交通大学 | Vehicle VIN recognition method based on the Android platform
US10163022B1 (en)* | 2017-06-22 | 2018-12-25 | StradVision, Inc. | Method for learning text recognition, method for recognizing text using the same, and apparatus for learning text recognition, apparatus for recognizing text using the same
US20190057275A1 (en)* | 2017-08-21 | 2019-02-21 | Sap SE | Automatic identification of cloned vehicle identifiers
CN109460765A (en)* | 2018-09-25 | 2019-03-12 | 平安科技(深圳)有限公司 | Recognition method and device for driving-license photo images in natural scenes, and electronic equipment
US20190095730A1 (en)* | 2017-09-25 | 2019-03-28 | Beijing University Of Posts And Telecommunications | End-to-end lightweight method and apparatus for license plate recognition
CN109726715A (en)* | 2018-12-27 | 2019-05-07 | 信雅达系统工程股份有限公司 | A character image serialization recognition and structured data output method
CN109829453A (en)* | 2018-12-29 | 2019-05-31 | 天津车之家数据信息技术有限公司 | Method, device and computing device for recognizing text in occluded cards
CN109840524A (en)* | 2019-01-04 | 2019-06-04 | 平安科技(深圳)有限公司 | Type recognition method, device, equipment and storage medium for text
WO2019177734A1 (en)* | 2018-03-13 | 2019-09-19 | Recogni Inc. | Systems and methods for inter-camera recognition of individuals and their properties
CN110378331A (en)* | 2019-06-10 | 2019-10-25 | 南京邮电大学 | An end-to-end license plate recognition system and method based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
兰小丽 (Lan Xiaoli): "Research on key technologies for detection and recognition of defaced license plates based on deep learning", no. 7, pages 034-260 *
杨亭亭; 曾洁; 刘宾坤; 曾奕哲; 张育华 (Yang Tingting; Zeng Jie; Liu Binkun; Zeng Yizhe; Zhang Yuhua): "Design of a vehicle VIN recognition system on the Android platform", no. 09, pages 65-69 *
王明平 et al. (Wang Mingping et al.): "A computer-vision-based frame number acquisition system", no. 4, pages 239-241 *
陈桂安 (Chen Gui'an): "Research and implementation of an end-to-end neural network for text detection and recognition in natural scenes", no. 8, pages 138-1090 *


Also Published As

Publication number | Publication date
CN110837838B (en) | 2023-07-11

Similar Documents

Publication | Title
CN112215223B (en) | Multidirectional scene character recognition method and system based on multi-element attention mechanism
CN107423701B (en) | Face unsupervised feature learning method and device based on generative adversarial network
US10803359B2 | Image recognition method, apparatus, server, and storage medium
US12223668B2 | Contour shape recognition method
CN112100337B (en) | Emotion recognition method and device in interactive dialogue
CN113254654B (en) | Model training, text recognition method, apparatus, equipment and medium
WO2022237027A1 (en) | License plate classification method, license plate classification apparatus, and computer-readable storage medium
CN111046971A (en) | Image recognition method, device, equipment and computer-readable storage medium
CN110390340A (en) | Training method and detection method of a feature coding model and a visual relationship detection model
CN114067300A (en) | An end-to-end license plate correction and recognition method
CN114612911B (en) | Stroke-level handwritten character sequence recognition method, device, terminal and storage medium
CN111680669A (en) | Test question segmentation method and system and readable storage medium
CN110837838A (en) | End-to-end frame number identification system and method based on deep learning
CN114092930A (en) | Character recognition method and system
CN114913487A (en) | A target recognition and detection method based on multimodal learning and related components
CN115731453B (en) | Chinese character click-type CAPTCHA recognition method and system
CN110472655B (en) | Marker machine learning identification system and method for cross-border travel
CN119992209A (en) | A multimodal ship classification method based on cross-modal multi-stage fusion
CN112380861B (en) | Model training method and device and intention recognition method and device
KR20200068073A (en) | Improvement of character recognition for parts book using pre-processing of deep learning
CN111079749B (en) | End-to-end commodity price tag character recognition method and system with gesture correction
CN112508036A (en) | Handwritten digit recognition method based on convolutional neural network and codes
CN117953524A (en) | An OCR error detection method based on multimodal information fusion
CN110555462A (en) | Non-fixed multi-character verification code identification method based on convolutional neural network
CN116311269A (en) | Formula picture identification and question judging system

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
