CN109901595A - An automatic driving system and method based on monocular camera and Raspberry Pi - Google Patents

An automatic driving system and method based on monocular camera and Raspberry Pi

Info

Publication number
CN109901595A
CN109901595A
Authority
CN
China
Prior art keywords
model
model car
road conditions
convolutional neural network
Raspberry Pi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910303324.5A
Other languages
Chinese (zh)
Inventor
戴鸿君
张继刚
鞠雷
许信顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN201910303324.5A
Publication of CN109901595A
Legal status: Pending

Abstract

The present invention relates to an automatic driving system and method based on a monocular camera and a Raspberry Pi, comprising a sequentially connected data collection unit, data pre-processing unit, deep convolutional neural network, and control unit. The data collection unit collects the data set; the data pre-processing unit pre-processes the collected data set; the deep convolutional neural network is trained on the pre-processed data set to obtain a mature model; and the control unit uses the trained model to make the model car drive automatically on a model lane. The present invention abandons complex deep network models and expensive hardware such as radar, and realizes automatic driving under limited computing power and poor hardware conditions: at low cost, i.e., using only a monocular camera and a Raspberry Pi, driverless operation is achieved through simple visual recognition with an end-to-end neural network.

Description

An automatic driving system and method based on a monocular camera and a Raspberry Pi
Technical field
The invention belongs to the technical field of image processing, and in particular relates to an automatic driving system and method based on a deep convolutional neural network that runs on a Raspberry Pi with a monocular camera.
Background technique
At present, automatic driving technology is developing rapidly. It is typically based on multi-camera rigs or lidar and needs to run on highly configured servers. However, the computing power of a conventional Raspberry Pi is limited and cannot meet the requirements of traditional driverless technology. Under constrained hardware conditions, i.e., when only a single monocular camera is available, the camera cannot provide detailed traffic information while the model car is moving, so realizing automatic driving would require a more complex neural network to process the images it captures. But, as stated above, the computing power of the Raspberry Pi is limited, so a more complex network cannot run on it, and no existing neural network model strikes a suitable balance between the two. Existing systems generally use binocular or trinocular cameras, assisted by other auxiliary equipment such as radar, together with a simple multi-layer convolutional neural network, to achieve automatic driving within the Raspberry Pi's computing capability; but this increases hardware cost. Consequently, automatic driving cannot at present be achieved on the hardware basis of a monocular camera and a Raspberry Pi alone.
Therefore, a scheme that can realize automatic driving in such a simple environment is urgently needed.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides an automatic driving system based on a monocular camera and a Raspberry Pi; the present invention also provides an automatic driving method based on a monocular camera and a Raspberry Pi.
Term is explained:
OpenCV is a cross-platform computer vision library released under the BSD license (open source) that runs on the Linux, Windows, Android, and Mac OS operating systems. It is lightweight and efficient: it consists of a set of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms in image processing and computer vision.
The technical solution of the present invention is as follows:
An automatic driving system based on a monocular camera and a Raspberry Pi, comprising a sequentially connected data collection unit, data pre-processing unit, deep convolutional neural network, and control unit.
The data collection unit obtains traffic information while the model car (a 4WD smart model car fitted with a monocular camera and a Raspberry Pi) is running; traffic information refers to road-condition pictures, and the acquired traffic information is sent to the data pre-processing unit. The data pre-processing unit pre-processes the received road-condition pictures, which means performing, in order, grayscale conversion, noise reduction, binarization, character cutting, and normalization. During training, the deep convolutional neural network is trained on the data set composed of the pre-processed road-condition pictures to obtain a mature deep convolutional neural network; while the model car is running, it takes the pre-processed road-condition pictures as input and outputs the model car's control information, which includes the steering direction (left or right), the steering angle (how many degrees left or right), and the throttle amount. The control unit passes the control information to the model car, completing the model car's automatic driving.
According to a preferred embodiment of the present invention, the data collection unit is the 4WD smart model car fitted with a monocular camera and a Raspberry Pi; the monocular camera shoots to obtain road-condition pictures.
According to a preferred embodiment of the present invention, the acquired traffic information is transmitted to the data pre-processing unit by OpenCV in the form of a byte stream.
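The patent does not specify how the byte stream is framed; a minimal sketch under the assumption of a simple length-prefixed format for each encoded road-condition picture (the `frame_packet`/`parse_packet` names are illustrative, not from the patent):

```python
import struct

def frame_packet(jpeg_bytes: bytes) -> bytes:
    # Prefix each encoded picture with its length (big-endian uint32)
    # so the receiver can split the continuous byte stream into frames.
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def parse_packet(stream: bytes) -> bytes:
    # Recover one picture from the front of the stream.
    (n,) = struct.unpack(">I", stream[:4])
    return stream[4:4 + n]
```

In practice the encoded bytes could come from OpenCV's `cv2.imencode(".jpg", frame)` on the Raspberry Pi side, with the pre-processing unit decoding them via `cv2.imdecode`.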
According to a preferred embodiment of the present invention, the Raspberry Pi model car provides a hardware-level control interface; an interface written in Python passes the model car's control information to this hardware interface, completing the transfer of the control information.
An automatic driving method based on a monocular camera and a Raspberry Pi, for realizing automatic driving of a model car on a model lane, comprising the following steps:
(1) Collect a data set; the data set contains a large number of road-condition pictures;
(2) Pre-process the data set;
(3) Train the deep convolutional neural network with the data set pre-processed in step (2) to obtain a mature deep convolutional neural network; the mature network takes captured road-condition pictures as input and outputs the model car's control information, which includes the steering direction (left or right), the steering angle (how many degrees left or right), and the throttle amount;
(4) Use the mature deep convolutional neural network obtained in step (3) to make the model car drive automatically on the model lane.
According to a preferred embodiment of the present invention, step (1), collecting the data set, means: the 4WD smart model car fitted with a monocular camera and a Raspberry Pi shoots the road conditions in real time, obtaining a large number of road-condition pictures.
According to a preferred embodiment of the present invention, step (2), data pre-processing, comprises:
A. While each road-condition picture is being acquired, record in real time the model car's current speed, throttle amount, steering direction, and steering angle (the angle by which the model car currently deviates from the forward direction); the name of each road-condition picture, its storage location, the throttle amount, speed, steering angle, and steering direction form one record, and all records are placed in the same .csv file;
B. Perform, in order, grayscale conversion, noise reduction, binarization, character cutting, and normalization on the road-condition pictures processed in step A.
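The patent names the pre-processing steps but not their implementations; below is a minimal NumPy sketch under assumed choices (luminance grayscale, 3x3 box blur for noise reduction, fixed-threshold binarization, with normalization falling out of the binarization; the character-cutting step is omitted because its parameters are not given):

```python
import numpy as np

def preprocess(rgb: np.ndarray, thresh: float = 128.0) -> np.ndarray:
    """HxWx3 uint8 road-condition picture -> HxW float32 map in {0.0, 1.0}."""
    # Grayscale conversion (BT.601 luminance weights, an assumed choice).
    gray = rgb.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    # Noise reduction: 3x3 box blur via shifted sums over an edge-padded copy.
    h, w = gray.shape
    p = np.pad(gray, 1, mode="edge")
    blurred = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # Binarization with a fixed threshold; the result is already normalized
    # to [0, 1] and can be fed to the network as-is.
    return (blurred >= thresh).astype(np.float32)
```

A real pipeline would likely use OpenCV equivalents (`cv2.cvtColor`, `cv2.GaussianBlur`, `cv2.threshold`); the pure-NumPy version above just makes the step order concrete.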
According to a preferred embodiment of the present invention, the structure of the deep convolutional neural network is as follows: in order, one image-normalization layer, four 5x5 convolutional layers, two 3x3 convolutional layers, and four fully connected layers;
Image normalization
Convolution: 5x5, filters: 12, strides: 2x2, activation: ELU
Convolution: 5x5, filters: 24, strides: 2x2, activation: ELU
Convolution: 5x5, filters: 36, strides: 2x2, activation: ELU
Convolution: 5x5, filters: 48, strides: 2x2, activation: ELU
Convolution: 3x3, filters: 64, strides: 1x1, activation: ELU
Convolution: 3x3, filters: 64, strides: 1x1, activation: ELU
Dropout (0.5)
Fully connected: neurons: 100, activation: ELU
Fully connected: neurons: 50, activation: ELU
Fully connected: neurons: 10, activation: ELU
Fully connected: neurons: 1 (output)
The deep convolutional neural network is an end-to-end convolutional neural network; the entire network is treated as a black box. After a road-condition picture is input, it first passes through the first-layer image-normalization layer (the Image normalization layer) for image pre-processing. Then, after the four 5x5 convolutional layers and the two 3x3 convolutional layers, the picture becomes a high-dimensional feature map: the six convolutional layers complete the feature extraction from the road-condition picture, i.e., they extract local features from the image, such as the bending direction and curvature of the lane line. Finally, the four fully connected layers output the final deflection ratio, which is the predicted angle by which the model car should next deviate, divided by 90 degrees: a positive value represents a left turn, a negative value a right turn, and the magnitude multiplied by 90 degrees gives the angle of deviation from the current driving direction. The fully connected layers are equivalent to summarizing the features extracted by the convolutions and predicting the next steering angle.
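To see how the six convolutional layers shrink the picture into a high-dimensional feature map, the spatial sizes can be traced with the standard valid-convolution formula floor((n - k) / s) + 1. The 160x320 input size below is an assumption for illustration; the patent does not state the camera resolution:

```python
def conv_out(n: int, k: int, s: int) -> int:
    # Output length of a valid convolution: floor((n - k) / s) + 1.
    return (n - k) // s + 1

# Four 5x5 stride-2 layers followed by two 3x3 stride-1 layers,
# matching the listing above.
LAYERS = [(5, 2)] * 4 + [(3, 1)] * 2

def trace(h: int, w: int) -> list:
    sizes = [(h, w)]
    for k, s in LAYERS:
        h, w = conv_out(h, k, s), conv_out(w, k, s)
        sizes.append((h, w))
    return sizes

print(trace(160, 320))
# [(160, 320), (78, 158), (37, 77), (17, 37), (7, 17), (5, 15), (3, 13)]
```

With this assumed input, the last convolution leaves a 3x13 map with 64 channels, i.e. 2496 features entering the fully connected layers, which is small enough to infer on a Raspberry Pi.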
According to a preferred embodiment of the present invention, step (4), using the mature deep convolutional neural network obtained in step (3) to make the model car drive automatically on the model lane, means:
C. During the model car's operation, the road-condition pictures shot by the monocular camera are passed to the mature deep convolutional neural network;
D. The mature deep convolutional neural network completes the inference process and outputs the deflection ratio, which is the predicted angle by which the model car should next deviate, divided by 90 degrees; a positive value represents a left turn, a negative value a right turn, and the magnitude multiplied by 90 degrees gives the angle of deviation from the current driving direction;
E. After the deflection ratio is obtained, the throttle amount is obtained by calculation. When cornering, the model car reduces speed by braking and releasing the throttle, then adjusts the throttle again after the turn; steering angle and throttle are therefore closely linked: when the steering angle is large, the throttle must be released to reduce speed through the bend, and vice versa. So once the deviation angle has been predicted, the throttle amount (expressed as a percentage) can be computed with the formula a = 1.0 - t^2 - (v1 ÷ v2)^2, where a is the throttle amount, t is the deflection ratio, v1 is the current speed, and v2 is the limiting speed: when the model car's current speed v1 exceeds the maximum allowed speed (set manually), the limiting speed v2 is the minimum allowed speed (also set manually); otherwise, v2 is the maximum allowed speed;
F. The obtained throttle amount and deflection ratio are passed to the Raspberry Pi model car to control its steering and acceleration/deceleration: the obtained throttle amount and deflection angle are transmitted through Python code to the hardware interface of the Raspberry Pi model car, which then adjusts the model car's speed and direction, realizing the model car's automatic driving.
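Steps D and E can be sketched directly; the function names and the speed parameters are illustrative (the patent gives only the formula a = 1.0 - t^2 - (v1 ÷ v2)^2 and the rule for choosing the limiting speed):

```python
def steering_angle_deg(deflection_ratio: float) -> float:
    # Step D: the network outputs the next deviation angle divided by 90;
    # positive means turn left, negative means turn right.
    return deflection_ratio * 90.0

def throttle(t: float, v1: float, v_min: float, v_max: float) -> float:
    # Step E: a = 1.0 - t^2 - (v1 / v2)^2, where the limiting speed v2 is
    # the minimum allowed speed when the car is above the maximum allowed
    # speed, and the maximum allowed speed otherwise (both set manually).
    v2 = v_min if v1 > v_max else v_max
    return 1.0 - t ** 2 - (v1 / v2) ** 2
```

At standstill on a straight road (t = 0, v1 = 0) the formula gives full throttle, while a sharp turn or a high current speed both pull the throttle toward zero, matching the "release the throttle through bends" behaviour described above.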
The invention has the benefit that
Compared with current traditional automatic driving algorithms, the present invention abandons complex deep network models and expensive hardware such as radar, and realizes automatic driving under limited computing power and poor hardware conditions: at low cost, i.e., using only a monocular camera and a Raspberry Pi, driverless operation is achieved through simple visual recognition with an end-to-end neural network.
Detailed description of the invention
Fig. 1 is a structural block diagram of the automatic driving system based on a monocular camera and a Raspberry Pi;
Fig. 2 is a flow diagram of the automatic driving method based on a monocular camera and a Raspberry Pi of the present invention.
Specific embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments, but is not limited thereto.
Embodiment 1
An automatic driving system based on a monocular camera and a Raspberry Pi, as shown in Fig. 1, comprising a sequentially connected data collection unit, data pre-processing unit, deep convolutional neural network, and control unit.
The data collection unit obtains traffic information while the model car (a 4WD smart model car fitted with a monocular camera and a Raspberry Pi) is running; traffic information refers to road-condition pictures, and the acquired traffic information is sent to the data pre-processing unit. The data pre-processing unit pre-processes the received road-condition pictures, which means performing, in order, grayscale conversion, noise reduction, binarization, character cutting, and normalization. During training, the deep convolutional neural network is trained on the data set composed of the pre-processed road-condition pictures to obtain a mature deep convolutional neural network; while the model car is running, it takes the pre-processed road-condition pictures as input and outputs the model car's control information, which includes the steering direction (left or right), the steering angle (how many degrees left or right), and the throttle amount. The control unit passes the control information to the model car, completing the model car's automatic driving.
The data collection unit is the 4WD smart model car fitted with a monocular camera and a Raspberry Pi; the monocular camera shoots to obtain road-condition pictures.
The acquired traffic information is transmitted to the data pre-processing unit by OpenCV in the form of a byte stream.
The Raspberry Pi model car provides a hardware-level control interface; an interface written in Python passes the model car's control information to the hardware interface, completing the transfer of the control information.
The structure of the deep convolutional neural network is as follows: in order, one image-normalization layer, four 5x5 convolutional layers, two 3x3 convolutional layers, and four fully connected layers;
Image normalization
Convolution: 5x5, filters: 12, strides: 2x2, activation: ELU
Convolution: 5x5, filters: 24, strides: 2x2, activation: ELU
Convolution: 5x5, filters: 36, strides: 2x2, activation: ELU
Convolution: 5x5, filters: 48, strides: 2x2, activation: ELU
Convolution: 3x3, filters: 64, strides: 1x1, activation: ELU
Convolution: 3x3, filters: 64, strides: 1x1, activation: ELU
Dropout (0.5)
Fully connected: neurons: 100, activation: ELU
Fully connected: neurons: 50, activation: ELU
Fully connected: neurons: 10, activation: ELU
Fully connected: neurons: 1 (output)
The deep convolutional neural network is an end-to-end convolutional neural network; the entire network is treated as a black box, which balances the Raspberry Pi's computing power against the accuracy of the model. After a road-condition picture is input, it first passes through the first-layer image-normalization layer (the Image normalization layer) for image pre-processing. Then, after the four 5x5 convolutional layers and the two 3x3 convolutional layers, the picture becomes a high-dimensional feature map: the six convolutional layers complete the feature extraction from the road-condition picture, i.e., they extract local features from the image, such as the bending direction and curvature of the lane line. Finally, the four fully connected layers output the final deflection ratio, which is the predicted angle by which the model car should next deviate, divided by 90 degrees: a positive value represents a left turn, a negative value a right turn, and the magnitude multiplied by 90 degrees gives the angle of deviation from the current driving direction. The fully connected layers are equivalent to summarizing the features extracted by the convolutions and predicting the next steering angle.
Embodiment 2
An automatic driving method based on a monocular camera and a Raspberry Pi, as shown in Fig. 2, for realizing automatic driving of a model car on a model lane, comprising the following steps:
(1) Collect a data set; the data set contains approximately 100,000 road-condition pictures;
(2) Pre-process the data set;
(3) Train the deep convolutional neural network with the data set pre-processed in step (2) to obtain a mature deep convolutional neural network; the mature network takes captured road-condition pictures as input and outputs the model car's control information, which includes the steering direction (left or right), the steering angle (how many degrees left or right), and the throttle amount;
(4) Use the mature deep convolutional neural network obtained in step (3) to make the model car drive automatically on the model lane.
Step (1), collecting the data set, means: the 4WD smart model car fitted with a monocular camera and a Raspberry Pi shoots the road conditions in real time, obtaining a large number of road-condition pictures.
Step (2), data pre-processing, comprises:
A. While each road-condition picture is being acquired, record in real time the model car's current speed, throttle amount, steering direction, and steering angle (the angle by which the model car currently deviates from the forward direction); the name of each road-condition picture, its storage location, the throttle amount, speed, steering angle, and steering direction form one record, and all records are placed in the same .csv file;
B. Perform, in order, grayscale conversion, noise reduction, binarization, character cutting, and normalization on the road-condition pictures processed in step A.
Step (4), using the mature deep convolutional neural network obtained in step (3) to make the model car drive automatically on the model lane, means:
C. During the model car's operation, the road-condition pictures shot by the monocular camera are passed to the mature deep convolutional neural network;
D. The mature deep convolutional neural network completes the inference process and outputs the deflection ratio, which is the predicted angle by which the model car should next deviate, divided by 90 degrees; a positive value represents a left turn, a negative value a right turn, and the magnitude multiplied by 90 degrees gives the angle of deviation from the current driving direction;
E. After the deflection ratio is obtained, the throttle amount is obtained by calculation. When cornering, the model car reduces speed by braking and releasing the throttle, then adjusts the throttle again after the turn; steering angle and throttle are therefore closely linked: when the steering angle is large, the throttle must be released to reduce speed through the bend, and vice versa. So once the deviation angle has been predicted, the throttle amount (expressed as a percentage) can be computed with the formula a = 1.0 - t^2 - (v1 ÷ v2)^2, where a is the throttle amount, t is the deflection ratio, v1 is the current speed, and v2 is the limiting speed: when the model car's current speed v1 exceeds the maximum allowed speed (set manually), the limiting speed v2 is the minimum allowed speed (also set manually); otherwise, v2 is the maximum allowed speed;
F. The obtained throttle amount and deflection ratio are passed to the Raspberry Pi model car to control its steering and acceleration/deceleration: the obtained throttle amount and deflection angle are transmitted through Python code to the hardware interface of the Raspberry Pi model car, which then adjusts the model car's speed and direction, realizing the model car's automatic driving.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
The main goal of the present invention is to complete automatic driving with a monocular camera and the limited computing power of a Raspberry Pi. This experiment therefore uses a simple end-to-end multi-layer convolutional neural network to process the traffic information shot by the monocular camera, obtaining the model car's steering-angle information; on this basis, the predicted throttle amount and predicted speed of the model car are obtained by the relevant calculation, and finally the model car's control information is transmitted to the model car's control module, which controls the model car's acceleration, deceleration, and turning.

Claims (9)

The data collection unit is used to obtain traffic information while the model car is running; traffic information refers to road-condition pictures, and the acquired traffic information is sent to the data pre-processing unit. The data pre-processing unit is used to pre-process the received road-condition pictures, which means performing, in order, grayscale conversion, noise reduction, binarization, character cutting, and normalization. The deep convolutional neural network is used, during training, to train on the data set composed of the pre-processed road-condition pictures to obtain a mature deep convolutional neural network; while the model car is running, it takes the pre-processed road-condition pictures as input and obtains the model car's control information, which includes the steering direction, steering angle, and throttle amount. The control unit passes the model car's control information to the model car, completing the model car's automatic driving.
After a road-condition picture is input, it first passes through the first-layer image-normalization layer for image pre-processing. Then, after the four 5x5 convolutional layers and the two 3x3 convolutional layers, the picture becomes a high-dimensional feature map: the six convolutional layers complete the feature extraction from the road-condition picture, i.e., they extract local features from the image. Finally, the four fully connected layers output the final deflection ratio, which is the predicted angle by which the model car should next deviate, divided by 90 degrees: a positive value represents a left turn, a negative value a right turn, and the magnitude multiplied by 90 degrees gives the angle of deviation from the current driving direction. The fully connected layers are equivalent to summarizing the features extracted by the convolutions and predicting the next steering angle.
CN201910303324.5A | 2019-04-16 | An automatic driving system and method based on monocular camera and Raspberry Pi | Pending | CN109901595A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910303324.5A | 2019-04-16 | 2019-04-16 | An automatic driving system and method based on monocular camera and Raspberry Pi (CN109901595A)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910303324.5A | 2019-04-16 | 2019-04-16 | An automatic driving system and method based on monocular camera and Raspberry Pi (CN109901595A)

Publications (1)

Publication Number | Publication Date
CN109901595A | 2019-06-18

Family

ID=66954899

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910303324.5A | 2019-04-16 | 2019-04-16 | An automatic driving system and method based on monocular camera and Raspberry Pi (CN109901595A, pending)

Country Status (1)

Country | Link
CN | CN109901595A (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110244734A (en)* | 2019-06-20 | 2019-09-17 | 中山大学 | A method for path planning of autonomous vehicles based on deep convolutional neural network
CN110534009A (en)* | 2019-09-05 | 2019-12-03 | 北京青橙创客教育科技有限公司 | An artificial-intelligence teaching aid for driverless courses
CN111142519A (en)* | 2019-12-17 | 2020-05-12 | 西安工业大学 | Automatic driving system and control method based on computer vision and ultrasonic radar redundancy
CN111488418A (en)* | 2020-03-09 | 2020-08-04 | 北京百度网讯科技有限公司 | Vehicle posture correction method, device, equipment and storage medium
CN111506067A (en)* | 2020-04-20 | 2020-08-07 | 上海电子信息职业技术学院 | Smart model car
CN112785466A (en)* | 2020-12-31 | 2021-05-11 | 科大讯飞股份有限公司 | AI enabling method and device of hardware, storage medium and equipment
CN112966653A (en)* | 2021-03-29 | 2021-06-15 | 深圳市优必选科技股份有限公司 | Line patrol model training method, line patrol method and line patrol system
US11403069B2 (en) | 2017-07-24 | 2022-08-02 | Tesla, Inc. | Accelerated mathematical engine
US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit
US11487288B2 (en) | 2017-03-23 | 2022-11-01 | Tesla, Inc. | Data synthesis for autonomous control systems
US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view
US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices
US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array
US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target
US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform
US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems
US11665108B2 (en) | 2018-10-25 | 2023-05-30 | Tesla, Inc. | QoS manager for system on a chip communications
US11681649B2 (en) | 2017-07-24 | 2023-06-20 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting
US11734562B2 (en) | 2018-06-20 | 2023-08-22 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving
US11748620B2 (en) | 2019-02-01 | 2023-09-05 | Tesla, Inc. | Generating ground truth for machine learning from time series elements
US11790664B2 (en) | 2019-02-19 | 2023-10-17 | Tesla, Inc. | Estimating object properties using visual image data
US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles
US11841434B2 (en) | 2018-07-20 | 2023-12-12 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems
US11893774B2 (en) | 2018-10-11 | 2024-02-06 | Tesla, Inc. | Systems and methods for training machine models with augmented data
US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests
US12014553B2 (en) | 2019-02-01 | 2024-06-18 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving
US12307350B2 (en) | 2018-01-04 | 2025-05-20 | Tesla, Inc. | Systems and methods for hardware-based pooling

Citations (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2015138981A1 (en)* | 2014-03-14 | 2015-09-17 | ElectroCore, LLC | Devices and methods for treating medical disorders with evoked potentials and vagus nerve stimulation
EP2942855A1 (en)* | 2014-05-08 | 2015-11-11 | Rheinisch-Westfälisch-Technische Hochschule Aachen | Method and system for monitoring distribution systems
CN107167580A (en)* | 2016-12-17 | 2017-09-15 | 重庆大学 | Pothole detection method based on acceleration sensor and machine learning
CN108438004A (en)* | 2018-03-05 | 2018-08-24 | 长安大学 | Lane departure warning system based on monocular vision
CN108620950A (en)* | 2018-05-08 | 2018-10-09 | 华中科技大学无锡研究院 | A turning cutting tool drilling monitoring method and system
CN108664028A (en)* | 2018-05-21 | 2018-10-16 | 南昌航空大学 | Omnidirectional-vision smart car convenient for secondary development
CN108830171A (en)* | 2018-05-24 | 2018-11-16 | 中山大学 | A deep-learning-based visual detection method for guide lines in intelligent logistics warehouses
CN108960308A (en)* | 2018-06-25 | 2018-12-07 | 中国科学院自动化研究所 | Traffic sign recognition method, device, vehicle-mounted terminal and vehicle
CN109446919A (en)* | 2018-09-30 | 2019-03-08 | 贵州大学 | A visual lane keeping method based on end-to-end learning
CN109459037A (en)* | 2018-12-29 | 2019-03-12 | 南京师范大学镇江创新发展研究院 | An environment information acquisition method and system based on a SLAM intelligent carrier
CN109471732A (en)* | 2018-11-22 | 2019-03-15 | 山东大学 | A data allocation method for CPU-FPGA heterogeneous multi-core systems


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MICHAEL G. BECHTEL, et al.: "A Low-cost Deep Neural Network-based Autonomous Car", 2018 IEEE 24th International Conference on Embedded and Real-Time Computing Systems and Applications *
LI Yunwu, et al.: "Development of a self-driving transport vehicle for field roads in hilly and mountainous areas and its visual navigation system", Transactions of the Chinese Society of Agricultural Engineering *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US12020476B2 (en) | 2017-03-23 | 2024-06-25 | Tesla, Inc. | Data synthesis for autonomous control systems
US11487288B2 (en) | 2017-03-23 | 2022-11-01 | Tesla, Inc. | Data synthesis for autonomous control systems
US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit
US11681649B2 (en) | 2017-07-24 | 2023-06-20 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting
US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests
US12086097B2 (en) | 2017-07-24 | 2024-09-10 | Tesla, Inc. | Vector computational unit
US12216610B2 (en) | 2017-07-24 | 2025-02-04 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting
US11403069B2 (en) | 2017-07-24 | 2022-08-02 | Tesla, Inc. | Accelerated mathematical engine
US12307350B2 (en) | 2018-01-04 | 2025-05-20 | Tesla, Inc. | Systems and methods for hardware-based pooling
US11797304B2 (en) | 2018-02-01 | 2023-10-24 | Tesla, Inc. | Instruction set architecture for a vector computational unit
US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array
US11734562B2 (en) | 2018-06-20 | 2023-08-22 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) | 2018-07-20 | 2023-12-12 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems
US12079723B2 (en) | 2018-07-26 | 2024-09-03 | Tesla, Inc. | Optimizing neural network structures for embedded systems
US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems
US11983630B2 (en) | 2018-09-03 | 2024-05-14 | Tesla, Inc. | Neural networks for embedded devices
US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices
US12346816B2 (en) | 2018-09-03 | 2025-07-01 | Tesla, Inc. | Neural networks for embedded devices
US11893774B2 (en) | 2018-10-11 | 2024-02-06 | Tesla, Inc. | Systems and methods for training machine models with augmented data
US11665108B2 (en) | 2018-10-25 | 2023-05-30 | Tesla, Inc. | QoS manager for system on a chip communications
US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles
US12367405B2 (en) | 2018-12-03 | 2025-07-22 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles
US12198396B2 (en) | 2018-12-04 | 2025-01-14 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view
US11908171B2 (en) | 2018-12-04 | 2024-02-20 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view
US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view
US12136030B2 (en) | 2018-12-27 | 2024-11-05 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform
US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform
US11748620B2 (en) | 2019-02-01 | 2023-09-05 | Tesla, Inc. | Generating ground truth for machine learning from time series elements
US12014553B2 (en) | 2019-02-01 | 2024-06-18 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving
US12223428B2 (en) | 2019-02-01 | 2025-02-11 | Tesla, Inc. | Generating ground truth for machine learning from time series elements
US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target
US12164310B2 (en) | 2019-02-11 | 2024-12-10 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target
US12236689B2 (en) | 2019-02-19 | 2025-02-25 | Tesla, Inc. | Estimating object properties using visual image data
US11790664B2 (en) | 2019-02-19 | 2023-10-17 | Tesla, Inc. | Estimating object properties using visual image data
CN110244734A (en) * | 2019-06-20 | 2019-09-17 | Sun Yat-sen University | Path planning method for autonomous vehicles based on a deep convolutional neural network
CN110534009A (en) * | 2019-09-05 | 2019-12-03 | Beijing Qingcheng Maker Education Technology Co., Ltd. | Artificial-intelligence teaching aid for autonomous-driving courses
CN111142519A (en) * | 2019-12-17 | 2020-05-12 | Xi'an Technological University | Automatic driving system and control method based on computer vision and ultrasonic radar redundancy
CN111488418B (en) * | 2020-03-09 | 2023-07-28 | Apollo Intelligent Technology (Beijing) Co., Ltd. | Vehicle pose correction method, device, equipment and storage medium
CN111488418A (en) * | 2020-03-09 | 2020-08-04 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Vehicle pose correction method, device, equipment and storage medium
CN111506067A (en) * | 2020-04-20 | 2020-08-07 | Shanghai Technical Institute of Electronics & Information | Smart model car
CN112785466A (en) * | 2020-12-31 | 2021-05-11 | iFLYTEK Co., Ltd. | AI enabling method and device for hardware, storage medium and equipment
CN112966653B (en) * | 2021-03-29 | 2023-12-19 | UBTECH Robotics Corp. | Line inspection model training method, line inspection method and line inspection system
CN112966653A (en) * | 2021-03-29 | 2021-06-15 | UBTECH Robotics Corp. | Line inspection model training method, line inspection method and line inspection system

Similar Documents

Publication | Publication Date | Title
CN109901595A (en) | An automatic driving system and method based on a monocular camera and Raspberry Pi
CN114820702B (en) | Multi-target pedestrian tracking method from a drone perspective based on YOLOv5 and DeepSORT
CN110924340B (en) | Mobile robot system for intelligently picking up garbage and implementation method
CN114723955A (en) | Image processing method, device, equipment and computer-readable storage medium
CN110874578A (en) | Vehicle identification and tracking method from a drone perspective based on reinforcement learning
CN107351080B (en) | Hybrid intelligence research system based on a camera unit array and control method
CN205693767U (en) | Unmanned aircraft system
CN102945554A (en) | Target tracking method based on learning and speeded-up robust features (SURF)
Hua et al. | Light-weight UAV object tracking network based on strategy gradient and attention mechanism
CN110232361A (en) | Human behavior intention recognition method and system based on a three-dimensional residual dense network
CN118254827A (en) | Road safety emergency response system for new-energy unmanned vehicles
CN116453020A (en) | Binocular recognition method, system, device and medium
CN110022422A (en) | Video frame sequence generation method based on a densely connected network
CN109299656A (en) | Method for determining the scene depth of field of a vehicle vision system
CN114326821A (en) | Unmanned aerial vehicle autonomous obstacle avoidance system and method based on deep reinforcement learning
CN112785564B (en) | Pedestrian detection and tracking system and method based on a mechanical arm
CN109919107B (en) | Traffic police gesture recognition method for unmanned vehicles based on deep learning
Wang et al. | End-to-end driving simulation via angle branched network
US12094221B2 | Embedded deep learning multi-scale object detection model using real-time distant region locating device and method thereof
CN119975377A (en) | Vehicle emergency control method and system under abnormal driver behavior
CN113848884B (en) | Unmanned engineering machinery decision method based on feature fusion and spatio-temporal constraints
Schenkel et al. | Domain adaptation for semantic segmentation using convolutional neural networks
CN119598140A (en) | UAV target tracking method based on collaboration of large and small models
CN118468217B (en) | Driving control method and system based on personalized federated contrastive learning
CN113382304B (en) | Video stitching method based on artificial intelligence technology

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2019-06-18

