CN116386026B - Training method of point cloud 3D detection model and point cloud detection method - Google Patents

Training method of point cloud 3D detection model and point cloud detection method

Info

Publication number
CN116386026B
CN116386026B (application CN202310340658.6A)
Authority
CN
China
Prior art keywords
point cloud
iteration round
detection model
training data
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310340658.6A
Other languages
Chinese (zh)
Other versions
CN116386026A (en)
Inventor
鞠波
邹智康
叶晓青
谭啸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310340658.6A
Publication of CN116386026A
Application granted; publication of CN116386026B
Legal status: Active

Abstract

The disclosure provides a training method for a point cloud 3D detection model and a point cloud detection method, and relates to the technical field of artificial intelligence, in particular to the technical fields of deep learning, point cloud detection, and automatic driving. The method includes: acquiring first training data and second training data, where the first training data are point cloud data obtained from labeled 3D detection frames and the second training data are point cloud data collected by a radar; acquiring the target number of first training data to be input into the point cloud 3D detection model at each iteration round, where the target number is associated with the iteration round of the point cloud 3D detection model; and, for any iteration round in the training process, inputting the second training data and the target number of first training data corresponding to that round into the point cloud 3D detection model, so as to train the model.

Description

Training method of point cloud 3D detection model and point cloud detection method
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical field of deep learning, point cloud detection and automatic driving, and specifically relates to a training method of a point cloud 3D detection model, a point cloud detection method, a training device of the point cloud 3D detection model, a point cloud detection device, electronic equipment, a storage medium, a computer program product and an automatic driving vehicle.
Background
The laser radar (lidar) plays an important role in automatic driving systems. Using lidar, an automatic driving system can build an accurate real-time three-dimensional (3D) model of the environment in which the vehicle is located, which improves the safety of the system, and can accurately perceive the position, size, and pose of a 3D target in the lidar point cloud coordinate system. At present, the point cloud 3D target detection task is usually realized through a neural network model.
Disclosure of Invention
The disclosure provides a training method of a point cloud 3D detection model, a point cloud detection method, a training device of the point cloud 3D detection model, a point cloud detection device, electronic equipment, a storage medium, a computer program product and an automatic driving vehicle.
According to a first aspect of the present disclosure, there is provided a training method of a point cloud 3D detection model, including:
Acquiring first training data and second training data, wherein the first training data are point cloud data acquired from a marked 3D detection frame, and the second training data are point cloud data acquired by a radar;
Acquiring a target number of the first training data to be input into the point cloud 3D detection model at each iteration round, wherein the target number is associated with the iteration round of the point cloud 3D detection model;
Inputting the second training data and the first training data of the target number corresponding to the iterative round into the point cloud 3D detection model aiming at any iterative round in the training process of the point cloud 3D detection model so as to train the point cloud 3D detection model;
the input of the trained point cloud 3D detection model is point cloud data acquired by a radar, and the output is a 3D detection frame.
According to a second aspect of the present disclosure, there is provided a point cloud detection method, including:
acquiring point cloud data acquired by a radar;
inputting the point cloud data into a point cloud 3D detection model, and acquiring a 3D detection frame output by the point cloud 3D detection model;
The point cloud 3D detection model is a model obtained after training based on the training method of the point cloud 3D detection model according to the first aspect.
According to a third aspect of the present disclosure, there is provided a training apparatus of a point cloud 3D detection model, including:
The first acquisition module is used for acquiring first training data and second training data, wherein the first training data are point cloud data acquired from the marked 3D detection frame, and the second training data are point cloud data acquired by a radar;
The second acquisition module is used for acquiring the target number of the first training data which needs to be input into the point cloud 3D detection model under each iteration round, and the target number is associated with the iteration round of the point cloud 3D detection model;
the training module is used for inputting the second training data and the first training data of the target number corresponding to any iteration round in the training process of the point cloud 3D detection model into the point cloud 3D detection model so as to train the point cloud 3D detection model;
the input of the trained point cloud 3D detection model is point cloud data acquired by a radar, and the output is a 3D detection frame.
According to a fourth aspect of the present disclosure, there is provided a point cloud detection apparatus, including:
the third acquisition module is used for acquiring point cloud data acquired by the radar;
The fourth acquisition module is used for inputting the point cloud data into a point cloud 3D detection model and acquiring a 3D detection frame output by the point cloud 3D detection model;
the point cloud 3D detection model is a model obtained after training by a training device based on the point cloud 3D detection model according to the third aspect.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first or second aspect.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first or second aspect.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first or second aspect.
According to an eighth aspect of the present disclosure, there is provided an autonomous vehicle comprising the point cloud detection apparatus as described in the fourth aspect.
In the embodiment of the disclosure, the number of first training data input into the point cloud 3D detection model is associated with the iteration round of the model, so the training data input into the model is variable. This makes it possible to control the model's input data, makes training more flexible, and allows the accuracy of the trained point cloud 3D detection model to be improved by controlling the amount of input data.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a training method of a point cloud 3D detection model provided in an embodiment of the present disclosure;
fig. 2 is a relationship diagram of the input number of first training data and iteration rounds in a training method of a point cloud 3D detection model according to an embodiment of the present disclosure;
Fig. 3 is a flowchart of a point cloud detection method provided in an embodiment of the present disclosure;
FIG. 4 is one of the block diagrams of a training device for a point cloud 3D detection model provided in an embodiment of the present disclosure;
FIG. 5 is a second block diagram of a training device for a point cloud 3D detection model according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a point cloud detecting device according to an embodiment of the present disclosure;
Fig. 7 is a block diagram of an electronic device used to implement a training method or a point cloud detection method of a point cloud 3D detection model of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
For a better understanding, related concepts and principles that may be involved in embodiments of the disclosure are explained below.
The point cloud 3D detection task is to build a real-time 3D model of the current environment from a lidar and, at the same time, to perceive the position, size, and pose of a 3D target in the lidar point cloud coordinate system. Typically, the data collected by a lidar is displayed and processed in the form of a point cloud: simply N points in 3D space, where each point contains three floating-point values X, Y, Z representing its spatial location and one value R representing the echo intensity. Such data captures the surface geometry of objects. In the related art, point cloud 3D detection is generally implemented through a point cloud 3D detection model.
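For orientation only, such a point cloud can be held as an N x 4 array. The file name and the packed float32 binary layout below are assumptions for illustration (the KITTI-style .bin layout is one common convention), not something the disclosure specifies:

```python
import numpy as np

# One lidar scan as an (N, 4) array: columns X, Y, Z give the spatial
# location in the lidar coordinate system, column R the echo intensity.
# "scan.bin" and the packed float32 layout are illustrative assumptions.
points = np.fromfile("scan.bin", dtype=np.float32).reshape(-1, 4)

xyz = points[:, :3]        # spatial locations of the N points
intensity = points[:, 3]   # echo intensity R per point
print(points.shape)        # (N, 4)
```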
Referring to fig. 1, fig. 1 is a flowchart of a training method of a point cloud 3D detection model according to an embodiment of the disclosure, as shown in fig. 1, the method includes the following steps:
step S101, acquiring first training data and second training data, wherein the first training data are point cloud data acquired from a marked 3D detection frame, and the second training data are point cloud data acquired by a radar.
It should be noted that, the method provided by the embodiment of the present disclosure may be applied to an electronic device such as a computer, a tablet computer, a mobile phone, etc., and in the subsequent embodiments, a specific implementation process of the method provided by the embodiment of the present disclosure will be explained by using the electronic device as an execution body.
In this embodiment of the present disclosure, the first training data may be point cloud data corresponding to a point cloud object inside a manually labeled 3D detection frame (which may also be referred to as a 3D bounding box). For example, after point cloud data collected by a radar for a vehicle is acquired, a user manually labels a 3D detection frame around the vehicle based on the point cloud data; the first training data is then the point cloud data corresponding to the vehicle. It should be noted that point cloud data may be acquired from a large number of manually labeled 3D detection frames to build a point cloud database, and the electronic device may acquire the first training data from this database, for example by randomly selecting part of its point cloud data.
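Building such a database amounts to cropping, for each labeled frame, the points that fall inside it. A minimal sketch follows; the (center, size, yaw) box parameterization is an assumed convention, not one fixed by the disclosure:

```python
import numpy as np

def points_in_box(points, center, size, yaw):
    """Return the points that fall inside one labeled 3D detection frame.
    points: (N, 4) array of X, Y, Z, R; the box is given as a center
    (x, y, z), a size (length, width, height), and a yaw angle around
    the vertical axis. This parameterization is an assumption."""
    xyz = points[:, :3] - np.asarray(center, dtype=points.dtype)
    c, s = np.cos(-yaw), np.sin(-yaw)
    # rotate into the box's local frame (inverse of the box yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = xyz @ rot.T
    half = np.asarray(size, dtype=points.dtype) / 2.0
    mask = np.all(np.abs(local) <= half, axis=1)
    return points[mask]
```

Applied over all labeled frames, the cropped point sets form the database from which the first training data are randomly drawn.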
Alternatively, the second training data may be point cloud data collected by a lidar on a vehicle, which may be an autonomous vehicle.
Step S102, acquiring the target number of first training data that needs to be input into the point cloud 3D detection model at each iteration round, wherein the target number is associated with the iteration round of the point cloud 3D detection model.
It will be appreciated that neural network models typically need to be obtained through multiple rounds of iterative training. The point cloud 3D detection model in the embodiments of the disclosure is likewise obtained through multiple iteration rounds; one iteration round represents one round of iterative training of the model, and the amount of training data input to the model may differ from round to round.
In an embodiment of the disclosure, the iteration round of the point cloud 3D detection model is associated with the number of first training data to be input. Optionally, an association between the iteration round and the number of first training data to be input may be preset. For example, the association may be that as the iteration round increases, the amount of first training data to be input decreases. Alternatively, the number of first training data to be input at each iteration round may be set to x for the first N iteration rounds and to y after the N-th iteration round, where x differs from y. Of course, the association between the iteration round and the number of first training data to be input may take other forms, which are not enumerated here.
Step S103, inputting the second training data and the first training data of the target number corresponding to any iteration round in the training process of the point cloud 3D detection model into the point cloud 3D detection model so as to train the point cloud 3D detection model.
For example, consider a first target iteration round, which is any iteration round in the training process of the point cloud 3D detection model, and suppose the target number of first training data to be input into the model at this round is x. When the model is trained at the first target iteration round, the second training data and x first training data are input into the point cloud 3D detection model, so that the model is trained at that round. It can be understood that any iteration round in the training process can be trained in this manner: the second training data and the target number of first training data corresponding to the round are input into the model to train it at that round. Training of the point cloud 3D detection model across all iteration rounds can thus be completed in this way.
In the embodiment of the disclosure, after the target number of first training data to be input at each iteration round is acquired, for each round the second training data and the corresponding target number of first training data are input into the point cloud 3D detection model to train it at that round. Completing this for all iteration rounds yields the trained point cloud 3D detection model.
It should be noted that, in each iteration round of training, the number of second training data input may be the same, while the number of first training data input is related to the current iteration round; that is, the number of first training data input into the model, i.e. the number of input point cloud data obtained from manually labeled 3D detection frames, may differ across iteration rounds. The training data input into the point cloud 3D detection model is therefore variable, which makes it possible to control the model's input data, makes training more flexible, and allows the accuracy of the trained model to be improved by controlling the amount of input data.
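The round-by-round procedure just described can be sketched as follows. This is an illustrative sketch only: `model_step`, the data containers, and the `schedule` callable are placeholders assumed here, not the disclosure's implementation:

```python
import random

def train(model_step, radar_scenes, gt_database, schedule, num_rounds):
    """Sketch of round-by-round training.

    radar_scenes: second training data (point clouds collected by radar).
    gt_database: first training data (point clouds taken from labeled
                 3D detection frames).
    schedule(r): target number of first training data for iteration round r.
    """
    for r in range(1, num_rounds + 1):
        k = schedule(r)  # target number associated with this iteration round
        sampled = random.sample(gt_database, min(k, len(gt_database)))
        for scene in radar_scenes:
            # one training step on the radar scene plus the k sampled
            # objects; model_step stands in for the forward/backward pass
            model_step(scene, sampled)
```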
In the embodiment of the disclosure, the trained point cloud 3D detection model may be applied to an automatic driving vehicle, and the input of the trained point cloud 3D detection model, that is, the point cloud data collected by the laser radar on the automatic driving vehicle, is output as a 3D detection frame, so as to better assist the automatic driving of the automatic driving vehicle, and improve the safety of the automatic driving vehicle.
Optionally, in step S102, acquiring the target number of first training data to be input into the point cloud 3D detection model at each iteration round may specifically include:
Under the condition that the iteration rounds are the first N iteration rounds, determining the target number of the first training data which needs to be input into the point cloud 3D detection model under each iteration round as a first preset number;
Under the condition that the iteration round is an iteration round after the Nth iteration round, determining that the target number of the first training data which needs to be input into the point cloud 3D detection model under each iteration round is a second preset number;
The first preset number is greater than the second preset number, the value of N is smaller than the value of the last iteration round of the point cloud 3D detection model, and the value of N may be a preset numerical value.
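A minimal sketch of this two-level rule follows; the concrete counts are illustrative assumptions, not values from the disclosure:

```python
def target_count(round_idx, n_switch, first_preset, second_preset):
    """Target number of first training data at a given iteration round:
    first_preset for rounds 1..n_switch (the first N rounds),
    second_preset afterwards, with first_preset > second_preset."""
    return first_preset if round_idx <= n_switch else second_preset

# e.g. N = 60: 128 sampled objects per round early on, then 32 per round
print(target_count(10, n_switch=60, first_preset=128, second_preset=32))  # 128
print(target_count(75, n_switch=60, first_preset=128, second_preset=32))  # 32
```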
In the embodiment of the disclosure, the target number of first training data to be input at each iteration round is set to a first preset number for the first N iteration rounds of the point cloud 3D detection model, and to a second preset number after the N-th iteration round, where the second preset number is smaller than the first preset number. That is, after the model has completed a certain number of iteration rounds, the number of first training data input into it can be reduced for the subsequent rounds. The first training data are point cloud data acquired from manually labeled 3D detection frames and may be unrelated to the current test scene, whereas the second training data are point cloud data collected by the lidar, i.e. collected on-site in the current test scene, and are therefore related to it.
For the point cloud 3D detection model, if the training data and test data come from different scenes, a domain gap problem may occur. A domain gap means that there is a significant distribution difference between the two sets of data; if one set is used for training and the other for testing, the resulting model performs worse than a model trained and tested on data from the same distribution. For example, in the most intuitive case, suppose a batch of training data is collected on a ring road of city A and a batch of test data on a ring road of city B. A model trained on the city-A data will not perform well on the city-B test data, because there are obvious distribution differences between the two data sets, such as the lengths and widths of vehicles, the distribution of road traffic participants, and the point cloud background objects, none of which the model saw during training; the model therefore degrades at test time and its accuracy is lower.
In the embodiment of the disclosure, reducing the number of first training data reduces, during training, the influence of first training data that is irrelevant to the current test scene on the point cloud 3D detection model. In the later stage of training the model thus focuses more on the second training data, which is relevant to the current test scene, so that the trained model is better suited to that scene, the domain gap problem is effectively avoided, and the accuracy of the trained point cloud 3D detection model is improved.
Optionally, in the case where the iteration round is an iteration round after the N-th iteration round, determining the target number of first training data to be input into the point cloud 3D detection model at each iteration round to be a second preset number includes:
in the case where the iteration round is an iteration round after the N-th iteration round, determining the (N+1)-th to (N+n)-th iteration rounds, where n takes the values 1, 2, 3, ..., and the value of N+n is smaller than or equal to the value of the last iteration round of the point cloud 3D detection model;
in the (N+1)-th to (N+n)-th iteration rounds, determining the second preset number of first training data to be input into the point cloud 3D detection model at each iteration round, where the second preset number gradually decreases as the value of the iteration round increases.
For example, assume the iterative training of the point cloud 3D detection model comprises 100 iteration rounds in total and the value of N is 60. Then, in the first 60 iteration rounds, the number of first training data to be input at each round is the first preset number. Assume further that the value of n is 20; then in the 61st to 80th iteration rounds the number of first training data to be input into the model gradually decreases, for example linearly. In this way, the influence of first training data irrelevant to the current test scene on the point cloud 3D detection model can be gradually reduced in the later stage of training.
Alternatively, the values of N and n may be preset. When training the point cloud 3D detection model, the preset values of N and n then determine in which iteration rounds the number of first training data to be input is gradually reduced, so that the random sampling amount of first training data decreases gradually over those rounds. By gradually reducing the amount of input first training data in the later stage of training, the influence of first training data irrelevant to the current test scene is gradually reduced, the model focuses more on completing training with the second training data relevant to the current test scene, the domain gap problem is effectively avoided, and the accuracy of the point cloud 3D detection model is improved.
It should be noted that, after the (N+n)-th iteration round, the number of first training data input into the point cloud 3D detection model may be kept unchanged, for example the same as at the (N+n)-th round, or may be reduced further, so as to effectively reduce the influence of first training data irrelevant to the current test scene and give the trained model higher accuracy.
Optionally, the second preset number corresponding to the (N+n)-th iteration round is 0; that is, at the (N+n)-th round the number of first training data input into the model is 0. In other words, from the (N+1)-th to the (N+n)-th iteration round, the number of first training data input into the point cloud 3D detection model gradually decreases with the iteration round until it reaches 0. In the later stage of training, the training data then comprise only the second training data related to the current test scene and no first training data unrelated to it, so that training fits the current test scene more closely, the domain gap problem is effectively avoided, the trained model has higher accuracy when applied to the current test scene, and the robustness of its output is improved.
It should also be noted that after the (N+n)-th iteration round the number of first training data input into the model may be kept at 0, so that in the later stage the model is trained only with the second training data related to the current test scene and achieves a better, more robust effect on the test data.
Optionally, determining, in the (N+1)-th to (N+n)-th iteration rounds, the second preset number of first training data to be input into the point cloud 3D detection model at each iteration round includes:
acquiring the target number of first training data to be input into the point cloud 3D detection model at the N-th iteration round;
determining, according to the target number, the value of N+1, and the value of N+n, the number of first training data to be input into the point cloud 3D detection model at a target iteration round;
where the target iteration round is any one of the (N+1)-th to (N+n)-th iteration rounds.
For example, assume the target number of first training data to be input into the point cloud 3D detection model at the N-th iteration round is N, the (N+1)-th iteration round is denoted e0, and the (N+n)-th iteration round is denoted e1. Then the number of first training data to be input at any round from the (N+1)-th to the (N+n)-th can be determined from N, e0, and e1.
In the embodiment of the disclosure, the number of first training data to be input at any round from the (N+1)-th to the (N+n)-th iteration round can thus be determined from the value of that round and the target number of first training data input at the N-th round. The first training data fed to the model can therefore be adjusted flexibly and the training process controlled better, giving the trained point cloud 3D detection model better accuracy.
Optionally, in the (N+1)-th to (N+n)-th iteration rounds, the second preset number decreases linearly, or along a curve, with respect to the iteration round. For example, a predetermined algebraic relation may be satisfied between N, e0, e1, and the number of first training data.
Illustratively, in an alternative embodiment, N, e0, e1, and the number of first training data to be input satisfy the following algebraic relation:
f(e) = N * (e1 - e) / (e1 - e0)
where f(e) represents the number of first training data to be input into the point cloud 3D detection model at the e-th iteration round (i.e., the second target iteration round), e0 represents the (N+1)-th iteration round, e1 represents the (N+n)-th iteration round, and N represents the target number of first training data input at the N-th iteration round. In this case, as shown in fig. 2, from the (N+1)-th to the (N+n)-th iteration round, the number of first training data to be input at each round decreases linearly with the iteration round.
Or, in another alternative embodiment, N, e0, e1, and the number of first training data to be input satisfy an algebraic relation under which, from the (N+1)-th to the (N+n)-th iteration round, the number of first training data to be input into the point cloud 3D detection model at each round decreases along a curve, rather than a straight line, with respect to the iteration round; f(e), e0, e1, and N have the same meanings as above.
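A sketch of both annealing schedules is given below. The linear form follows the relation above; the half-cosine used for the curved form is only one plausible choice, an assumption of this sketch, since the disclosure does not fix a specific curve. Here n_target stands for the N of the relation above:

```python
import math

def linear_count(e, n_target, e0, e1):
    """Linear decay: n_target at round e0, 0 at round e1."""
    if e < e0:
        return n_target
    if e >= e1:
        return 0
    return round(n_target * (e1 - e) / (e1 - e0))

def cosine_count(e, n_target, e0, e1):
    """Curved decay; the half-cosine is an assumption, one of many
    curves decreasing from n_target to 0 over [e0, e1]."""
    if e < e0:
        return n_target
    if e >= e1:
        return 0
    t = (e - e0) / (e1 - e0)
    return round(n_target * 0.5 * (1.0 + math.cos(math.pi * t)))

# Worked example from the description: 100 rounds total, N = 60, n = 20,
# so the count stays at n_target through round 60 and decays over 61-80.
# n_target = 128 is an illustrative assumption.
for e in (60, 61, 70, 80, 100):
    print(e, linear_count(e, n_target=128, e0=61, e1=80))
```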
According to the embodiments of the disclosure, different algebraic relations allow flexible adjustment of the number of first training data to be input into the point cloud 3D detection model at each round from the (N+1)-th to the (N+n)-th iteration round, while ensuring that this number gradually decreases. In the later stage of training, the training data therefore shift gradually toward the second training data related to the current test scene, so that training fits the current test scene more closely, the domain gap problem is effectively avoided, and the trained point cloud 3D detection model has higher accuracy when applied to the current test scene.
Referring to fig. 3, fig. 3 is a flowchart of a point cloud detection method according to an embodiment of the disclosure, as shown in fig. 3, the method includes the following steps:
step S301, acquiring point cloud data acquired by a radar.
Alternatively, the radar may be a lidar mounted on an autonomous vehicle. In the embodiment of the disclosure, the point cloud detection method may be applied to an automatic driving vehicle.
Step S302, inputting the point cloud data into a point cloud 3D detection model, and acquiring a 3D detection frame output by the point cloud 3D detection model.
The point cloud 3D detection model is a model obtained after training based on the training method of the point cloud 3D detection model in the above embodiment.
In the embodiment of the disclosure, the point cloud 3D detection model is trained by the training method above and has a better, more robust detection effect. The 3D detection frames it outputs therefore have higher accuracy, which better assists the driving of an autonomous vehicle and effectively improves its safety.
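For illustration, steps S301 and S302 reduce to a small wrapper; the callable model interface and the box row format below are assumptions, not part of the disclosure:

```python
import numpy as np

def detect_3d(model, scan_path):
    """Run a trained point cloud 3D detection model on one radar scan.
    `model` is assumed to be a callable returning 3D detection frames,
    e.g. rows of (x, y, z, length, width, height, yaw); both the callable
    interface and this row format are illustrative assumptions."""
    points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)
    return model(points)
```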
Referring to fig. 4, fig. 4 is one of the block diagrams of a training apparatus for a point cloud 3D detection model according to an embodiment of the present disclosure, as shown in fig. 4, a training apparatus 400 for a point cloud 3D detection model includes:
the first obtaining module 401 is configured to obtain first training data and second training data, where the first training data is point cloud data obtained from a labeled 3D detection frame, and the second training data is point cloud data collected by a radar;
A second obtaining module 402, configured to obtain a target number of the first training data to be input into the point cloud 3D detection model under each iteration round, where the target number is associated with the iteration round of the point cloud 3D detection model;
The training module 403 is configured to input, for any iteration round in the training process of the point cloud 3D detection model, the second training data and the first training data of the target number corresponding to the iteration round into the point cloud 3D detection model, so as to train the point cloud 3D detection model;
the input of the trained point cloud 3D detection model is point cloud data acquired by a radar, and the output is a 3D detection frame.
Optionally, referring to fig. 5, the second obtaining module 402 includes:
A first determining unit 4021, configured to determine, in a case where the iteration round is the first N iteration rounds, that a target number of the first training data to be input to the point cloud 3D detection model under each iteration round is a first preset number;
a second determining unit 4022, configured to determine, in a case where the iteration round is an iteration round after the nth iteration round, that a target number of the first training data to be input to the point cloud 3D detection model under each iteration round is a second preset number;
The first preset number is larger than the second preset number, and the value of N is smaller than the value of the last iteration round of the point cloud 3D detection model.
Optionally, the second determining unit 4022 is further configured to:
in the case where the iteration round is an iteration round after the N-th iteration round, determine the (N+1)-th to (N+n)-th iteration rounds, where n takes the values 1, 2, 3, ..., and the value of N+n is smaller than or equal to the value of the last iteration round of the point cloud 3D detection model;
in the (N+1)-th to (N+n)-th iteration rounds, determine the second preset number of first training data to be input into the point cloud 3D detection model at each iteration round, where the second preset number gradually decreases as the value of the iteration round increases.
Optionally, the second determining unit 4022 is further configured to:
acquire the target number of first training data to be input into the point cloud 3D detection model at the N-th iteration round;
determine, according to the target number, the value of N+1, and the value of N+n, the number of first training data to be input into the point cloud 3D detection model at a target iteration round;
where the target iteration round is any one of the (N+1)-th to (N+n)-th iteration rounds.
Optionally, in the (N+1)-th to (N+n)-th iteration rounds, the second preset number decreases linearly, or along a curve, with respect to the iteration round.
Optionally, the second preset number corresponding to the (N+n)-th iteration round is 0.
It should be noted that, the device provided in the embodiment of the present disclosure may implement all the technical processes of the training method of the point cloud 3D detection model described in fig. 1, and may achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
Referring to fig. 6, fig. 6 is a block diagram of a point cloud detection apparatus according to an embodiment of the disclosure, and as shown in fig. 6, the point cloud detection apparatus 600 includes:
A third acquiring module 601, configured to acquire point cloud data acquired by a radar;
A fourth obtaining module 602, configured to input the point cloud data into a point cloud 3D detection model, and obtain a 3D detection frame output by the point cloud 3D detection model;
wherein the point cloud 3D detection model is a model obtained after training by the above training apparatus of the point cloud 3D detection model.
It should be noted that, the device provided in the embodiment of the present disclosure can implement all the technical processes of the point cloud detection method described in fig. 3, and achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
The embodiment of the disclosure also provides an automatic driving vehicle comprising the above point cloud detection apparatus. By adopting the point cloud detection apparatus, the automatic driving vehicle provided by the embodiment of the disclosure can obtain 3D detection frames with higher accuracy, better assisting its driving and effectively improving its safety.
In the technical solutions of the disclosure, the acquisition, storage, and application of the personal information involved all conform to the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706, such as a keyboard or a mouse; an output unit 707, such as various types of displays and speakers; a storage unit 708, such as a magnetic disk or an optical disc; and a communication unit 709, such as a network card, a modem, or a wireless communication transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, a training method of a point cloud 3D detection model or a point cloud detection method. For example, in some embodiments, the training method of the point cloud 3D detection model or the point cloud detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the training method of the point cloud 3D detection model or the point cloud detection method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the training method or the point cloud detection method of the point cloud 3D detection model described above by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (16)

Translated fromChinese
1.一种点云3D检测模型的训练方法,包括:1. A method for training a point cloud 3D detection model, comprising:获取第一训练数据和第二训练数据,其中,所述第一训练数据为从标注的3D检测框中获取的点云数据,所述第二训练数据为雷达采集的点云数据;Acquire first training data and second training data, wherein the first training data is point cloud data acquired from a marked 3D detection frame, and the second training data is point cloud data collected by a radar;获取每一迭代轮次下需输入所述点云3D检测模型的所述第一训练数据的目标数量,所述目标数量与所述点云3D检测模型的迭代轮次相关联;Obtaining a target number of the first training data that needs to be input into the point cloud 3D detection model in each iteration round, wherein the target number is associated with the iteration round of the point cloud 3D detection model;针对所述点云3D检测模型训练过程中的任一迭代轮次,将所述第二训练数据及该迭代轮次对应的目标数量的第一训练数据输入所述点云3D检测模型,以对所述点云3D检测模型进行训练;For any iteration round in the training process of the point cloud 3D detection model, input the second training data and the first training data of the target quantity corresponding to the iteration round into the point cloud 3D detection model to train the point cloud 3D detection model;其中,训练后的点云3D检测模型的输入为雷达采集的点云数据,输出为3D检测框;The input of the trained point cloud 3D detection model is the point cloud data collected by the radar, and the output is the 3D detection box;所述获取每一迭代轮次下需输入所述点云3D检测模型的所述第一训练数据的目标数量,包括:The obtaining of a target number of the first training data to be input into the point cloud 3D detection model in each iteration round includes:在所述迭代轮次为前N个迭代轮次的情况下,确定每一迭代轮次下需输入所述点云3D检测模型的所述第一训练数据的目标数量为第一预设数量;When the iteration round is the first N iteration rounds, determining that the target number of the first training data to be input into the point cloud 3D detection model in each iteration round is a first preset number;在所述迭代轮次为第N个迭代轮次之后的迭代轮次的情况下,确定每一迭代轮次下需输入所述点云3D检测模型的所述第一训练数据的目标数量为第二预设数量;In a case where the iteration round is an iteration round after the Nth iteration round, determining that the target number of the first training data to be input into the point cloud 3D detection model in each iteration round is a second preset number;其中,所述第一预设数量大于所述第二预设数量,所述N的取值小于所述点云3D检测模型的最后一个迭代轮次的取值。The first preset number is greater than the second preset number, and the value of N is less than the value of the last iteration round of the point cloud 3D detection model.2.根据权利要求1所述的方法,其中,所述在所述迭代轮次为第N个迭代轮次之后的迭代轮次的情况下,确定每一迭代轮次下需输入所述点云3D检测模型的所述第一训练数据的目标数量为第二预设数量,包括:2. The method according to claim 1, wherein, when the iteration round is an iteration round after the Nth iteration round, determining the target number of the first training data to be input into the point cloud 3D detection model in each iteration round as a second preset number comprises:在所述迭代轮次为第N个迭代轮次之后的迭代轮次的情况下,确定第N+1个迭代轮次至第N+n个迭代轮次,n的取值为1、2、3……n,且N+n的取值小于或等于所述点云3D检测模型的最后一个迭代轮次的取值;In the case where the iteration round is an iteration round after the Nth iteration round, determining the N+1th iteration round to the N+nth iteration round, where the value of n is 1, 2, 3, ... n, and the value of N+n is less than or equal to the value of the last iteration round of the point cloud 3D detection model;在所述第N+1个迭代轮次至第N+n个迭代轮次中,确定每一迭代轮次下需输入所述点云3D检测模型的所述第一训练数据的第二预设数量,其中,所述第二预设数量随着所述迭代轮次取值的增大而逐渐减小。In the N+1th iteration round to the N+nth iteration round, a second preset number of the first training data required to be input into the point cloud 3D detection model in each iteration round is determined, wherein the second preset number gradually decreases as the value of the iteration round increases.3.根据权利要求2所述的方法,其中,所述在所述第N+1个迭代轮次至第N+n个迭代轮次中,确定每一迭代轮次下需输入所述点云3D检测模型的所述第一训练数据的第二预设数量,包括:3. 
The method according to claim 2, wherein determining, in the (N+1)-th to (N+n)-th iteration rounds, the second preset number of the first training data to be input into the point cloud 3D detection model in each iteration round comprises:
obtaining the target number of the first training data to be input into the point cloud 3D detection model in the N-th iteration round; and
determining, according to the target number, the value of N+1 and the value of N+n, the number of the first training data to be input into the point cloud 3D detection model in a target iteration round;
wherein the target iteration round is any iteration round from the (N+1)-th iteration round to the (N+n)-th iteration round.

4. The method according to claim 3, wherein, in the (N+1)-th to (N+n)-th iteration rounds, the second preset number decreases linearly or along a curve with respect to the iteration round.

5. The method according to claim 2, wherein the second preset number corresponding to the (N+n)-th iteration round is 0.

6. A point cloud detection method, comprising:
obtaining point cloud data collected by a radar; and
inputting the point cloud data into a point cloud 3D detection model, and obtaining a 3D detection frame output by the point cloud 3D detection model;
wherein the point cloud 3D detection model is a model obtained by training with the training method of a point cloud 3D detection model according to any one of claims 1-5.

7. A training device for a point cloud 3D detection model, comprising:
a first acquisition module, configured to acquire first training data and second training data, wherein the first training data is point cloud data acquired from marked 3D detection frames, and the second training data is point cloud data collected by a radar;
a second acquisition module, configured to acquire a target number of the first training data to be input into the point cloud 3D detection model in each iteration round, the target number being associated with the iteration round of the point cloud 3D detection model; and
a training module, configured to, for any iteration round in the training process of the point cloud 3D detection model, input the second training data and the target number of the first training data corresponding to that iteration round into the point cloud 3D detection model, so as to train the point cloud 3D detection model;
wherein the input of the trained point cloud 3D detection model is point cloud data collected by a radar, and the output is a 3D detection frame;
the second acquisition module comprises:
a first determining unit, configured to, in a case where the iteration round is one of the first N iteration rounds, determine that the target number of the first training data to be input into the point cloud 3D detection model in each iteration round is a first preset number; and
a second determining unit, configured to, in a case where the iteration round is an iteration round after the N-th iteration round, determine that the target number of the first training data to be input into the point cloud 3D detection model in each iteration round is a second preset number;
wherein the first preset number is greater than the second preset number, and the value of N is less than the value of the last iteration round of the point cloud 3D detection model.

8. The device according to claim 7, wherein the second determining unit is further configured to:
in a case where the iteration round is an iteration round after the N-th iteration round, determine the (N+1)-th to (N+n)-th iteration rounds, where n takes the values 1, 2, 3, ..., n, and the value of N+n is less than or equal to the value of the last iteration round of the point cloud 3D detection model; and
in the (N+1)-th to (N+n)-th iteration rounds, determine the second preset number of the first training data to be input into the point cloud 3D detection model in each iteration round, wherein the second preset number gradually decreases as the value of the iteration round increases.

9. The device according to claim 8, wherein the second determining unit is further configured to:
obtain the target number of the first training data to be input into the point cloud 3D detection model in the N-th iteration round; and
determine, according to the target number, the value of N+1 and the value of N+n, the number of the first training data to be input into the point cloud 3D detection model in a target iteration round;
wherein the target iteration round is any iteration round from the (N+1)-th iteration round to the (N+n)-th iteration round.

10. The device according to claim 9, wherein, in the (N+1)-th to (N+n)-th iteration rounds, the second preset number decreases linearly or along a curve with respect to the iteration round.

11. The device according to claim 8, wherein the second preset number corresponding to the (N+n)-th iteration round is 0.

12. A point cloud detection device, comprising:
a third acquisition module, configured to acquire point cloud data collected by a radar; and
a fourth acquisition module, configured to input the point cloud data into a point cloud 3D detection model and acquire a 3D detection frame output by the point cloud 3D detection model;
wherein the point cloud 3D detection model is a model obtained after training by the training device for a point cloud 3D detection model according to any one of claims 7-11.

13. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-6.

14. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the method according to any one of claims 1-6.

15. A computer program product, comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.

16. An autonomous driving vehicle, comprising the point cloud detection device according to claim 12.
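The sampling schedule recited in claims 3 to 5 (and mirrored in device claims 8 to 11) keeps the number of ground-truth-sampled point clouds constant for the first N iteration rounds, then reduces it until it reaches zero at round N+n. As a rough illustration only, the following Python sketch computes such a target number under the linear-decay option of claims 4 and 10; the function name, parameter names, and example values are hypothetical and are not taken from the patent.

def gt_sample_count(epoch: int, first_preset_num: int, n_constant: int, n_decay: int) -> int:
    """Target number of first training data (GT-sampled point clouds) for one iteration round.

    epoch            -- 1-indexed iteration round
    first_preset_num -- first preset number used in rounds 1..N
    n_constant       -- N, the number of rounds trained with the first preset number
    n_decay          -- n, the number of rounds over which the count falls to 0
    """
    if epoch <= n_constant:
        # Rounds 1..N: constant first preset number (first determining unit, claim 7)
        return first_preset_num
    if epoch >= n_constant + n_decay:
        # Round N+n and later: the second preset number reaches 0 (claims 5 and 11)
        return 0
    # Rounds N+1..N+n-1: the count decreases linearly with the round index
    progress = (epoch - n_constant) / n_decay
    return round(first_preset_num * (1.0 - progress))

With the hypothetical settings first_preset_num=15, n_constant=60 and n_decay=20, rounds 1 to 60 would mix 15 sampled objects into each scene, rounds 61 to 79 would taper that count from 14 down to 1, and round 80 onward would train on the radar-collected second training data alone, consistent with the claim that the second preset number at round N+n is 0.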
Granted patent CN116386026B (en), application CN202310340658.6A, priority date 2023-03-31, filing date 2023-03-31, status Active
Title: Training method of point cloud 3D detection model and point cloud detection method

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310340658.6A | 2023-03-31 | 2023-03-31 | Training method of point cloud 3D detection model and point cloud detection method


Publications (2)

Publication Number | Publication Date
CN116386026A (en) | 2023-07-04
CN116386026B (en) | 2025-07-15

Family

ID: 86970580

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310340658.6A (Active, granted as CN116386026B) | 2023-03-31 | 2023-03-31 | Training method of point cloud 3D detection model and point cloud detection method

Country Status (1)

Country | Link
CN (1) | CN116386026B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118314566A (en)* | 2024-03-21 | 2024-07-09 | Beijing Baidu Netcom Science and Technology Co Ltd | Point cloud 3D detection model training method, point cloud 3D detection method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113674421A (en)* | 2021-08-25 | 2021-11-19 | Beijing Baidu Netcom Science and Technology Co Ltd | 3D target detection method, model training method, related device and electronic equipment
CN114332845A (en)* | 2020-09-29 | 2022-04-12 | Huawei Technologies Co Ltd | A method and device for 3D target detection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10997786B2 (en)* | 2017-08-07 | 2021-05-04 | Verizon Patent And Licensing Inc. | Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
CN115035359A (en)* | 2021-02-24 | 2022-09-09 | Huawei Technologies Co Ltd | A point cloud data processing method, training data processing method and device


Also Published As

Publication number | Publication date
CN116386026A (en) | 2023-07-04

Similar Documents

Publication | Title
CN114296083B (en) | Radar point cloud data processing method, device, equipment and storage medium
CN114549612A (en) | Model training and image processing method, device, equipment and storage medium
CN111739005A (en) | Image detection method, device, electronic device and storage medium
CN113870334A (en) | Depth detection method, device, equipment and storage medium
CN114312843B (en) | Method and device for determining information
CN113705628A (en) | Method and device for determining pre-training model, electronic equipment and storage medium
CN112862017A (en) | Point cloud data labeling method, device, equipment and medium
CN113920273A (en) | Image processing method, image processing device, electronic equipment and storage medium
CN116386026B (en) | Training method of point cloud 3D detection model and point cloud detection method
CN111833391B (en) | Image depth information estimation method and device
CN114647816B (en) | Lane determination method, device, equipment and storage medium
CN114219907B (en) | Three-dimensional map generation method, device, equipment and storage medium
CN113361379B (en) | Method and device for generating target detection system and detecting target
CN114328785B (en) | Method and device for extracting road information
CN113313049A (en) | Method, device, equipment, storage medium and computer program product for determining hyper-parameters
CN113361719A (en) | Incremental learning method based on image processing model and image processing method
CN116091824B (en) | Fine-tuning method for vehicle classification model, vehicle classification method, device and equipment
CN113239899B (en) | Method for processing image and generating convolution kernel, road side equipment and cloud control platform
CN117261880A (en) | Vehicle control method, device, equipment and storage medium
CN113516013B (en) | Target detection method, target detection device, electronic equipment, road side equipment and cloud control platform
CN114092874A (en) | Target detection model training method, target detection method and related equipment
CN115320642A (en) | Lane line modeling method and device, electronic equipment and automatic driving vehicle
CN113344200A (en) | Method for training separable convolutional network, road side equipment and cloud control platform
CN114066980A (en) | Object detection method and device, electronic equipment and automatic driving vehicle
CN113361575A (en) | Model training method and device and electronic equipment

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
