Disclosure of Invention
Based on the above, it is necessary to address the foregoing technical problems. Embodiments of the present invention provide a scene generation method and apparatus for vehicle automatic driving, and a related control method, which adjust and generate a driving scene applicable to the current conditions in time based on external real-time road condition information, and which take into account the environmental changes encountered during practical automatic driving, so as to solve the technical problem that the safety of current automatic driving is difficult to guarantee.
In one aspect, a method for generating a scene for automatic driving of a vehicle is provided, the method comprising:
planning a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle;
sequentially acquiring the whole-course road type information corresponding to the planned driving route according to the driving sequence of the vehicle, and connecting the whole-course roads in sections according to the road types;
extracting, in combination with real-time road condition information, the corresponding scene from the scene library according to the road type of each section of road, and fitting the extracted scenes to form a first driving scene; and
updating, while the vehicle travels according to the first driving scene, the first driving scene according to real-time road condition information to form a second driving scene.
In one embodiment, before the corresponding scene is extracted from the scene library according to the road type of each section of road, a scene library is constructed. The scene library comprises a plurality of scenes arranged according to road type, and the road types comprise expressways, trunk roads, secondary trunk roads and branch roads. Each road is provided with attributes, a structure, a geometry, road network connections and lane information: the attributes comprise length, height, speed limit and height limit; the structures comprise horizontal roads, ascending roads, descending roads and arch bridges; the geometries comprise straight lines, curves, circular arcs and spirals; the road network connections comprise a road head and a road tail; and the lane information comprises lane type, lane attributes, lane lines, lane starting points and pavement. The types of the scenes comprise a basic packet, a primary packet, an intermediate packet and an advanced packet: the basic packet comprises standard regulation scenes; the primary packet comprises standard regulation scenes and natural driving data reconstruction cases; the intermediate packet comprises standard regulation scenes, natural driving data reconstruction cases and accident scenes; and the advanced packet comprises standard regulation scenes, natural driving data reconstruction cases, accident scenes and automatic driving test failure scenes.
In one embodiment, the lane line comprises a road center line, the lane type comprises a left-turn lane, a right-turn lane, a straight lane and a turning lane, the lane attribute comprises a speed limit condition and a vehicle type limit condition, and the pavement comprises lane flatness.
In one embodiment, fitting to form the first driving scene comprises:
dividing each section of road into lanes according to the lane lines, numbering each divided lane, determining candidate lane numbers for the vehicle to travel according to the lane types, determining, from the candidate lane numbers, the available lanes corresponding to the vehicle type according to the lane attributes, and acquiring the highest traveling speed of the vehicle on each available lane based on the lane attributes and pavement of that lane;
acquiring the congestion degree and the visible range of the vehicle; and
calculating the actual traveling speed of the vehicle on each available lane according to the congestion degree of the vehicle and the highest traveling speed on each available lane, extracting the optimal scene corresponding to each section of road in combination with the visible range, and splicing all the optimal scenes to form the first driving scene.
In one embodiment, the extracting the optimal scene corresponding to each section of road in combination with the visible range includes:
determining the types of selectable scenes according to the size of the visible range;
wherein the maximum value of the visible range is compared with a first threshold, a second threshold and a third threshold which increase in sequence:
selecting the basic packet when the visible range is greater than or equal to the third threshold;
selecting the primary packet when the visible range is greater than or equal to the second threshold and less than the third threshold;
selecting the intermediate packet when the visible range is greater than or equal to the first threshold and less than the second threshold; and
selecting the advanced packet when the visible range is less than the first threshold.
In one embodiment, constructing the scene library comprises:
a data acquisition step of acquiring lane information, nearby vehicle information, weather information, vehicle type information, vehicle speed information and lane occupation information during the journey of the vehicle;
a data fusion step of fusing the acquired data to form a plurality of available scenes;
a scene extraction step of extracting each available scene;
a scene labeling step of labeling the extracted scenes one by one with their applicable situations;
a scene analysis step of verifying the applicable situations and analyzing whether the corresponding scene is reasonable; and
a scene construction step of, when a scene is reasonable, classifying it into at least one of a basic packet, a primary packet, an intermediate packet and an advanced packet according to its complexity.
In one embodiment, updating the first driving scene according to real-time road condition information during travel of the vehicle to form the second driving scene comprises:
modeling the vehicle and the nearby vehicles as modules, the vehicle acquiring the running data of the nearby vehicles in real time; and
judging whether the first driving scene of the vehicle conflicts with the running data of a nearby vehicle; if not, keeping the speed and the driving lane of the vehicle; if so, adjusting the speed and/or the driving lane of the vehicle while updating the first driving scene to form the second driving scene.
In one embodiment, the running data of the nearby vehicle comprises lane change track data of the nearby vehicle, wherein the lane change track data comprises the current lane of the nearby vehicle, the lane after the change, the vehicle speed, the starting point and end point of the lane change, the lateral speed of the vehicle, and the duration of the lane change.
In another aspect, there is provided a scene generating apparatus for automatic driving of a vehicle, the apparatus comprising:
a driving route planning module, configured to plan a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle;
a road segmentation module, configured to sequentially acquire the whole-course road type information corresponding to the planned driving route according to the driving sequence of the vehicle, and to connect the whole-course roads in segments according to road type;
a first driving scene forming module, configured to extract, in combination with real-time road condition information, the corresponding scene from the scene library according to the road type of each section of road, and to fit the extracted scenes to form a first driving scene; and
a second driving scene forming module, configured to update, while the vehicle travels according to the first driving scene, the first driving scene according to real-time road condition information to form a second driving scene.
In still another aspect, a method for controlling automatic driving of a vehicle is provided, which includes the steps of:
acquiring the first driving scene formed by the above scene generation method for vehicle automatic driving;
traveling according to the first driving scene, in combination with real-time road condition information, along the planned driving route of the vehicle, and controlling the speed and/or driving lane of the vehicle; and
updating the first driving scene according to real-time road condition information during travel of the vehicle to form a second driving scene, and controlling the speed and/or driving lane of the vehicle according to the second driving scene.
In the above scene generation method, apparatus and control method for vehicle automatic driving, the first driving scene is updated in time based on external real-time road condition information to form the second driving scene. This update can be realized by finely adjusting the control of the vehicle on the basis of the first driving scene, so that the preset scene library is continuously corrected with real-time road condition information during automatic driving, which improves the fidelity of the automatic driving scenes and thus the safety of automatic driving. Moreover, since the fitted second driving scene is composed of scenes from the scene library, this data-driven analysis and selection is more scientific and reasonable than manual control, further ensuring the safety of automatic driving.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The method for generating a scene for automatic driving of a vehicle provided by the present application can be implemented in the application environment shown in fig. 1, wherein the terminal 102 communicates with the server 104 via a network. The terminal 102 is a vehicle-mounted control system, or a control device connected with the vehicle-mounted control system, that can control automatic driving of the vehicle; the server 104 is a cloud service that issues control instructions to the terminal 102 to realize automatic driving of the vehicle. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device, and the server 104 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a method for generating a scene of automatic driving of a vehicle is provided, and the method is applied to the server 104 in fig. 1 for illustration, and includes the following steps S1-S5.
Step S1, planning a driving route on an electronic map according to preset starting and ending positions of the vehicle.
And S2, sequentially acquiring the whole-course road type information corresponding to the planned driving route according to the driving sequence of the vehicle, and connecting the whole-course roads in sections according to the road types.
And S3, constructing a scene library, wherein the scene library comprises a plurality of scenes set according to road types.
And S4, extracting, in combination with the real-time road condition information, the corresponding scene from the scene library according to the road type of each section of road, and fitting the extracted scenes to form a first driving scene.
And S5, updating the first driving scene according to the real-time road condition information to form a second driving scene in the driving process of the vehicle according to the first driving scene.
The road types comprise an expressway, a trunk road, a secondary trunk road and a branch road. Each road is provided with attributes, a structure, a geometry, road network connections and lane information: the attributes comprise length, height, speed limit and height limit; the structures comprise a horizontal road, an ascending road, a descending road and an arch bridge road; the geometries comprise straight lines, curves, circular arcs and spirals; the road network connections comprise a road head and a road tail; and the lane information comprises lane type, lane attributes, lane lines, lane starting points and pavement. The types of the scenes comprise a basic packet, a primary packet, an intermediate packet and an advanced packet: the basic packet comprises only standard regulation scenes; the primary packet comprises standard regulation scenes and natural driving data reconstruction cases; the intermediate packet comprises standard regulation scenes, natural driving data reconstruction cases and accident scenes; and the advanced packet comprises standard regulation scenes, natural driving data reconstruction cases, accident scenes and automatic driving test failure scenes.
In the method for generating a scene for automatic driving of a vehicle, the lane lines comprise a road center line; the lane types comprise a left-turn lane, a right-turn lane, a straight lane and a turning lane; the lane attributes comprise lane material, speed limit conditions and vehicle type restrictions; and the pavement comprises information such as lane flatness.
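To make the road and lane data model described above concrete, the fields can be sketched as plain data classes. All class and field names below are illustrative assumptions introduced for this sketch, not terms defined by the present application:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lane:
    number: int
    lane_type: str                 # "left_turn", "right_turn", "straight", "u_turn"
    speed_limit_kmh: float         # from the lane attributes
    allowed_vehicle_types: List[str]
    flatness: float                # pavement flatness, 0.0 (rough) to 1.0 (smooth)

@dataclass
class RoadSection:
    road_type: str                 # "expressway", "trunk", "secondary_trunk", "branch"
    length_m: float
    structure: str                 # "horizontal", "ascending", "descending", "arch_bridge"
    geometry: str                  # "straight", "curve", "arc", "spiral"
    lanes: List[Lane] = field(default_factory=list)

# A planned route would then be a list of RoadSection objects joined head to tail.
section = RoadSection("expressway", 1200.0, "horizontal", "straight",
                      [Lane(1, "straight", 100.0, ["car", "truck"], 0.9)])
```

Segmenting the whole-course road (step S2) amounts to emitting one such section per contiguous stretch of a single road type.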
In this embodiment, fitting to form the first driving scene includes:
Step S41, dividing each section of road into lanes according to the lane lines, numbering each divided lane, determining candidate lane numbers for the vehicle to travel according to the lane type, determining, from the candidate lane numbers, the available lanes corresponding to the vehicle type according to the lane attributes, and acquiring the highest traveling speed of the vehicle on each available lane based on the lane attributes and pavement of that lane.
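Step S41 can be sketched as two small helper functions. The dictionary layout, the names, and the rule that pavement flatness scales the posted speed limit are assumptions made for illustration; the application only lists which data are consulted:

```python
def available_lanes(lanes, allowed_types, vehicle_type):
    """Keep lanes whose type permits the route (candidate lanes), then keep
    those whose attributes allow this vehicle type (available lanes)."""
    candidates = [l for l in lanes if l["lane_type"] in allowed_types]
    return [l for l in candidates if vehicle_type in l["vehicle_types"]]

def top_speed_kmh(lane):
    """Assumed rule: pavement flatness scales the lane speed limit down."""
    return lane["speed_limit"] * lane["flatness"]

lanes = [
    {"number": 1, "lane_type": "left_turn", "speed_limit": 60, "vehicle_types": ["car"], "flatness": 1.0},
    {"number": 2, "lane_type": "straight",  "speed_limit": 80, "vehicle_types": ["car", "truck"], "flatness": 0.9},
    {"number": 3, "lane_type": "straight",  "speed_limit": 80, "vehicle_types": ["car"], "flatness": 1.0},
]
# A truck going straight may only use lane 2 here.
usable = available_lanes(lanes, {"straight"}, "truck")
```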
Step S42, combining the real-time road condition information to obtain the vehicle congestion degree and the visible range.
The vehicle congestion degree and the visible range can be acquired in real time through a sensing system arranged on the vehicle.
Step S43, calculating the actual traveling speed of the vehicle on each available lane according to the vehicle congestion degree and the highest traveling speed on each available lane, extracting the optimal scene corresponding to each section of road in combination with the visible range, and splicing all the optimal scenes to form the first driving scene.
In this embodiment, when the degree of congestion of the current driving road section of the vehicle detected by the sensing system is high, the driving speed of the vehicle may be reduced appropriately to ensure safety in the driving process.
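As one possible reading of this speed adjustment, the actual speed can be taken as the per-lane top speed scaled down by a congestion factor. The linear rule below is an assumption; the application only states that the actual speed is calculated from the congestion degree and the highest speed:

```python
def actual_speed_kmh(top_speed_kmh, congestion):
    """Scale the top speed by congestion, where congestion lies in [0, 1]:
    0 = free-flowing traffic, 1 = gridlock (vehicle effectively stopped)."""
    if not 0.0 <= congestion <= 1.0:
        raise ValueError("congestion must be in [0, 1]")
    return top_speed_kmh * (1.0 - congestion)
```

Any monotonically decreasing mapping from congestion to speed would serve; the linear form simply makes the "reduce speed when congested" behavior explicit.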
As shown in fig. 3, in step S43, the extracting the optimal scene corresponding to each section of road in combination with the visible range includes:
step S431, determining the types of the selectable scenes according to the size of the visible range.
Step S432, comparing the maximum value of the visible range with a first threshold, a second threshold and a third threshold which increase in sequence: selecting the basic packet when the visible range is greater than or equal to the third threshold; selecting the primary packet when the visible range is greater than or equal to the second threshold and less than the third threshold; selecting the intermediate packet when the visible range is greater than or equal to the first threshold and less than the second threshold; and selecting the advanced packet when the visible range is less than the first threshold.
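The threshold comparison of step S432 can be sketched as follows. The concrete threshold values are placeholders, since the application does not specify them:

```python
def select_package(visible_range_m, t1=50.0, t2=100.0, t3=200.0):
    """Pick a scene package from the visible range (metres) using three
    strictly increasing thresholds t1 < t2 < t3 (placeholder values)."""
    if not t1 < t2 < t3:
        raise ValueError("thresholds must be strictly increasing")
    if visible_range_m >= t3:
        return "basic"          # clear view: simplest scene package suffices
    if visible_range_m >= t2:
        return "primary"
    if visible_range_m >= t1:
        return "intermediate"
    return "advanced"           # poor visibility: most complex package
```

Note the inversion: the better the visibility, the simpler the package, because a short visible range demands the richer scene set.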
In this embodiment, the visible range in the actual driving environment is considered when selecting the complexity of the automatic driving scene, which further ensures safety during automatic driving.
As shown in fig. 4, in step S3 of the present embodiment, when constructing a scene library, the method includes:
Step S31, a data acquisition step of acquiring lane information, nearby vehicle information, weather information, vehicle type information, vehicle speed information and lane occupation information during the journey of the vehicle, wherein the lane occupation information covers conditions such as faulty vehicles, maintenance areas, traffic jams and parked vehicles.
And step S32, a data fusion step, wherein the acquired data are fused to form a plurality of available scenes.
Step S33, a scene extraction step, extracting each available scene.
Step S34, a scene labeling step, namely labeling the extracted scenes one by one in a usable situation.
Step S35, a scene analysis step, which is to verify the available situation and analyze whether the corresponding scene is reasonable.
Step S36, a scene construction step of, when a scene is reasonable, classifying it into at least one of the basic packet, the primary packet, the intermediate packet and the advanced packet according to its complexity.
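Steps S35 and S36 together can be sketched as a small classification pass over the labeled scenes. The complexity score in [0, 1] and its cut-points are assumed placeholders; the application only says classification follows complexity:

```python
def classify_scene(scene):
    """Step S35/S36: discard unreasonable scenes, then map a validated
    scene to a package by an assumed complexity score in [0, 1]."""
    if not scene["reasonable"]:
        return None
    c = scene["complexity"]
    if c < 0.25:
        return "basic"
    if c < 0.5:
        return "primary"
    if c < 0.75:
        return "intermediate"
    return "advanced"

def build_library(labeled_scenes):
    """Group validated scenes into packages, keyed by package name."""
    library = {}
    for scene in labeled_scenes:
        package = classify_scene(scene)
        if package is not None:
            library.setdefault(package, []).append(scene["name"])
    return library
```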
As shown in fig. 5, in step S5 of the present embodiment, updating the first driving scene according to real-time road condition information during travel of the vehicle to form the second driving scene includes:
Step S51, modeling the vehicle and the nearby vehicles as modules, the vehicle acquiring the running data of the nearby vehicles in real time;
Step S52, judging whether the first driving scene of the vehicle conflicts with the running data of a nearby vehicle; if not, keeping the speed and the driving lane of the vehicle; if so, adjusting the speed and/or the driving lane of the vehicle and updating the first driving scene to form the second driving scene.
In this embodiment, the running data of the nearby vehicle includes lane change track data (also referred to as a cut-in scene), where the lane change track data includes the current lane of the nearby vehicle, the lane after the change, the vehicle speed, the starting point and end point of the lane change, the lateral speed of the vehicle, and the duration of the lane change.
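A minimal sketch of the conflict check in steps S51 and S52, using the cut-in track data described above. The time-gap rule, the 0.8 speed-reduction factor, and all field names are illustrative assumptions rather than details from the application:

```python
def conflicts(ego_lane, ego_speed_mps, gap_m, cut_in):
    """A nearby vehicle's cut-in conflicts with the first driving scene if
    it targets the ego lane and the ego would close the gap to the cut-in
    point before the lane change finishes (assumed rule)."""
    if cut_in["target_lane"] != ego_lane:
        return False
    time_to_gap_s = gap_m / max(ego_speed_mps, 0.1)   # avoid divide-by-zero
    return time_to_gap_s < cut_in["duration_s"]

def update_scene(scene, has_conflict):
    """Step S52: no conflict keeps speed and lane; a conflict reduces the
    speed and marks the scene as the updated second driving scene."""
    if not has_conflict:
        return scene
    return dict(scene, speed_mps=scene["speed_mps"] * 0.8, version=2)
```

A real implementation would also consider the lateral speed and the lane-change start/end points listed above; this sketch keeps only the timing comparison.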
It should be understood that, although the steps in the flowcharts of figs. 2-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 2-5 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; the order in which these sub-steps or stages are performed is likewise not necessarily sequential, and they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, a scene generating device 10 for automatic driving of a vehicle is provided, which comprises a driving route planning module 1, a road segmentation module 2, a first driving scene forming module 3 and a second driving scene forming module 4.
The driving route planning module 1 is used for planning a driving route on an electronic map according to preset starting and ending positions of the vehicle.
The road segmentation module 2 is used for sequentially acquiring the whole-course road type information corresponding to the planned driving route according to the driving sequence of the vehicle, and connecting the whole-course roads in segments according to the road type.
The first driving scene forming module 3 is configured to extract, in combination with real-time road condition information, the corresponding scene from the scene library according to the road type of each section of road, and to fit the extracted scenes to form a first driving scene.
The second driving scene forming module 4 is configured to update, while the vehicle travels according to the first driving scene, the first driving scene according to real-time road condition information to form a second driving scene.
For the specific definition of the scene generating device for vehicle automatic driving, reference may be made to the definition of the scene generation method for vehicle automatic driving hereinabove, which will not be repeated here. Each of the above modules in the scene generating device may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in, or independent of, a processor of a computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor may invoke and execute the operations corresponding to each module.
As shown in fig. 7, in one embodiment, there is provided a control method for automatic driving of a vehicle, including the steps of:
step S11, acquiring the first driving scene formed by the automatic driving scene generation method of the vehicle;
Step S12, traveling according to the first traveling scene in combination with real-time road condition information according to the planned traveling route of the vehicle and controlling the speed and/or traveling lane of the vehicle, and
Step S13, updating the first driving scene according to real-time road condition information during travel of the vehicle to form a second driving scene, and controlling the speed and/or driving lane of the vehicle according to the second driving scene.
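The control flow of steps S11 to S13 can be sketched as a loop that travels under the first driving scene and switches to the second scene when road conditions require it. All names and the speed-capping rule below are assumptions for illustration:

```python
def drive(first_scene, condition_stream, needs_update):
    """Travel under the first driving scene; whenever fresh road-condition
    data requires it, form the second driving scene by capping the speed
    and bumping the scene version (assumed update rule)."""
    scene = dict(first_scene)
    for cond in condition_stream:
        if needs_update(scene, cond):
            scene = dict(scene,
                         speed_kmh=min(scene["speed_kmh"], cond["safe_speed_kmh"]),
                         version=scene["version"] + 1)
    return scene

# Example: the second condition reports a lower safe speed, so the scene
# is updated from version 1 (first scene) to version 2 (second scene).
result = drive({"speed_kmh": 100, "version": 1},
               [{"safe_speed_kmh": 120}, {"safe_speed_kmh": 60}],
               lambda s, c: s["speed_kmh"] > c["safe_speed_kmh"])
```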
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing scenario-generation data for automatic driving of the vehicle. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a scene generation method for automatic driving of a vehicle.
It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of when executing the computer program:
planning a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle;
sequentially acquiring the whole-course road type information corresponding to the planned driving route according to the driving sequence of the vehicle, and connecting the whole-course roads in sections according to the road types;
extracting, in combination with real-time road condition information, the corresponding scene from the scene library according to the road type of each section of road, and fitting the extracted scenes to form a first driving scene; and
updating, while the vehicle travels according to the first driving scene, the first driving scene according to real-time road condition information to form a second driving scene.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
planning a driving route on an electronic map according to a preset starting point and a preset end point of the vehicle;
sequentially acquiring the whole-course road type information corresponding to the planned driving route according to the driving sequence of the vehicle, and connecting the whole-course roads in sections according to the road types;
extracting, in combination with real-time road condition information, the corresponding scene from the scene library according to the road type of each section of road, and fitting the extracted scenes to form a first driving scene; and
updating, while the vehicle travels according to the first driving scene, the first driving scene according to real-time road condition information to form a second driving scene.
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.