FIELD

The present disclosure relates to systems and methods for inference-aware motion planning for automated driving of a vehicle.
BACKGROUND

This section provides background information related to the present disclosure, which is not necessarily prior art.
Motion planning is important for autonomous or self-driving vehicles to provide a collision-free path from a vehicle's current location to its destination. Existing motion planning systems make use of direct information from perception system outputs, such as estimates of location and speed of traffic participants, estimates of road boundaries, etc. This information, combined with a predefined vehicle model, motion primitives, and traffic rules, can be used by the motion planning system to generate an optimal trajectory for the vehicle to follow.
Existing motion planning systems, however, do not account for implications and inferences about the environment beyond the directly perceived information and are therefore subject to improvement.
SUMMARY

This section provides a general summary of the disclosure, and is not a comprehensive disclosure of its full scope or all of its features.
The present teachings include systems and methods for inference-aware motion planning for automated driving of a vehicle.
The present teachings include a system comprising a vehicle having at least one vehicle actuation system and at least one vehicle sensor. The at least one vehicle actuation system includes at least one of a steering system, a braking system, and a throttle system, and the at least one vehicle sensor includes at least one of a vehicle speed sensor, a vehicle acceleration sensor, an image sensor, a Lidar sensor, a radar sensor, a stereo sensor, an ultrasonic sensor, a global positioning system, and an inertial measurement unit. The system also includes a perception system that generates dynamic obstacles data based on information received from the at least one vehicle sensor, the dynamic obstacles data including at least one of a current location, a size, a current estimated trajectory, a current estimated velocity, and a current estimated acceleration/deceleration of an object. The system also includes a planning system having a global route planner module, an inference module, a motion planner module, and a trajectory follower module. The global route planner module receives an inputted destination and generates a route to the inputted destination. The inference module receives the route from the global route planner module and the dynamic obstacles data from the perception system and determines a total cost for each set of motions of a plurality of sets of motions associated with different trajectories for traveling along the received route. The total cost includes at least one associated cost and an inferred cost for the associated set of motions, the inferred cost being based on a probability of the associated set of motions having an increased or decreased cost based on the dynamic obstacles data. The motion planner module receives the total cost for each set of motions of the plurality of sets of motions, selects a particular set of motions from the plurality of sets of motions based on the total cost for each set of motions, and generates a smooth trajectory for the vehicle. The trajectory follower module controls the at least one vehicle actuation system based on the smooth trajectory.
The present teachings also include a method comprising receiving, with a global route planner module of a planning system of a vehicle, an inputted destination. The vehicle has at least one vehicle actuation system and at least one vehicle sensor. The at least one vehicle actuation system includes at least one of a steering system, a braking system, and a throttle system and the at least one vehicle sensor includes at least one of a vehicle speed sensor, a vehicle acceleration sensor, an image sensor, a Lidar sensor, a radar sensor, a stereo sensor, an ultrasonic sensor, a global positioning system, and an inertial measurement unit. The method also includes generating, with the global route planner module, a route to the inputted destination. The method also includes generating, with a perception system, dynamic obstacles data based on information received from the at least one vehicle sensor, the dynamic obstacles data including at least one of a current location, a size, a current estimated trajectory, a current estimated velocity, and a current estimated acceleration/deceleration of an object. The method also includes receiving, with an inference module of the planning system, the route from the global route planner module and the dynamic obstacles data from the perception system. The method also includes determining, with the inference module, a total cost for each set of motions of a plurality of sets of motions associated with different trajectories for traveling along the received route, the total cost including at least one associated cost and an inferred cost, the inferred cost being based on a probability of the set of motions having an increased or decreased cost based on the dynamic obstacles data. The method also includes receiving, with a motion planner module of the planning system, the total cost for each set of motions. The method also includes selecting, with the motion planner module, a particular set of motions from the plurality of sets of motions based on the total cost for each set of motions. The method also includes generating, with the motion planner module, a smooth trajectory for the vehicle based on the particular set of motions. The method also includes controlling, with a trajectory follower module of the planning system, the at least one vehicle actuation system based on the smooth trajectory.
The present teachings include a system comprising a vehicle having a plurality of vehicle actuation systems and a plurality of vehicle sensors. The plurality of vehicle actuation systems includes a steering system, a braking system, and a throttle system and the plurality of vehicle sensors includes a global positioning system and inertial measurement unit and at least one of a vehicle speed sensor, a vehicle acceleration sensor, an image sensor, a Lidar sensor, a radar sensor, a stereo sensor, and an ultrasonic sensor. The system also includes a map database storing map data for a geographic area in which the vehicle is traveling. The system also includes a vehicle information database storing vehicle information indicating at least one of a vehicle model, a vehicle size, a vehicle wheelbase, a vehicle mass, and a vehicle turning radius of the vehicle. The system also includes a motion primitive database storing a listing of motion primitives, each corresponding to a discretized smooth path that can be traversed by the vehicle over a predetermined time interval. The system also includes a traffic rules database storing traffic rules associated with the geographic area in which the vehicle is traveling. The system also includes a communication system configured to communicate with at least one other vehicle and receive information related to at least one of a warning of an accident, a driving hazard, an obstacle, a traffic pattern, a location of the at least one other vehicle, a traffic signal location, and a traffic signal timing. The system also includes a perception system configured to generate dynamic obstacles data, static obstacles data, and road geometry data based on information received from the plurality of vehicle sensors, the dynamic obstacles data including at least one of a current location, a size, a current estimated trajectory, a current estimated velocity, and a current estimated acceleration/deceleration of an object, the static obstacles data including information about static obstacles, and the road geometry data including information about a road that the vehicle is traveling on. The system also includes a planning system having a global route planner module, an inference module, a motion planner module, and a trajectory follower module. The global route planner module receives an inputted destination and generates a route to the inputted destination based on the map data from the map database and based on traffic information from the global positioning system and inertial measurement unit. The perception system generates localization/inertial data corresponding to a current location and orientation of the vehicle based on information received from the plurality of vehicle sensors. The inference module receives the route from the global route planner module, the dynamic obstacles data from the perception system, and the information from the communication system, and determines a total cost for each set of motions of a plurality of sets of motions associated with different trajectories for traveling along the route based on at least one associated cost and an inferred cost for each set of motions, the inferred cost being based on a probability of the associated set of motions having an increased or decreased cost based on the dynamic obstacles data and based on the information from the communication system. 
The motion planner module (i) receives the total cost for each set of motions of the plurality of sets of motions, the map data from the map database, the vehicle information from the vehicle information database, the listing of motion primitives from the motion primitive database, the traffic rules from the traffic rules database, the information from the communication system, the dynamic obstacles data from the perception system, the static obstacles data from the perception system, the road geometry data from the perception system, and the localization/inertial data from the perception system, (ii) selects a particular set of motions from the plurality of sets of motions based on the total cost for each set of motions, and (iii) generates a smooth trajectory for the vehicle based on the particular set of motions. The trajectory follower module controls the plurality of vehicle actuation systems based on the smooth trajectory.
Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
DRAWINGS

The drawings described herein are for illustrative purposes only of select embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
FIG. 1 illustrates a subject vehicle with a system for inference-aware motion planning according to the present teachings.
FIG. 2 illustrates a block diagram of the system for inference-aware motion planning according to the present teachings.
FIG. 3 illustrates a traffic scenario with a vehicle using the system for inference-aware motion planning according to the present teachings.
FIG. 4 illustrates another traffic scenario with a vehicle using the system for inference-aware motion planning according to the present teachings.
FIG. 5 illustrates a parking scenario with a vehicle using the system for inference-aware motion planning according to the present teachings.
FIG. 6 illustrates a flow diagram for a method of inference-aware motion planning according to the present teachings.
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.
DETAILED DESCRIPTION

Example embodiments will now be described more fully with reference to the accompanying drawings.
To address the above issues with conventional approaches, the present teachings include an inference-aware motion planning system that receives dynamic information about the surrounding environment of an autonomous or self-driving vehicle, including dynamically changing information about moving and/or moveable objects, such as other vehicles, pedestrians, obstacles, etc. The system adjusts the total cost associated with different possible sets of motions associated with different trajectories for traveling along a route based on calculated probabilities that the dynamic information indicates an increased or decreased cost for the individual sets of motions associated with the different trajectories. For example, a route determined by a global route planner may include a number of different ways to arrive at a destination, such as a road with multiple lanes. As discussed in further detail below, the associated cost for the different sets of motions associated with the different trajectories (such as, for example, traveling in the current lane, changing lanes to the left, changing lanes to the right, etc.) may include a value corresponding to the cost-to-goal for the associated set of motions, a value corresponding to the sum of all action-costs for the set of motions, a value corresponding to a collision-cost for the associated set of motions, etc. An inference module may account for the implications of the dynamic information by adjusting the total cost of a particular set of motions associated with a particular trajectory based on the dynamic information and, more specifically, based on the probability that the dynamic information indicates an increased or decreased cost for the associated set of motions due to, for example, an object lying in one of the roadway lanes, traffic being faster or slower in one of the roadway lanes, etc.
For example, as discussed in further detail below, if the dynamic information indicates that a vehicle traveling in the same lane ahead of the self-driving vehicle changes lanes to the left, this dynamic information (i.e., that the preceding vehicle changed lanes) could imply that there is an obstacle (such as a tire, debris, a disabled vehicle, an animal, etc.) located in the self-driving vehicle's current lane. In such a case, the inference module may increase the cost associated with staying in the current lane of traffic. For further example, if the dynamic information indicates that a second vehicle traveling in the same lane ahead of the self-driving vehicle also changes lanes to the left, the inference module may again increase the cost associated with the current lane of traffic based on the increased probability that there is an obstacle located in the self-driving vehicle's current lane. Based on the increased cost associated with staying in the current lane of traffic, the self-driving vehicle may then change lanes due to the set of motions for changing lanes having a lower associated cost than the set of motions for staying in the current lane of traffic. For example, similar to the preceding vehicles, the self-driving vehicle may change lanes to the left and may ultimately avoid an obstacle located in its current lane despite not having directly observed or sensed the obstacle with a perception system of the self-driving vehicle. In this way, the inference-aware motion planning system may result in more human-like driving behavior of the self-driving vehicle.
With reference to FIG. 1, a self-driving vehicle 10 is illustrated. Although the self-driving vehicle 10 is illustrated as an automobile in FIG. 1, the present teachings apply to any other suitable vehicle, such as a sport utility vehicle (SUV), a mass transit vehicle (such as a bus), or a military vehicle, as examples. The self-driving vehicle 10 includes a steering system 12 for steering the self-driving vehicle 10, a throttle system 14 for accelerating and propelling the self-driving vehicle 10, and a braking system 16 for decelerating and stopping the self-driving vehicle 10. With additional reference to FIG. 2, the steering system 12, the throttle system 14, and the braking system 16 are grouped as vehicle actuation systems 66. As discussed in further detail below, the vehicle actuation systems 66, including the steering system 12, the throttle system 14, and the braking system 16, can be operated by a trajectory follower module 62 of a planning system 20 to drive the self-driving vehicle 10. As discussed in further detail below, the planning system 20 determines an optimal smooth trajectory for the self-driving vehicle 10 based on input from a perception system 50.
With reference to FIGS. 1 and 2, the self-driving vehicle 10 includes vehicle sensors 22, which can include a global positioning system (GPS) and inertial measurement unit (collectively, GPS/IMU) that determines location and inertial/orientation data of the self-driving vehicle 10. The vehicle sensors 22 can also include a vehicle speed sensor that generates data indicating a current speed of the self-driving vehicle 10 and a vehicle acceleration sensor that generates data indicating a current rate of acceleration or deceleration of the self-driving vehicle 10.
The vehicle sensors 22 of the self-driving vehicle 10 may also include a number of environmental sensors to sense information about the surroundings of the self-driving vehicle 10. For example, the vehicle sensors 22 may include an image sensor, such as a camera, mounted to a roof of the self-driving vehicle 10. The self-driving vehicle 10 may be equipped with additional image sensors at other locations on or around the self-driving vehicle 10. Additionally, the self-driving vehicle 10 may be equipped with one or more front sensors located near a front bumper of the self-driving vehicle 10, one or more side sensors located on side mirrors or side doors of the self-driving vehicle 10, and/or one or more rear sensors located on a rear bumper of the self-driving vehicle 10. The front sensors, side sensors, and rear sensors may be, for example, image sensors (i.e., cameras), Lidar sensors, stereo sensors, radar sensors, ultrasonic sensors, or other sensors for detecting information about the surroundings of the self-driving vehicle 10, including, for example, other vehicles, lane lines, guard rails, objects in the roadway, buildings, pedestrians, etc. Additional environmental sensors may be located on or around the self-driving vehicle 10. The vehicle sensors 22 may also include sensors to determine the light level of the environment (i.e., whether it is daytime or nighttime), to determine or receive weather data (i.e., whether it is a sunny day, raining, cloudy, etc.), to determine the current temperature, to determine the road surface status (i.e., dry, wet, frozen, number of lanes, types of lane marks, concrete surface, asphalt surface, etc.), to determine traffic conditions for the current path or route of the self-driving vehicle 10, and/or other applicable environmental information. The perception system 50 receives data about the surroundings of the self-driving vehicle 10 from the vehicle sensors 22 and uses the received data to generate environmental data used by the planning system 20 to determine a trajectory for driving the self-driving vehicle 10, as discussed in further detail below. Additionally, the perception system 50 can determine localization data for the self-driving vehicle 10 based on image data collected from cameras, Lidar sensors, stereo sensors, radar sensors, ultrasonic sensors, or other sensors of the self-driving vehicle 10. For example, the GPS/IMU may determine a location of the self-driving vehicle 10 but may not provide sufficient specificity to determine the exact orientation, position, and specific location of the self-driving vehicle 10 at that GPS location. The perception system 50 can determine localization data, including an exact orientation, position, and specific location, for the self-driving vehicle 10 based on the GPS/IMU data from the GPS/IMU and based on environmental data received from the other vehicle sensors 22, such as the cameras, Lidar sensors, stereo sensors, radar sensors, ultrasonic sensors, other sensors, etc.
The self-driving vehicle 10 may optionally include a vehicle-to-vehicle and vehicle-to-infrastructure (collectively referred to as V2X) system 24 capable of dedicated short range communication with other vehicles and with infrastructure locations (such as a cloud computing device, a building, traffic signals, etc.) that are also equipped with V2X communication systems. For example, the V2X system 24 may be configured to transmit and receive signals representing, for example, early warnings of accidents, driving hazards, obstacles ahead, traffic patterns, vehicle locations, traffic signal locations and timing, etc. to/from remote vehicles that are also equipped with V2X systems and/or to/from infrastructure communication locations equipped with a V2X system.
The self-driving vehicle 10 also includes a map database 26, a vehicle information database 30, a motion primitive database 32, and a traffic rules database 34. The map database 26 includes map data for a geographic area in which the self-driving vehicle 10 is traveling. The vehicle information database 30 includes data associated with the self-driving vehicle 10, such as the vehicle model, size, wheelbase, mass, turning radius, etc. The motion primitive database 32 includes a listing of possible motion primitives, which are discrete smooth paths that can be traversed by the self-driving vehicle 10 over a short discrete time interval. The traffic rules database 34 includes traffic rules data indicating a listing of traffic rules (e.g., rules with respect to speed limits, rules with respect to the turns and/or maneuvers that are allowed or prohibited, etc.) associated with the area in which the self-driving vehicle 10 is traveling.
With reference to FIG. 2, certain components of the self-driving vehicle 10 are shown, along with additional details of the planning system 20. In FIG. 2, for reference, the steering system 12, the throttle system 14, and the braking system 16 are grouped as vehicle actuation systems 66. For further reference, similar to FIG. 1, the various sensors discussed above are shown as vehicle sensors 22. As shown in FIG. 2, the planning system 20 is in communication with and receives data from the map database 26, the vehicle information database 30, the motion primitive database 32, and the traffic rules database 34. If the self-driving vehicle 10 includes a V2X system 24, the planning system 20 is also in communication with the V2X system 24.
As shown in FIG. 2, the planning system 20 includes a global route planner module 42, an inference module 46, a motion planner module 48, and a trajectory follower module 62. As discussed in further detail below, the planning system 20 receives dynamic obstacles data 56, static obstacles data 58, road geometry data 60, and localization/inertial data 54 from the perception system 50.
The global route planner module 42, for example, receives destination input 40 indicating a destination location for the self-driving vehicle 10. The destination input 40 can be received via voice or text input from an operator of the self-driving vehicle 10 or can be received remotely from a remote computing device. The global route planner module 42 can receive traffic information data 44 from the vehicle sensors, such as from a GPS or other communication system. The traffic information data 44 may include data indicating traffic conditions in the geographic area of the self-driving vehicle 10 and indicating traffic conditions along one or more routes from the current location of the self-driving vehicle 10 to the inputted destination. Additionally or alternatively, if the self-driving vehicle 10 includes a V2X system 24, the global route planner module 42 may receive the traffic information data 44 from the V2X system 24 based on communication with other vehicles, a cloud computing device, and/or infrastructure locations that are also equipped with a V2X system. The global route planner module 42 also receives, from the map database 26, map data about the surroundings of the self-driving vehicle 10 and about various possible routes from the current location of the self-driving vehicle 10 to the inputted destination. Additionally or alternatively, the global route planner module 42 can receive the map data from a remote computing device in communication with the planning system 20.
The global route planner module 42 can receive current localization/inertial data 54 of the self-driving vehicle 10 from the perception system 50. The localization/inertial data 54 may include, for example, a current position of the self-driving vehicle 10 within the surrounding environment, as well as a direction, a velocity, and/or an acceleration or deceleration in the current direction. For example, the perception system 50 can determine or estimate a current state of the self-driving vehicle 10, including localization/inertial data 54, based on data from the vehicle actuation systems 66 and/or the vehicle sensors 22 of the self-driving vehicle 10. For example, the perception system 50 can determine and/or receive location and inertial data indicated by the GPS/IMU, the current steering angle indicated by the steering system 12, the current position of the throttle indicated by the throttle system 14, the current position of the brake indicated by the braking system 16, the current speed of the vehicle indicated by the vehicle speed sensor, and/or the current acceleration/deceleration of the vehicle indicated by the vehicle acceleration sensor. Based on the data received from the vehicle sensors 22 and the vehicle actuation systems 66, the perception system 50 can generate and output the localization/inertial data 54 to the planning system 20, including the global route planner module 42 of the planning system 20.
The global route planner module 42 can determine a route to the inputted destination based on the current location of the self-driving vehicle 10 as indicated by the localization/inertial data 54, based on the map data from the map database 26, and based on the traffic information data 44. The global route planner module 42, for example, can use conventional route planning to determine and output the route to the inputted destination, including a series of route segments for the self-driving vehicle 10 to follow and any potential alternative route segments. For example, the global route planner module 42 may determine a shortest-distance route or a shortest-travel-time route from the current location to the inputted destination and can output one or more series of road segments.
In a traditional motion planning system, the route from the global route planner module 42 is outputted to a traditional motion planner module. The traditional motion planner module then outputs a feasible trajectory for the vehicle to follow, based on the route from the global route planner module and based on information about the vehicle and directly sensed data about the vehicle's surroundings.
In the inference-aware motion planning system of the present teachings, however, the route from the global route planner module 42 is instead inputted to an inference module 46 that receives dynamic information about the surroundings of the vehicle and can selectively adjust one or more costs for the different sets of possible motions associated with the different trajectories for traveling along the route based on that dynamic information. For example, the inference module 46 can receive dynamic obstacles data 56 from the perception system 50, which receives data from the vehicle sensors 22, including, for example, the vehicle speed sensor, the vehicle acceleration sensor, the image sensor(s), the front sensor(s), the side sensor(s), and/or the rear sensor(s), etc. Based on the data from the vehicle sensors 22, the perception system 50 determines information about the surroundings of the self-driving vehicle 10. As shown in FIG. 2, based on data from the vehicle sensors 22, the perception system 50 can output dynamic obstacles data 56 corresponding to information about dynamic (i.e., moving and/or movable) objects in the surrounding area of the self-driving vehicle 10. The dynamic obstacles data 56, for example, may include data indicating a current location, a size, a current estimated trajectory, a current estimated velocity, and/or a current estimated acceleration/deceleration of the dynamic object, etc. For further example, the dynamic obstacles data may include data about surrounding vehicles, pedestrians, animals, moveable objects in or around the roadway, etc. The perception system 50 can also determine static obstacles data 58, including information, such as location and size, of non-moveable objects, such as guardrails, dividers, barriers, medians, light poles, buildings, etc. The perception system 50 can also determine road geometry data 60 indicating information about the roadway, such as the location and type of lane lines, shoulders, etc. of the roadway.
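By way of illustration only, one record of the dynamic obstacles data 56 might be represented as in the following sketch; the class name, field names, and units below are assumptions for illustration and are not part of the present teachings:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DynamicObstacle:
    """One tracked object in the dynamic obstacles data (hypothetical fields)."""
    position: Tuple[float, float]   # current location (x, y) in meters, vehicle frame
    size: Tuple[float, float]       # bounding-box length and width in meters
    velocity: float                 # current estimated speed in m/s
    acceleration: float             # estimated acceleration (+) or deceleration (-) in m/s^2
    predicted_path: Optional[List[Tuple[float, float]]] = None  # estimated trajectory points
```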
If a V2X system 24 is present in the self-driving vehicle 10, the inference module 46 can also receive information from the V2X system 24 about dynamic objects and/or conditions in the surrounding area of the self-driving vehicle 10. For example, as discussed above, the V2X system 24 can communicate data about early warnings of accidents, driving hazards, obstacles ahead, traffic patterns, vehicle locations, traffic signal locations and timing, etc. to/from remote vehicles that are also equipped with V2X systems, a cloud computing device, and/or to/from infrastructure communication locations equipped with a V2X system.
Based on the dynamic obstacles data 56 from the perception system 50 and data from the V2X system 24 (if present), the inference module 46 can selectively adjust one or more costs for different sets of motions (such as, for example, traveling in the current lane, changing lanes to the left, changing lanes to the right, etc.) for different trajectories along the route received from the global route planner module 42. For example, a particular set of motions associated with a particular trajectory for traveling along the route received from the global route planner module 42 may have: (i) an associated cost-to-goal; (ii) one or more associated motion-costs; and/or (iii) an associated collision-cost.
The associated cost-to-goal corresponds to the designated cost from a current or local location to the destination location. For example, for any two sets of motions associated with two trajectories for traveling along the route, the set of motions that has a longer distance to the destination location may have a higher associated cost-to-goal. Additionally or alternatively, for any two particular sets of motions associated with two trajectories for traveling along the route, the set of motions that has a longer estimated travel time to the destination location may have a higher associated cost-to-goal. The cost-to-goal metric can be selected to place relatively more or less emphasis or weight on distance or on travel time.
The one or more associated motion-costs include the cost of executing motion commands associated with the particular set of motions. For example, frequently changing lanes may require multiple motions for the vehicle and should be avoided. As such, a set of motions that includes changing lanes may have a higher motion-cost than a comparable set of motions that does not require changing lanes, with all other things being equal. Likewise, frequently applying the brakes may require multiple braking motions and should be avoided. As such, a set of motions that includes increased braking will have a higher motion-cost than a comparable set of motions that does not require increased braking. Additionally, traveling at a speed that is greater than the associated speed limit for the particular road should not occur and, as a result, any set of motions that includes traveling at a speed that is greater than the associated speed limit should have a relatively higher associated motion-cost.
The collision-cost is a cost element added as a result of a probability of an anticipated collision with other vehicles, pedestrians, and/or on-road obstacles, such as broken tires, sandbags, etc. A collision with a pedestrian should be avoided by all means and hence will have the highest associated collision-cost. A collision with another vehicle may have the next highest associated collision-cost. A collision with an on-road obstacle may have the next highest associated collision-cost. The collision-costs may be set such that a set of motions that avoids any collision will be chosen over a set of motions that requires a collision. In the event two sets of motions are presented that each include a collision (i.e., a collision is unavoidable), the collision-costs may be set such that a set of motions that includes a collision with an on-road obstacle, such as a sandbag, may be chosen over a set of motions that includes a collision with a pedestrian or a collision with another vehicle.
The inference module 46 calculates a total cost for each of the potential sets of motions associated with each of the potential trajectories for traveling along the received route based on the following formula:
total cost = cost-to-goal + sum(motion-cost(s)) + collision-cost + inferred-cost,  (1)
where the cost-to-goal, the motion-cost(s), and the collision-cost are described above, and the inferred-cost is determined by the inference module 46, as described below. The term "sum(motion-cost(s))" refers to the summation of all motion-costs associated with the particular set of motions, as described above.
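A minimal sketch of equation (1) follows. The numeric collision-cost constants below are illustrative assumptions; the only structure taken from the present disclosure is the sum of the four terms and the ordering of pedestrian above vehicle above on-road obstacle for collision-costs:

```python
# Illustrative constants only; the disclosure specifies the relative ordering
# of collision-costs (pedestrian > vehicle > on-road obstacle) but no values.
COLLISION_COST = {"none": 0.0, "obstacle": 1e3, "vehicle": 1e5, "pedestrian": 1e7}

def total_cost(cost_to_goal: float, motion_costs: list, collision: str,
               inferred_cost: float) -> float:
    """Equation (1): total cost for one candidate set of motions."""
    return cost_to_goal + sum(motion_costs) + COLLISION_COST[collision] + inferred_cost
```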
The inference module 46 can selectively increase or decrease the inferred-cost term based on the dynamic obstacles data 56 from the perception system 50 and data from the V2X system 24 (if present). In this way, the inference module 46 can selectively increase or decrease the total cost for the set of motions for traveling along the route received from the global route planner module 42 based on the inferred or implied information about the environment of the self-driving vehicle 10. For example, the inference module 46 may determine the inferred-cost term based on a probability (P) of the cost multiplied by a predetermined constant term. For example, the inferred-cost term may be initialized based on a predetermined probability distribution, such as:
P(inferred-cost is high value) = 0.05, or  (2)
P(inferred-cost is low value) = 0.95,  (3)
with the high value being an inferred cost associated with an obstacle, such as a broken tire or dead animal, lying in the path of the self-driving vehicle and the low value being associated with the path being clear of such obstacles. In other words, at initialization and without further information, the inference module 46 may assume that the probability of a broken tire or dead animal blocking the path of the self-driving vehicle 10 is relatively low.
During operation, the inference module 46 may then receive information about the dynamic environment of the self-driving vehicle 10. For example, as discussed above, the inference module 46 may receive dynamic obstacles data 56 from the perception system 50 and data from the V2X system 24 (if present). Once the behavior of other dynamic obstacles in the environment is observed, the inference module 46 may then update the probabilities using Bayes' rule, based on predetermined likelihood functions. For example, the inference module 46 may observe information about preceding vehicles changing lanes in front of the self-driving vehicle 10, and may adjust the probabilities based on the following:
P(preceding vehicle changes lane | inferred-cost is high) = 0.9,  (4)
P(preceding vehicle remains in current lane | inferred-cost is high) = 0.1,  (5)
P(preceding vehicle changes lane | inferred-cost is low) = 0.3, or  (6)
P(preceding vehicle remains in current lane | inferred-cost is low) = 0.7.  (7)
Based on the above probabilities, the inference module 46 may then calculate a posterior probability of:
posterior P(inferred-cost is high | preceding vehicle changes lane).  (8)
In other words, the inference module 46 may calculate the posterior probability of the inferred-cost being high based on the observed behavior of dynamic objects, such as a preceding vehicle changing lanes, in the environment of the self-driving vehicle 10. While the above example is given for the observation of whether a preceding vehicle changes lanes, the inference module 46 can make similar calculations based on other observed behavior of dynamic objects, such as other traffic participants, other vehicles, pedestrians, cyclists, animals, etc. Once the posterior probability of the inferred-cost being high is calculated, the inference module 46 calculates the inferred-cost accordingly based on the calculated posterior probability and the predetermined constant.
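As a concrete sketch of this update, the following code applies Bayes' rule with the prior from equations (2)-(3) and the likelihoods from equations (4)-(7); the function and constant names are illustrative, not taken from the disclosure. Two successive lane-change observations, as in the FIG. 3 scenario below, raise P(inferred-cost is high) from 0.05 to roughly 0.14 and then to roughly 0.32:

```python
P_CHANGE_GIVEN_HIGH = 0.9  # equation (4)
P_CHANGE_GIVEN_LOW = 0.3   # equation (6)

def update_p_high(p_high: float, preceding_changed_lane: bool) -> float:
    """One Bayes' rule update of P(inferred-cost is high) from one observation."""
    if preceding_changed_lane:
        like_high, like_low = P_CHANGE_GIVEN_HIGH, P_CHANGE_GIVEN_LOW
    else:  # "remains in current lane", equations (5) and (7)
        like_high, like_low = 1 - P_CHANGE_GIVEN_HIGH, 1 - P_CHANGE_GIVEN_LOW
    evidence = like_high * p_high + like_low * (1.0 - p_high)
    return like_high * p_high / evidence  # posterior, equation (8)

def inferred_cost(p_high: float, constant: float) -> float:
    # Inferred-cost = posterior probability multiplied by a predetermined constant.
    return p_high * constant

p = 0.05                    # prior, equation (2)
p = update_p_high(p, True)  # first preceding vehicle changes lanes -> ~0.136
p = update_p_high(p, True)  # second preceding vehicle changes lanes -> ~0.321
```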
Once the inference module 46 calculates the inferred-costs and adjusted total costs for the different sets of motions associated with the different potential trajectories for traveling along the route received from the global route planner module 42, the inference module 46 outputs the adjusted total costs for each of the different sets of motions to the motion planner module 48. The motion planner module 48 receives the route and the adjusted total costs for the different sets of motions associated with the different potential trajectories for traveling along the route from the inference module 46. The motion planner module 48 also receives vehicle information from the vehicle information database 30, motion primitive information from the motion primitive database 32, and traffic rules from the traffic rules database 34. The motion planner module 48 also receives dynamic obstacles data 56, static obstacles data 58, and road geometry data 60 from the perception system 50. The motion planner module 48 also receives localization/inertial data 54 from the perception system 50. Based on the vehicle information, motion primitive information, traffic rules, dynamic obstacles data 56, static obstacles data 58, road geometry data 60, and localization/inertial data 54, the motion planner module 48 selects a set of motions for the self-driving vehicle to execute for a particular trajectory for traveling along the received route. For example, the potential sets of motions for the self-driving vehicle may include a set of motions for continuing forward in the current lane of travel and a set of motions for changing lanes to the left, with each set of motions having an associated adjusted total cost. The motion planner module 48 may select a set of motions, based on the received information and the associated adjusted total costs, and generate a smooth trajectory for the self-driving vehicle 10 to follow over a predetermined time period. For example, the predetermined time period may be 30 seconds, although other predetermined time periods may be used. The smooth trajectory, for example, may consist of a sequence of one or more motion primitives selected for the self-driving vehicle 10 from the motion primitive database 32. The motion planner module 48 then outputs the determined smooth trajectory to the trajectory follower module 62.
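The disclosure does not mandate a particular selection rule; one minimal reading, in which the feasible set of motions with the lowest adjusted total cost is selected, could be sketched as follows (all names are illustrative):

```python
def select_motion_set(adjusted_total_costs: dict) -> str:
    """Return the label of the candidate set of motions with the lowest
    adjusted total cost, e.g. from {"keep_lane": 152.0, "change_left": 118.5}."""
    return min(adjusted_total_costs, key=adjusted_total_costs.get)

# Example: after observed lane changes have inflated the keep-lane cost,
# the lower-cost left lane change is selected.
chosen = select_motion_set({"keep_lane": 152.0, "change_left": 118.5})
```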
The trajectory follower module 62 receives the smooth trajectory from the motion planner module 48 and controls the vehicle actuation systems 66 so that the self-driving vehicle 10 follows the determined smooth trajectory. In other words, the trajectory follower module 62 appropriately controls the steering system 12, the throttle system 14, and the braking system 16 so that the self-driving vehicle 10 follows the determined smooth trajectory outputted by the motion planner module 48.
An example operation of the inference-aware motion planning system of the present teachings is described with reference to FIG. 3. In FIG. 3, the self-driving vehicle 10 is traveling in a right lane 74 of a road 70 that also includes a left lane 72. A first secondary vehicle 76 and a second secondary vehicle 78 are also traveling in the right lane 74 of the road 70. An obstacle 80, such as a tire, is located in the right lane 74 ahead of the first secondary vehicle 76. The self-driving vehicle 10 may be traveling on a route that includes a first path corresponding to the right lane 74 of the road 70 and a second path that corresponds to the left lane 72 of the road 70. In this example, both the left lane 72 and the right lane 74 may lead to the same ultimate destination location. As the first secondary vehicle 76 approaches the obstacle 80, the first secondary vehicle 76 changes lanes to the left to avoid the obstacle 80, as shown by arrow 82. At this point, the self-driving vehicle 10 may not directly observe the obstacle 80 with its vehicle sensors 22. The self-driving vehicle 10 may, however, observe the first secondary vehicle 76 change lanes to the left, as indicated by arrow 82. At this point, the planning system 20, and specifically the inference module 46, may increase the inferred-cost and, consequently, the total cost associated with traveling in the right lane 74 of the road 70. Based on the increased cost of traveling in the right lane 74, the planning system 20 of the self-driving vehicle 10 may decide to change lanes to the left lane 72, as indicated by arrow 86. Alternatively, based on the other cost values, the self-driving vehicle 10 may continue to travel in the right lane 74 with the increased associated total cost. The self-driving vehicle 10 may then observe the second secondary vehicle 78 change lanes to the left lane 72, as indicated by arrow 84, but may not yet directly observe the obstacle 80 in the right lane 74. At this point, the planning system 20, and specifically the inference module 46, may again increase the inferred-cost and the associated total cost of traveling in the right lane 74 of the road 70. With both preceding vehicles, i.e., the first secondary vehicle 76 and the second secondary vehicle 78, having changed lanes to the left lane 72, the self-driving vehicle 10, and specifically the planning system 20 and the inference module 46, may determine that there is an increased probability of an obstacle 80 being located in the right lane 74 even though the self-driving vehicle 10 has not yet directly observed the obstacle 80. At this point, based on the additional increased cost of traveling in the right lane 74, the planning system 20 of the self-driving vehicle 10 may then decide to change lanes to the left lane 72, as indicated by arrow 86.
Another example operation of the inference-aware motion planning system of the present teachings is described with reference to FIG. 4. In FIG. 4, the self-driving vehicle 10 is traveling in a middle lane 104 of a road 100 that also includes a left lane 102 and a right lane 106. A first secondary vehicle 110, a second secondary vehicle 112, and a third secondary vehicle 114 are traveling in the left lane 102 of the road 100. A fourth secondary vehicle 116 is traveling in the middle lane 104 of the road 100. A fifth secondary vehicle 118, a sixth secondary vehicle 120, and a seventh secondary vehicle 122 are traveling in the right lane 106 of the road 100. The self-driving vehicle 10 may be traveling on a route that includes a first path corresponding to the middle lane 104, a second path corresponding to the left lane 102, and a third path corresponding to the right lane 106. In this example, the left lane 102, the middle lane 104, and the right lane 106 may lead to the same ultimate destination location. Initially, the total costs for traveling in the left lane 102, the middle lane 104, and the right lane 106 may be approximately equal. The vehicle sensors 22 of the self-driving vehicle 10 may observe, however, that the secondary vehicles 110, 112, 114 traveling in the left lane 102 are moving slower than the secondary vehicle 116 and the self-driving vehicle 10 traveling in the middle lane 104. The vehicle sensors 22 of the self-driving vehicle 10 may further observe that the secondary vehicles 118, 120, 122 in the right lane 106 are traveling faster than the secondary vehicle 116 and the self-driving vehicle 10 traveling in the middle lane 104, and faster still than the secondary vehicles 110, 112, 114 traveling in the left lane 102. At this point, the planning system 20, and specifically the inference module 46, may increase the inferred-cost and, consequently, the total cost associated with traveling in the left lane 102 of the road 100. Similarly, the planning system 20, and specifically the inference module 46, may decrease the inferred-cost and, consequently, the total cost associated with traveling in the right lane 106 of the road 100. Based on the decreased cost associated with traveling in the right lane 106, the planning system 20 of the self-driving vehicle 10 may decide to change lanes to the right lane 106, once it is safe to change lanes, to travel in a faster lane.
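The disclosure does not give a formula for how observed lane speeds map to inferred-costs; one plausible heuristic, shown purely as an assumption-laden sketch (the function name, scaling constant, and speeds are all hypothetical), decreases the inferred-cost of lanes moving faster than the vehicle's own lane and increases it for slower lanes:

```python
def lane_inferred_cost(avg_lane_speed: float,
                       own_lane_speed: float,
                       constant: float = 10.0) -> float:
    """Hypothetical heuristic: lanes faster than the vehicle's own lane get a
    lower (possibly negative) inferred-cost; slower lanes get a higher one.
    The scaling constant is an arbitrary illustrative value."""
    return constant * (own_lane_speed - avg_lane_speed)

# FIG. 4 scenario with assumed speeds (m/s): left 20, middle 25, right 30.
left_cost = lane_inferred_cost(20.0, 25.0)   # +50: left lane penalized
right_cost = lane_inferred_cost(30.0, 25.0)  # -50: right lane favored
```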
Another example operation of the inference-aware motion planning system of the present teachings is described with reference to FIG. 5. In FIG. 5, the self-driving vehicle 10 is traveling in a parking lot 150 that includes a number of already parked vehicles. In particular, the vehicle sensors 22 of the self-driving vehicle 10 may be able to directly observe the already parked vehicles 152 in the parking lot 150. The parking lot 150 may also include a number of parking spots 154 that the vehicle sensors 22 and the self-driving vehicle 10 cannot directly observe. In other words, the vehicle sensors 22 and the self-driving vehicle 10 cannot directly observe whether vehicles are already parked in the parking spots 154, which may or may not be occupied with parked vehicles. As shown in FIG. 5, the parking lot 150 includes a first lane 156 and a second lane 158. Initially, the costs associated with traveling in the first lane 156 and traveling in the second lane 158 may be approximately equal. The vehicle sensors 22 of the self-driving vehicle 10 may observe a secondary vehicle 160 turning and traveling down the second lane 158. Given that an available parking spot in the second lane 158 may be taken by the secondary vehicle 160, the planning system 20, and specifically the inference module 46, may increase the inferred-cost and, consequently, the total cost associated with the second lane 158. Additionally or alternatively, the planning system 20, and specifically the inference module 46, may decrease the inferred-cost and, consequently, the total cost associated with the first lane 156. As such, as between the first lane 156 and the second lane 158, the planning system 20 of the self-driving vehicle 10 may choose to travel to the first lane 156 to look for an available parking spot.
With reference to FIG. 6, a flow diagram of an algorithm 600 for inference-aware motion planning is shown. The algorithm 600, for example, can be performed by the inference module 46 of the planning system 20 to adjust the total costs associated with different sets of motions associated with different trajectories for traveling along the route outputted by the global route planner module 42 of the planning system 20. The algorithm 600 starts at 602. At 604, the inference module 46 receives the route from the global route planner module 42 and receives the initial associated costs for the different sets of motions associated with different trajectories for traveling along the route. The initial associated costs include, for example, the cost-to-goal, the motion-cost(s), and the collision-cost values, as described in detail above. At 606, the inference module 46 receives dynamic environmental information about the surroundings of the self-driving vehicle 10. For example, the inference module 46 can receive dynamic obstacles data 56 from the perception system 50 based on data received from the vehicle sensors 22, as described in detail above. In addition, the inference module 46 may receive data from a V2X system 24, if present, as described in detail above.
At 608, the inference module 46 computes the posterior probability for the inferred-costs of the sets of motions associated with the different trajectories for traveling along the route received from the global route planner module 42, based on the dynamic environmental information, as described in detail above. At 610, the inference module 46 updates the inferred-costs and resulting total costs for each set of motions associated with the different trajectories for traveling along the route received from the global route planner module 42, as described in detail above. At 612, the inference module 46 outputs the updated total costs for each of the sets of motions for traveling along the route received from the global route planner module 42 to the motion planner module 48, as discussed in detail above. The inference module 46 then loops back to 606 and receives additional dynamic environmental information, as discussed in detail above.
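Tying steps 604 through 612 together, the loop below is a compact sketch of algorithm 600 that reuses the update_p_high and inferred_cost helpers sketched earlier; the perception and motion-planner interfaces, the candidate label "keep_lane", and the constant are all illustrative assumptions:

```python
def inference_loop(route, base_costs, perception, motion_planner, p_high=0.05):
    """Sketch of algorithm 600. Step 604 receives the route and initial costs;
    `base_costs` maps each candidate set of motions to its cost-to-goal +
    motion-costs + collision-cost (equation (1) without the inferred-cost term).
    Steps 606-612 then repeat."""
    while True:
        obs = perception.get_dynamic_obstacles()                    # step 606 (hypothetical API)
        p_high = update_p_high(p_high, obs.preceding_changed_lane)  # step 608
        total_costs = dict(base_costs)                              # step 610
        total_costs["keep_lane"] += inferred_cost(p_high, constant=100.0)
        motion_planner.set_costs(total_costs)                       # step 612 (hypothetical API)
```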
Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms, and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
In this application, including the definitions below, the terms “module” and “system” may refer to, be part of, or include circuits or circuitry that may include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware. The code is configured to provide the features of the modules and systems described herein. In addition, in this application the terms “module” and “system” may be replaced with the term “circuit.”
The terminology used is for the purpose of describing particular example embodiments only and is not intended to be limiting. The singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “comprising,” “including,” and “having,” are inclusive and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to,” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). The term “and/or” includes any and all combinations of one or more of the associated listed items.
Although the terms first, second, third, etc. may be used to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.