RELATED APPLICATIONS
This application claims benefit of priority to Provisional U.S. Patent Application No. 62/412,041, filed on Oct. 24, 2016; and to Provisional U.S. Patent Application No. 62/357,903, filed on Jul. 1, 2016; each of the aforementioned priority applications being hereby incorporated by reference in its respective entirety.
TECHNICAL FIELD
Examples described herein relate to a submap system for autonomously operating vehicles.
BACKGROUND
Vehicles are increasingly implementing autonomous control. Many human-driven vehicles, for example, have modes in which the vehicle can follow within a lane and change lanes.
Fully autonomous vehicles refer to vehicles which can replace human drivers with sensors, computer-implemented intelligence, and other automation technology. Under existing technology, autonomous vehicles can readily handle driving with other vehicles on roadways such as highways.
Autonomous vehicles, whether human-driven hybrids or fully autonomous, operate using data that provides a machine understanding of their surrounding area.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example map system for enabling autonomous control and operation of a vehicle.
FIG. 2 illustrates a submap network service, according to one or more embodiments.
FIG. 3 illustrates a submap data aggregation that stores and links multiple versions of submaps, collectively representing linked roadway segments for a geographic region, according to one or more examples.
FIG. 4 illustrates an example of a control system for an autonomous vehicle.
FIG. 5 is a block diagram of a vehicle system on which an autonomous vehicle control system may be implemented.
FIG. 6 is a block diagram of a network service or computer system on which some embodiments may be implemented.
FIG. 7 illustrates an example method for operating a vehicle using a submap system, according to one or more examples.
FIG. 8 illustrates an example method for distributing mapping information to vehicles of a geographic region for use in autonomous driving, according to one or more examples.
FIG. 9 illustrates an example method for providing guidance to autonomous vehicles.
FIG. 10 illustrates an example sensor processing sub-system for an autonomous vehicle, according to one or more embodiments.
FIG. 11 illustrates an example of a vehicle on which an example of FIG. 10 is implemented.
FIG. 12 illustrates an example method for determining a location of a vehicle in motion using vehicle sensor data, according to an embodiment.
FIG. 13 illustrates a method for determining a location of a vehicle in motion using image data captured by the vehicle, according to an embodiment.
FIG. 14 illustrates a method for determining a location of a vehicle in motion using an image point cloud and image data captured by the vehicle, according to an embodiment.
FIG. 15 illustrates an example method in which the perception output is used by a vehicle to process a scene.
DETAILED DESCRIPTION
Examples herein describe a system to use submaps to control operation of a vehicle. A storage system may be provided with a vehicle to store a collection of submaps that represent a geographic area where the vehicle may be driven. A programmatic interface may be provided to receive submaps and submap updates independently of other submaps.
As referred to herein, a submap is a map-based data structure that represents a geographic area of a road segment, with data sets that are computer-readable to facilitate autonomous control and operation of a vehicle. In some examples, a submap may include different types of data components that collectively provide a vehicle with information that is descriptive of a corresponding road segment. In some examples, a submap can include data that enables a vehicle to traverse a given road segment in a manner that is predictive or responsive to events which can otherwise result in collisions, or otherwise affect the safety of people or property. Still further, in some examples, a submap provides a data structure that can carry one or more data layers which fulfill a data consumption requirement of a vehicle when the vehicle is autonomously navigated through an area of a road segment. The data layers of the submap can include, or may be based on, sensor information collected from a same or different vehicle (or other source) which passed through the same area on one or more prior instances.
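As a purely illustrative sketch of the data structure described above, a submap might be organized as a record that pairs a road-segment boundary with a set of named data layers. The class, field, and layer names below are assumptions made for illustration and are not prescribed by this description.

```python
# Minimal illustrative sketch of a submap record carrying multiple data layers.
# All names (Submap, layer keys, fields) are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Submap:
    submap_id: str                        # identifier; may also encode a version
    boundary: List[Tuple[float, float]]   # geographic boundary of the road segment
    layers: Dict[str, object] = field(default_factory=dict)

    def layer(self, name: str):
        """Return one data layer (e.g. "localization", "perception",
        "road_surface", "labels") for consumption by an on-vehicle process."""
        return self.layers.get(name)

# Example: a submap for one road segment with two of its data layers populated.
segment = Submap(
    submap_id="seg-0417:v3",
    boundary=[(37.7740, -122.4190), (37.7750, -122.4170)],
    layers={"labels": ["crosswalk", "stop_sign"], "road_surface": {"lanes": 2}},
)
```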
One or more embodiments described herein provide that methods, techniques, and actions performed by a computing device are performed programmatically, or as a computer-implemented method. Programmatically, as used herein, means through the use of code or computer-executable instructions. These instructions can be stored in one or more memory resources of the computing device. A programmatically performed step may or may not be automatic.
One or more embodiments described herein can be implemented using programmatic modules, engines, or components. A programmatic module, engine, or component can include a program, a sub-routine, a portion of a program, or a software component or a hardware component capable of performing one or more stated tasks or functions. As used herein, a module or component can exist on a hardware component independently of other modules or components. Alternatively, a module or component can be a shared element or process of other modules, programs or machines.
Numerous examples are referenced herein in context of an autonomous vehicle. An autonomous vehicle refers to any vehicle which is operated in a state of automation with respect to steering and propulsion. Different levels of autonomy may exist with respect to autonomous vehicles. For example, some vehicles today enable automation in limited scenarios, such as on highways, provided that drivers are present in the vehicle. More advanced autonomous vehicles drive without any human driver inside the vehicle. Such vehicles often are required to make advance determinations regarding how the vehicle is to behave given challenging surroundings in the vehicle's environment.
Map System
FIG. 1 illustrates an example map system for enabling autonomous control and operation of a vehicle. In an example of FIG. 1, a submap information processing system ("SIPS 100") may utilize submaps which individually represent a corresponding road segment of a road network. By way of example, each submap can represent a segment of a roadway that may encompass a block, or a number of city blocks (e.g., 2-5 city blocks). Each submap may carry multiple types of data sets, representing known information and attributes of an area surrounding the corresponding road segment. The SIPS 100 may be implemented as part of a control system for a vehicle 10 that is capable of autonomous driving. In this way, the SIPS 100 can be implemented to enable the vehicle 10, operating under autonomous control, to obtain known attributes and information for an area of a road segment. The known attributes and information, which are additive to the identification of the road network within the submap, enable the vehicle 10 to responsively and safely navigate through the corresponding road segment.
Among other utilities, the SIPS 100 can provide input for an autonomous vehicle control system 400 (see FIG. 4), in order to enable the vehicle 10 to operate and (i) plan/implement a trajectory or route through a road segment based on prior knowledge about the road segment, (ii) process sensor input about the surrounding area of the vehicle with understanding about what types of objects are present, (iii) detect events which can result in potential harm to the vehicle, or persons in the area, and/or (iv) detect and record conditions which can affect other vehicles (autonomous or not) passing through the same road segment. In variations, other types of functionality can also be implemented with use of submaps. For example, in some variations, individual submaps can also carry data for enabling the vehicle 10 to drive under different driving conditions (e.g., weather variations, time of day variations, traffic variations, etc.).
In some examples, the vehicle 10 can locally store a collection of stored submaps 105 which are relevant to a geographic region that the vehicle 10 is anticipated to traverse during a given time period (e.g., later in trip, following day, etc.). The collection of stored submaps 105 may be retrieved from, for example, a submap network service 200 (see FIG. 2) that maintains and updates a larger library of submaps for multiple vehicles (or user-vehicles).
With respect to the vehicle 10, each of the stored submaps 105 can represent an area of a road network, corresponding to a segment of the road network and its surrounding area. As described with some examples, individual submaps may include a collection of data sets that represent an area of the road segment within a geographic region (e.g., city, or portion thereof). Furthermore, each of the submaps 105 can include data sets (sometimes referred to as data layers) to enable an autonomous vehicle 10 to perform operations such as localization, as well as detection and recognition of dynamic objects.
In an example of FIG. 1, the SIPS 100 includes a submap retrieval component 110, a submap processing component 120, a submap network interface 130, a submap manager 136, and roadway data aggregation processes 140. As the SIPS 100 may be implemented as part of the AV control system 400, the SIPS 100 may utilize or incorporate resources of the vehicle 10, including processing and memory resources, as well as sensor devices of the vehicle (e.g., Lidar, stereoscopic and/or depth cameras, video feed, sonar, radar, etc.). In some examples, the SIPS 100 employs the submap network interface 130, in connection with the submap network service 200 (FIG. 2), to receive new or replacement submaps 131 and/or submap updates 133. In some examples, the submap network interface 130 can utilize one or more wireless communication interfaces 94 of the vehicle 10 in order to wirelessly communicate with the submap network service 200 (e.g., see FIG. 2) and receive new or replacement submaps 131 and/or submap updates 133. In variations, the submap network interface 130 can receive new or replacement submaps 131 and/or submap updates 133 from other remote sources, such as other vehicles.
In addition to receiving the new or replacement submaps 131 and submap updates 133, the submap network interface 130 can communicate vehicle data 111 to the submap network service 200. The vehicle data 111 can include, for example, the vehicle location and/or vehicle identifier.
The submap manager 136 can receive the new or replacement submaps 131 and/or submap updates 133, and create a stored collection of submaps 105 utilizing an appropriate memory component 104A, 104B. In some examples, the submaps have a relatively large data size, and the vehicle 10 retrieves the new submaps 131 when such submaps are needed. The submap network interface 130 can also receive submap updates 133 for individual submaps, or groups of submaps, stored as part of the collection 105. The submap manager 136 can include processes to manage the storage, retrieval and/or updating of stored submaps 105, in connection with, for example, the submap network service 200 (see FIG. 2) and/or other submap data sources (e.g., other vehicles).
In some examples, the submap manager 136 can implement co-location storage operations 109 as a mechanism to manage the stored submaps of the collection 105 in a manner that enables the data sets of the submaps to be rapidly retrieved and utilized by an AV control system 400. In some examples, the individual submaps of the collection 105 may include a combination of rich data sets which are linked by other data elements (e.g., metadata). An example submap with organized data layers is provided with FIG. 3. Given the range in velocity of the vehicle 10, and the amount of data which is collected and processed through the various sensors of the vehicle 10, examples recognize that storing the data sets of individual submaps in physical proximity to one another on the memory components 104A, 104B of the vehicle 10 can reduce memory management complexity and time lag when individual submaps of the collection 105 are locally retrieved and utilized. Examples further recognize that physically grouping individually stored submaps 105, representing adjacent or proximate geographic areas, in physical proximity to one another on respective memory components 104A, 104B of the vehicle 10 further promotes the ability of the SIPS 100 to make timely transitions from one submap to another.
In the example shown by FIG. 1, the SIPS 100 utilizes multiple memory components 104A, 104B (collectively "memory components 104"). The submap manager 136 can implement co-location storage operations 109 to store submaps 105 relating to a particular area or sub-region of a road network on only one of the memory components 104. In variations, the submap manager 136 can implement co-location storage operations 109 to identify memory cells of the selected memory component 104 which are adjacent or near one another for the purpose of carrying data of a given submap, or data for two or more adjacent submaps.
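By way of a hedged illustration, the co-location strategy described above can be pictured as an allocator that keeps a submap's layers, and submaps for adjacent segments, in consecutive cells of a single memory component. All names below (MemoryComponent, allocate_contiguous, co_locate) are assumptions introduced only for this sketch.

```python
# Illustrative sketch of co-location storage: consecutive cells on one memory
# component hold the layers of one submap, and submaps of adjacent segments
# are written back to back. Names and the block model are assumptions.
class MemoryComponent:
    def __init__(self, capacity_cells):
        self.cells = [None] * capacity_cells
        self.next_free = 0

    def allocate_contiguous(self, payloads):
        """Write a list of payloads into consecutive cells and return the
        starting index, which can be kept for fast retrieval."""
        start = self.next_free
        for offset, payload in enumerate(payloads):
            self.cells[start + offset] = payload
        self.next_free = start + len(payloads)
        return start

def co_locate(subregion_submaps, component):
    """Store every submap of one sub-region on a single memory component,
    with each submap's data layers in adjacent cells."""
    index = {}
    for submap_id, layers in subregion_submaps.items():
        index[submap_id] = component.allocate_contiguous(list(layers.values()))
    return index

# Usage: two adjacent submaps of one sub-region kept on one component.
component = MemoryComponent(capacity_cells=64)
index = co_locate(
    {"seg-0417:v3": {"localization": b"...", "perception": b"..."},
     "seg-0418:v3": {"localization": b"...", "perception": b"..."}},
    component,
)
```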
According to some examples, the submap retrieval component 110 includes processes for performing a local search or retrieval for stored submaps 105 provided with the memory components 104. The submap retrieval component 110 can signal submap selection input 123 to the submap manager 136 in order to locally retrieve 107 one or more submaps 125 for immediate processing (e.g., a sub-region for an upcoming segment of a trip). In some instances, examples provide that the selection input 123 can be generated from a source that provides an approximate location of the vehicle 10. In one implementation, the selection input 123 is used to retrieve an initial set of submaps 125 for a road trip of the vehicle 10. The selection input 123 may be obtained from, for example, the last known location of the vehicle prior to the vehicle being turned off in the prior use. In other variations, the selection input 123 can be obtained from a location determination component (e.g., a satellite navigation component, such as provided by a Global Navigation Satellite System (or "GNSS") type receiver) of the vehicle 10.
The submap manager 136 may respond to receiving the selection input 123 by accessing a database of the local memory components 104 where a relevant portion of the collection of submaps 105 is stored. The submap manager 136 may be responsive to the selection input 123, in order to retrieve from the local memory components 104 an initial set of submaps 125. The initial set of submaps 125 can include one or multiple submaps, each of which spans a different segment of a road or road network that includes, for example, a geographic location corresponding to the selection input 123.
Each of the stored submaps 105 may include data layers corresponding to multiple types of information about a corresponding road segment. For example, submaps may include data to enable the SIPS 100 to generate a point cloud of its environment, with individual points of the cloud providing information about a specific point in three-dimensional space of the surrounding environment. In some examples, the individual points of the point cloud may include or be associated with image data that visually depicts a corresponding point in three-dimensional space. Image data which forms individual points of a point cloud is referred to as "imagelets". In some examples, the imagelets of a point cloud may depict surface elements, captured through Lidar (sometimes referred to as "surfels"). Still further, in some examples, the imagelets of a point cloud may include other information, such as a surface normal (or unit vector describing orientation). As an addition or variation, the points of the point cloud may also be associated with other types of information, including semantic labels, road network information, and/or a ground layer data set. In some examples, each of the stored submaps 105 may include a feature set 113 that identifies features which are present in a surrounding area of the road segment corresponding to that submap.
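As a rough illustration of this kind of localization data, each point could pair a three-dimensional position with an imagelet and optional surface normal and labels. The record layout below is an assumption for illustration, not the actual encoding.

```python
# Illustrative sketch of a localization-layer point: a 3-D position paired
# with an imagelet (small image patch), an optional surface normal, and
# semantic labels. The layout is an assumption for illustration only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class CloudPoint:
    position: Tuple[float, float, float]          # x, y, z in the submap frame
    imagelet: np.ndarray                          # e.g. an 8x8 RGB patch
    normal: Optional[Tuple[float, float, float]] = None  # surface orientation
    labels: List[str] = field(default_factory=list)      # e.g. ["curb"]

# A toy localization layer: a list of such points forms the point cloud
# that the vehicle compares against its live sensor view.
localization_layer = [
    CloudPoint((12.1, -3.4, 0.2), np.zeros((8, 8, 3), dtype=np.uint8),
               normal=(0.0, 0.0, 1.0), labels=["road_surface"]),
]
```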
The submap processing component 120 may include submap start logic 114 for scanning individual submaps of an initially retrieved submap set 125, to identify the likely submap for an initial location of the vehicle 10. In one implementation, the submap processing component 120 implements the start component 122 as a coarse or first-pass process to compare the submap feature set 113 of an initially retrieved submap against a current sensor state 493, as determined from one or more sensor interfaces or components of the vehicle's sensor system 492. The start logic 114 may perform the comparison to identify, for example, a current submap 145 of the initial set which contains the feature of a landmark detected as being present in the current sensor state 493 of the vehicle 10. Once the current submap 145 is identified, the submap processing component 120 can perform a more refined localization process using the current submap 145, in order to determine a more precise location of the vehicle 10 relative to the starting submap. In some examples, the submap processing component 120 can track the movement of the vehicle 10 in order to coordinate the retrieval and/or processing of a next submap that is to be the current submap 145, corresponding to an adjacent road segment that the vehicle traverses during a trip.
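A hedged sketch of the coarse first-pass match might score each candidate submap by how many of its known features appear among the features currently detected by the sensors. The scoring rule and every name below are assumptions, not a description of the actual start logic.

```python
# Illustrative first-pass "start" scan: pick the submap whose stored feature
# set best overlaps the features detected in the current sensor state.
# Function and variable names are assumptions made for this sketch.
def pick_starting_submap(initial_submaps, detected_features):
    """initial_submaps: mapping of submap_id -> set of known feature labels.
    detected_features: set of feature labels extracted from live sensor data."""
    best_id, best_score = None, -1
    for submap_id, feature_set in initial_submaps.items():
        score = len(feature_set & detected_features)   # coarse overlap count
        if score > best_score:
            best_id, best_score = submap_id, score
    return best_id

# Usage: the landmark "water_tower" seen by the sensors selects seg-0417.
current = pick_starting_submap(
    {"seg-0417:v3": {"water_tower", "crosswalk"}, "seg-0418:v3": {"overpass"}},
    {"water_tower", "sedan", "pedestrian"},
)
```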
With further reference to an example of FIG. 1, the submap retrieval component 110 can select the current submap 145 for processing by the submap processing component 120. The submap processing component 120 can process the current submap 145 contemporaneously, or near contemporaneously, with the vehicle's traversal of the corresponding road segment. The data layers provided with the current submap 145 enable the vehicle 10 to drive through the road segment in a manner that is predictive or responsive to events or conditions which are otherwise unknown.
According to some examples, the submap retrieval and processing components 110, 120 can execute to retrieve and process a series of submaps 125 in order to traverse a portion of a road network that encompasses multiple road segments. In this manner, each submap of the series can be processed as the current submap 145 contemporaneously with the vehicle 10 passing through the corresponding area or road segment. In some examples, the submap processing component 120 extracts, or otherwise determines, the submap feature set 113 for an area of the road network that the vehicle traverses. The submap processing component 120 compares the submap feature set 113 of the current submap 145 to the current sensor state 493 as provided by the vehicle's sensor system 492. The comparison can involve, for example, performing transformations of sensor data, and/or image processing steps such as classifying and/or recognizing detected objects or portions of a scene.
As the vehicle progresses on a trip, some examples provide for the submap processing component 120 to use tracking logic 118 to maintain an approximate position of the vehicle 10 until localization is performed. The tracking logic 118 can process, for example, telemetry information (e.g., accelerometer, speedometer) of the vehicle, as well as follow-on sensor data from the sensor system 492 of the vehicle, to approximate the progression and/or location of the vehicle as it passes through a given area of a submap. The tracking logic 118 can trigger and/or confirm the progression of the vehicle from, for example, one submap to another, or from one location within a submap to another location of the same submap. After a given duration of time, the submap processing component 120 can process a next submap contemporaneously with the vehicle's progression into the area represented by the next submap.
In some examples, the submap processing component 120 processes the current submap 145 to determine outputs for use with different logical elements of the AV control system 400. In one implementation, the output includes localization output 121, which can identify a precise or highly granular location of the vehicle, as well as the pose of the vehicle. In some examples, the location of the vehicle can be determined to a degree that is more granular than that which can be determined from, for example, a satellite navigation component. As an addition or variation, the output of the submap processing component 120 includes object data sets, which locate and label a set of objects detected from the comparison of the current sensor state 493 and the submap feature set 113.
According to some examples, the submap processing component 120 can include a localization component 122 to perform operations for determining the localization output. The localization output 121 can be determined at discrete instances while the vehicle 10 traverses the area of the road segment corresponding to the current submap 145. The localization output 121 can include location coordinates 117 and pose 119 of the vehicle 10 relative to the current submap 145. In some examples, the localization component 122 can compare information from the current sensor state 493 (e.g., Lidar data, imagery, sonar, radar, etc.) to the feature set 113 of the current submap 145. Through sensor data comparison, the location of the vehicle 10 can be determined with specificity that is significantly more granular than what can be determined through use of a satellite navigation component. In some examples, the location coordinates 117 can specify a position of the vehicle within the reference frame of the current submap 145 to be of a magnitude that is less than 1 foot (e.g., 6 inches or even less, approximately the diameter of a tire, etc.). In this way, the location coordinates 117 can pinpoint the position of the vehicle 10 both laterally and in the direction of travel. For example, for a vehicle in motion, the location coordinates 117 can identify any one or more of: (i) the specific lane the vehicle occupies, (ii) the position of the vehicle within an occupied lane (e.g., on the far left side of a lane), (iii) the location of the vehicle in between lanes, and/or (iv) a distance of the vehicle from a roadside boundary, such as a shoulder, sidewalk curb or parked car.
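One way to picture the fine localization step is as a pose search that scores candidate poses by how well live sensor points align with the submap's stored points. The nearest-neighbor scoring and two-dimensional grid search below are a simplification chosen for brevity, and every name in the sketch is an assumption; a production system would more likely use a registration method such as ICP-style alignment.

```python
# Illustrative (simplified) fine localization: score candidate poses by how
# closely live sensor points, transformed into the submap frame, land near
# stored point-cloud points. The 2-D grid search is only a sketch.
import numpy as np

def score_pose(sensor_points, submap_points, x, y, heading):
    c, s = np.cos(heading), np.sin(heading)
    rotation = np.array([[c, -s], [s, c]])
    transformed = sensor_points @ rotation.T + np.array([x, y])
    # Distance from each transformed sensor point to its nearest submap point.
    dists = np.linalg.norm(
        transformed[:, None, :] - submap_points[None, :, :], axis=2).min(axis=1)
    return -dists.mean()   # higher is better

def localize(sensor_points, submap_points, coarse_x, coarse_y):
    """Search a small grid of poses around a coarse estimate (e.g. from GNSS
    or tracking logic) and return the best x, y, heading."""
    best, best_score = (coarse_x, coarse_y, 0.0), -np.inf
    for dx in np.arange(-1.0, 1.01, 0.1):          # ~10 cm lateral resolution
        for dy in np.arange(-1.0, 1.01, 0.1):
            for heading in np.arange(-0.1, 0.11, 0.02):
                s = score_pose(sensor_points, submap_points,
                               coarse_x + dx, coarse_y + dy, heading)
                if s > best_score:
                    best, best_score = (coarse_x + dx, coarse_y + dy, heading), s
    return best

# Usage: refine a coarse estimate of roughly (10, 5) in the submap frame.
pose = localize(np.array([[1.0, 0.0]]), np.array([[11.0, 5.0]]), 10.0, 5.0)
```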
As an addition or a variation, the submap processing component 120 can include a perception component 124 which provides perception output 129 representing objects that are detected (through analysis of the current sensor state 493) as being present in the area of the road network. The perception component 124 can determine the perception output to include, for example, a set of objects (e.g., dynamic objects, road features, etc.). In determining the perception output 129, the perception component 124 can compare detected objects from the current sensor state 493 with known and static objects identified with the submap feature set 113. The perception component 124 can generate the perception output 129 to identify (i) static objects which may be in the field of view, (ii) non-static objects which may be identified or tracked, and/or (iii) an image representation of the area surrounding a vehicle with static objects removed or minimized, so that the remaining data of the current sensor state 493 is centric to dynamic objects.
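A hedged sketch of this subtraction idea: remove detections that match known static objects from the submap's feature set so that what remains is dominated by dynamic objects. The matching-by-distance rule, threshold, and names below are assumptions for illustration.

```python
# Illustrative sketch of perception "subtraction": detections that coincide
# with known static objects from the submap feature set are filtered out,
# leaving candidate dynamic objects. Threshold and names are assumptions.
def subtract_static_objects(detections, static_objects, radius_m=0.5):
    """detections / static_objects: lists of (x, y, label) in the submap frame.
    Returns detections that do not match any known static object."""
    dynamic = []
    for dx, dy, dlabel in detections:
        matched = any(
            ((dx - sx) ** 2 + (dy - sy) ** 2) ** 0.5 < radius_m and dlabel == slabel
            for sx, sy, slabel in static_objects)
        if not matched:
            dynamic.append((dx, dy, dlabel))
    return dynamic

# Usage: the mailbox is known to the submap and is removed; the pedestrian remains.
remaining = subtract_static_objects(
    detections=[(4.0, 1.0, "mailbox"), (6.2, -0.5, "pedestrian")],
    static_objects=[(4.1, 1.1, "mailbox")],
)
```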
In order to navigate the vehicle on a trip, the submap retrieval component 110 identifies and retrieves the next submap(s) from the submap manager 136. The next submaps that are retrieved by the submap retrieval component 110 can be identified from, for example, a determined trajectory of the vehicle 10 and/or a planned or likely route of the vehicle 10. In this way, the submap retrieval component 110 can repeatedly process, during a given trip, a series of submaps to reflect a route of the vehicle over a corresponding portion of the road network. The submap processing component 120 can process the current submaps 125 from each retrieved set in order to determine localization output 121 and perception output 129 for the AV control system 400.
According to some examples, the stored submaps 105 can be individually updated, independently of other submaps of a geographic region. As a result, the SIPS 100 can manage updates to its representation of a geographic region using smaller and more manageable units of target data. For example, when conditions or events on a specific segment of the road network merit an update, the SIPS 100 can receive and implement updates to a finite set of submaps (e.g., one to three submaps, a square kilometer or half-kilometer, etc.) rather than update a map representation for the larger geographic region. Additionally, the ability of the submap processing component 120 to use submaps which are independently updated allows the vehicle 10 and/or other vehicles of the geographic region to aggregate information for enabling updates to submaps used on other vehicles.
As described with other examples, the vehicle 10 can operate as part of a group of vehicles (or user-vehicles) which utilize submaps in order to autonomously navigate through a geographic region. In cases where multiple vehicles using submaps traverse the road network of a given geographic region, some embodiments provide that individual vehicles can operate as observers for conditions and patterns from which submap features can be determined. As described with other examples, the submap network service 200 (FIG. 2) can implement a variety of processes in order to generate sensor data, labels, point cloud information and/or other data from which submap data can be generated and used to update corresponding submaps. For a given geographic region, different submaps can be updated based on events, changing environmental conditions (e.g., weather) and/or refinements to existing models or submap feature sets.
With operation of the vehicle 10, the roadway data aggregation processes 140 can receive and aggregate sensor data input 143 from one or more vehicle sensor sources (shown collectively as the vehicle sensor system 492). The sensor data input 143 can, for example, originate in raw or processed form from sensors of the vehicle 10, or alternatively, from sensor components of the vehicle which process the raw sensor data. The roadway data aggregation processes 140 can process and aggregate the sensor data input 143 to generate aggregated sensor data 141. The aggregated sensor data 141 may be generated in accordance with a protocol, which can specify raw data processing steps (e.g., filtering, refinements), data aggregation, conditions for synchronous or asynchronous (e.g., offline) transmissions of aggregated sensor data 141 and/or other aspects of sensor data aggregation, storage, and transmission.
In one implementation, the aggregated sensor data 141 can be transmitted to the submap network service 200 via the wireless communication interface 94 of the vehicle 10. In variations, the aggregated sensor data 141 can be used to generate local updates for one or more stored submaps 105. In some variations, the roadway data aggregation processes 140 can collect sensor data input 143 and perform, for example, variance analysis using variance logic 144. The variance logic 144 may be used to generate a local submap update 149, which can be used to update a corresponding submap of the collection 105 via the submap manager 136.
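One plausible reading of the variance analysis is a comparison between what the sensors repeatedly observe and what the stored submap expects, flagging persistent disagreements as a candidate local update. The counting heuristic and all names below are assumptions, not a description of the actual variance logic 144.

```python
# Illustrative variance check: features observed repeatedly across recent
# passes but absent from the stored submap (or vice versa) become a candidate
# local submap update. The thresholding heuristic and names are assumptions.
from collections import Counter

def variance_update(observed_passes, submap_features, min_observations=3):
    """observed_passes: list of sets of feature labels, one set per recent pass.
    submap_features: set of feature labels in the stored submap."""
    counts = Counter(f for features in observed_passes for f in features)
    persistent = {f for f, n in counts.items() if n >= min_observations}
    return {
        "add": sorted(persistent - submap_features),       # newly appeared features
        "remove": sorted(submap_features - persistent),    # features no longer seen
    }

# Usage: a construction barrier seen on three passes is proposed as an update.
update = variance_update(
    observed_passes=[{"barrier", "lane_marking"}] * 3,
    submap_features={"lane_marking", "mailbox"},
)
```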
While examples provide for submaps to be independently updated, examples further recognize that updates to submaps can make the use of such submaps incompatible with other submaps. For example, if one submap of a given area is updated while an adjacent submap is not, then the submap processing component 120 may not be able to transition from one submap to the adjacent submap. By way of example, the update for the submap may cause the submap processing component 120 to process the submap using an algorithm or logic that is different than what was previously used. In some examples, the submap processing component 120 can be updated in connection with updates to submaps that are received and processed on that vehicle. For example, new submaps 131 received by the submap network interface 130 may include instructions, code, or triggers that are executable by the SIPS 100 (e.g., by the submap manager 136) to cause the vehicle 10 to retrieve or implement a particular logic from which the submap is subsequently processed.
According to some examples, the new submaps 131 retrieved from the submap network service 200 (or other remote source) are versioned to reflect which submap updates are present on the particular submap. In some examples, an update to a given submap can affect a particular type of data set or data layer on the submap. Still further, in other variations, the update to the submap can be programmatic (e.g., alter an algorithm used to process a data layer of the submap) or specific to data sets used by processes which consume the data layer of the submap.
Still further, in some variations, the submap retrieval component 110 may include versioning logic 116 which identifies the version of the submap (e.g., from the UID of the submap) and then retrieves a next submap that is of the same or a compatible version. As described with other examples, the new submaps 131 of the collection can be structured to include connector data sets 308 (see FIG. 3) which enable the vehicle to stitch consecutive submaps together as the vehicle 10 progresses through a road network.
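A rough sketch of such versioning logic might parse a version tag out of each submap identifier and only accept a next submap whose version is listed as compatible by the connector data. The "id:version" identifier format and the compatibility table below are assumptions made for this sketch.

```python
# Illustrative versioning check when retrieving the next submap: only accept
# a candidate whose version the current submap's connector data declares
# compatible. The "id:version" format and table layout are assumptions.
def version_of(submap_id):
    return submap_id.rsplit(":", 1)[1]          # e.g. "seg-0418:v3" -> "v3"

def next_compatible_submap(current_id, candidate_ids, connectors):
    """connectors: mapping of current submap id -> set of compatible versions
    for the adjacent road segment (a stand-in for connector data sets 308)."""
    allowed = connectors.get(current_id, set())
    for candidate in candidate_ids:
        if version_of(candidate) in allowed:
            return candidate
    return None   # caller may need to request an updated submap from the service

# Usage: v3 of the current segment links only to v3 of the next segment.
chosen = next_compatible_submap(
    "seg-0417:v3",
    ["seg-0418:v2", "seg-0418:v3"],
    {"seg-0417:v3": {"v3"}},
)
```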
Submap Network Service
FIG. 2 illustrates a submap network service, according to one or more embodiments. In one implementation, the submap network service 200 can be implemented on a server, or combination of servers, which communicate with network-enabled vehicles that traverse a road network of a geographic region. In a variation, the submap network service 200 can be implemented in alternative computing environments, such as a distributed environment. For example, some or all of the functionality described may be implemented on a vehicle, or combination of vehicles, which collectively form a mesh or peer network. In some examples, a group of vehicles, in operation within a geographic region, may implement a mesh network, or peer-to-peer network, to transmit and receive data, including submaps and data for creating or updating submaps.
In an example of FIG. 2, the submap network service 200 includes a vehicle interface 210, a vehicle monitor 220, a sensor data analysis sub-system 230 and a submap service manager 240. The vehicle interface 210 provides the network interface that can communicate with one or multiple vehicles in a given geographic region. In some implementations, the vehicle interface 210 receives communications, which include vehicle data 211, from individual vehicles that wirelessly communicate with the submap network service 200 during their respective operations. The vehicle monitor 220 can receive, store and manage various forms of vehicle data 211, including a vehicle identifier 213, a vehicle location 215, and a current submap version 217 for each vehicle. The vehicle data 211 can be stored in, for example, a vehicle database 225.
According to some examples, the vehicle monitor 220 manages the transmission of new submaps 231 and submap updates 233 to vehicles of the geographic region. The vehicle monitor 220 retrieves a set of submaps 237 for individual vehicles 10 from the submap service manager 240. In one implementation, the vehicle monitor 220 retrieves separate sets of submaps 237 for different vehicles, based on the vehicle data 211 stored in the vehicle database 225. For example, the vehicle monitor 220 can retrieve a submap set 237 for a given vehicle using the vehicle identifier 213, the vehicle location 215 associated with the vehicle identifier, and/or the current submap version 217 for the vehicle identifier.
The submap service manager 240 can manage storage and retrieval of individual submaps 239 from a submap database 248. One or multiple submap sources can create submaps and/or update individual submaps stored in the submap database 248 or a similar memory structure. The submap service manager 240 can include a submap selection and version matching component 242 that can select sets of submaps 237 for individual vehicles of a geographic region. The submap selection/version matching component 242 can select sets of submaps 237 for individual vehicles, based on the vehicle location 215 and the vehicle submap version 217. In response to receiving the vehicle location 215, for example, the submap selection/version matching component 242 may search the submap database 248 to identify submaps for the geographic region of the vehicle, having a same or compatible submap version.
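Sketched naively, the selection and version-matching step could be a query that filters stored submaps to those covering the reported location and carrying a version compatible with the version the vehicle already carries. The in-memory query, record shape, and names below are assumptions for illustration.

```python
# Illustrative service-side selection: filter the submap catalog by coverage
# of the vehicle's reported location and by version compatibility with the
# version the vehicle already carries. Record shape and names are assumptions.
def select_submaps(catalog, vehicle_location, vehicle_version, compatible):
    """catalog: list of dicts with 'id', 'version', and 'bounds' (min/max lat-lng).
    compatible: mapping of vehicle_version -> set of acceptable submap versions."""
    lat, lng = vehicle_location
    acceptable = compatible.get(vehicle_version, {vehicle_version})
    selected = []
    for record in catalog:
        (min_lat, min_lng), (max_lat, max_lng) = record["bounds"]
        covers = min_lat <= lat <= max_lat and min_lng <= lng <= max_lng
        if covers and record["version"] in acceptable:
            selected.append(record["id"])
    return selected

# Usage: a v3 vehicle at the given coordinates receives only v3-compatible submaps.
submap_set = select_submaps(
    catalog=[{"id": "seg-0417", "version": "v3",
              "bounds": ((37.77, -122.42), (37.78, -122.41))}],
    vehicle_location=(37.775, -122.415),
    vehicle_version="v3",
    compatible={"v3": {"v3"}},
)
```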
To retrieve a submap for a vehicle 10, the vehicle monitor 220 may communicate selection input 235 (based on or corresponding to the vehicle location 215) for the vehicle, as well as other information which would enable the submap service manager 240 to select the correct submap and version for the vehicle at the given location. For example, the vehicle database 225 can associate a vehicle identifier with the submap version of the vehicle 10. In variations, the vehicle 10 can communicate its submap version when requesting submaps from the submap network service 200.
The submap service manager 240 can initiate return of a new set of submaps 237 for a given submap request of a vehicle. The new submap sets 237 can be selected from the submap database 248 and communicated to a specific vehicle via the vehicle interface 210. For example, individual vehicles 10 may carry (e.g., locally store and use) a limited set of submaps, such as submaps which the vehicle is likely to need over a given duration of time. But if the vehicle traverses the geographic region such that submaps for other localities are needed, the vehicle 10 may request additional submaps from the submap network service 200, and then receive the new submap sets 237 based on the vehicle's potential range.
In some variations, the submap network service 200 also generates submap updates 233 for vehicles of the geographic region. The submap updates 233 for a given submap may correspond to any one of an updated submap, an updated data layer or component of the submap, or a data differential representing the update to a particular submap. As described in greater detail, the submap update 233 to a given submap may result in a new submap version.
According to some examples, the submap network service 200 can include submap distribution logic 246 to interface with the submap database 248. The submap distribution logic 246 may receive update signals 249 signifying when new submap sets 237 and/or submap updates 233 are generated. The submap distribution logic 246 can trigger the vehicle monitor 220 to retrieve new submap sets 237 and/or submap updates 233 from the submap database 248 based on distribution input 245 communicated from the submap distribution logic 246. The distribution input 245 can identify vehicles by vehicle identifier 213, by class (e.g., vehicles which last received updates more than one week prior) or by other designation. The vehicle monitor 220 can determine the selection input 235 for a vehicle or set of vehicles based on the distribution input 245. The submap distribution logic 246 can generate the distribution input 245 to optimize distribution of updates to individual submaps of the submap database 248. The submap distribution logic 246 may also interface with the vehicle database 225 in order to determine which vehicles should receive new submap sets 237 and/or submap updates 233 based on the vehicle identifier 213, vehicle location 215 and submap version 217 associated with each vehicle. In this way, the submap distribution logic 246 can cause the distribution of new submap sets 237 and/or submap updates 233 to multiple vehicles of a given geographic region in parallel, so that multiple vehicles can receive new submap sets 237 and/or submap updates 233 according to a priority distribution that is optimized for one or more optimization parameters 247.
In one implementation, the optimization parameter 247 can correspond to a proximity parameter that reflects a distance between a current location of a vehicle and an area of the road network where submaps (of different versions) have recently been updated. By way of example, the submap distribution logic 246 can utilize the optimization parameter 247 to select vehicles (or classes of vehicles) from the geographic region which are prioritized to receive updated submaps. For example, vehicles which receive the series of submap sets 237 and updates 233 can include vehicles that are expected to traverse the regions of the road network which have corresponding submap updates sooner, based on their proximity or historical pattern of use.
In variations, the optimization parameter 247 can also de-select or de-prioritize vehicles which, for example, may be too close to an area of the geographic region that corresponds to the new submap sets 237 or submap updates. For vehicles that are too close, the de-prioritization may ensure the corresponding vehicle has time to receive and implement an updated submap before the vehicle enters a corresponding area of a road network.
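A hedged sketch of this proximity-based prioritization might rank vehicles by distance to the updated area while deferring vehicles already too close to receive and apply the update in time. The distance window, the approximate distance formula, and all names below are assumptions for illustration.

```python
# Illustrative proximity-based prioritization of which vehicles receive an
# update first: skip vehicles too close to apply it in time, then rank the
# rest nearest-first. The distance window and names are assumptions.
import math

def prioritize_vehicles(vehicle_locations, update_center,
                        min_km=2.0, max_km=50.0):
    """vehicle_locations: mapping of vehicle_id -> (lat, lng).
    update_center: (lat, lng) of the recently updated road area."""
    def rough_km(a, b):
        # Equirectangular approximation; adequate for a ranking sketch.
        dlat = math.radians(a[0] - b[0])
        dlng = math.radians(a[1] - b[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
        return 6371.0 * math.hypot(dlat, dlng)

    ranked = []
    for vehicle_id, loc in vehicle_locations.items():
        d = rough_km(loc, update_center)
        if min_km <= d <= max_km:          # too-close vehicles are deferred
            ranked.append((d, vehicle_id))
    return [vehicle_id for _, vehicle_id in sorted(ranked)]

# Usage: vehicle "b" is nearest within the window and is served first;
# vehicle "c" is too close and is deferred.
order = prioritize_vehicles(
    {"a": (37.90, -122.40), "b": (37.80, -122.41), "c": (37.776, -122.416)},
    update_center=(37.775, -122.415),
)
```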
In variations, other optimization parameters 247 can be used for determining the selection and/or priority of vehicles receiving new submap sets 237 and/or submap updates 233. Still further, the submap distribution logic 246 can utilize the vehicle operational state to determine whether other updates are to be distributed to the vehicle. For example, larger updates (e.g., a relatively large number of new submaps) may require vehicles to be non-operational when the update is received and implemented. Thus, some examples contemplate that at least some new submap sets 237 and/or submap updates 233 can be relatively small, and received and implemented by vehicles which are in an operational state (e.g., vehicles on a trip). Likewise, some examples contemplate that larger updates can be delivered to vehicles when those vehicles are in an off-state (e.g., in a low-power state in which updates can be received and implemented on the vehicle).
According to some examples, the submap network service 200 includes processes that aggregate sensor information from the vehicles that utilize the submaps, in order to determine at least some updates to submaps in use. As an addition or variation, the submap network service 200 can also receive and aggregate submap information from other sources, such as human-driven vehicles, specialized sensor vehicles (whether human operated or autonomous), and/or network services (e.g., a pot hole report on an Internet site). The sensor data analysis sub-system 230 represents logic and processes for analyzing vehicle sensor data 243 obtained from vehicles that traverse road segments represented by the submaps of the submap database 248. With reference to FIG. 1, for example, the vehicle sensor data 243 can correspond to output of the roadway data aggregation process 140.
The sensor analysis sub-system 230 can implement processes and logic to analyze the vehicle sensor data 243, and to detect road conditions which can or should be reflected in one or more data layers of a corresponding submap. Additionally, the sensor analysis sub-system 230 can generate data sets as sensor analysis determinations 265 to modify and update the data layers of the submaps to more accurately reflect a current or recent condition of the corresponding road segment.
According to some examples, the sensor analysis sub-system 230 implements sensor analysis processes on the vehicle sensor data 243 (e.g., three-dimensional depth images, stereoscopic images, video, Lidar, etc.). In one example, the sensor analysis sub-system 230 may include a classifier 232 to detect and classify objects from the vehicle sensor data 243. Additionally, the sensor analysis sub-system 230 may include an image recognition process 234 to recognize features from the sensor data for the detected objects. The classifier 232 and the image recognition process 234 can generate sensor analysis determinations 265. The sensor analysis determinations 265 can specify a classification of the detected objects, as well as features of the classified objects.
Other types of sensor analysis processes may also be used. According to some examples, the sensor analysis sub-system 230 includes a pattern analysis component 236 which implements pattern analysis on aggregations of sensor analysis determinations 265 for a particular road segment or area. In some examples, the vehicle data 211 links the vehicle sensor data 243 to one or more localization coordinates 117 (see FIG. 1), so that the vehicle sensor data 243 is associated with a precise location. The pattern analysis component 236 can process the vehicle sensor data 243 of multiple vehicles, for a common area (as may be defined by the localization coordinates 117 communicated by the individual vehicles) and over a defined duration of time (e.g., specific hours of a day, specific days of a week, etc.). The sensor analysis determinations 265 can be aggregated, and used to train models that are capable of recognizing objects in sensor data, particularly with respect to geographic regions and/or lighting conditions. Still further, the processes of the sensor analysis sub-system 230 can be aggregated to detect temporal or transient conditions, such as the time of day when traffic conditions arise. As described with some other examples, the sensor analysis determinations 265 can include object detection regarding the formation of instantaneous road conditions (e.g., a new road hazard), as well as pattern detection regarding traffic behavior (e.g., lane formation, turn restrictions in traffic intersections, etc.).
Accordingly, the sensor analysis determinations 265 of the sensor analysis sub-system 230 can include classified objects, recognized features, and traffic patterns. In order to optimize analysis, some variations utilize the feature set 223 of a corresponding submap for an area of a road network that is under analysis.
In some examples, a baseline component 252 can extract baseline input 257 from the submap of an area from which aggregated sensor analysis data is being analyzed. The baseline input 257 can include or correspond to the feature set 223 of the submaps associated with the area of the road network. The baseline component 252 can extract, or otherwise identify, the baseline input 257 as, for example, a coarse depiction of the road surface, static objects and/or landmarks of the area of the submap. The baseline input 257 can provide a basis for the sensor data analysis sub-system 230 to perform classification and recognition, and to identify new and noteworthy objects and conditions.
As an addition or variation, the submap comparison component 250 includes processes and logic which can compare sensor analysis determinations 265 of the sensor analysis sub-system 230 with the feature sets 223 of the corresponding submaps for the area of a road network. For example, the submap comparison component 250 can recognize when a classified and/or recognized object/feature output from the sensor analysis sub-system 230 is new or different as compared to the feature set 223 of the same submap. The submap comparison component 250 can compare, for example, objects and features of the vehicle's scene, as well as road surface conditions/features and/or lighting conditions, in order to generate a submap feature update 255 for the corresponding submap.
The update and versioning component 244 of the submap service manager 240 can implement processes to write the updates to the submap database 248 for the corresponding submap. Additionally, the update and versioning component 244 can implement versioning for an updated submap so that a given submap is consistent with submaps of adjacent areas before such submaps are transmitted to the vehicles. In order to version a given submap, some examples provide for the update and versioning component 244 to create a copy of the submap to which changes are made, so that two or more versions of a given submap exist in the submap database 248. This allows different vehicles to carry different versions of submaps, so that updates to submaps are not required to be distributed to all vehicles at once, but rather can be rolled out progressively, according to logic that can optimize bandwidth, network resources and vehicle availability to receive the updates.
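A minimal sketch of this copy-on-write versioning, assuming a simple in-memory catalog keyed by segment and version: an update copies the current version, applies the change, and registers the copy under a new version tag so older versions remain available to vehicles that have not yet been updated. The catalog layout and names are assumptions.

```python
# Illustrative copy-on-write versioning: an update never mutates the stored
# submap in place; it copies the latest version, applies the change, and
# stores the copy under a new version tag. Catalog layout is an assumption.
import copy

catalog = {
    ("seg-0417", "v3"): {"layers": {"perception": ["mailbox"]},
                         "connects_to": {"seg-0418": {"v3"}}},
}

def publish_update(catalog, segment, latest_version, new_version, layer, value):
    """Create a new submap version from the latest one with one layer changed."""
    updated = copy.deepcopy(catalog[(segment, latest_version)])
    updated["layers"][layer] = value
    catalog[(segment, new_version)] = updated     # older version stays in place
    return (segment, new_version)

# Usage: v4 adds a construction barrier; vehicles still on v3 keep working.
publish_update(catalog, "seg-0417", "v3", "v4",
               layer="perception", value=["mailbox", "barrier"])
```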
When submaps are updated to carry additional or new data reflecting a change in the area represented by the submap, the change may be discrete to reflect only one, or a specific number of, submaps. In some variations, however, the submap feature update 255 can affect other submaps, such as adjacent submaps (e.g., a lane detour). Additionally, the update and versioning component 244 can receive submap systematic updates 259 from external sources. The submap systematic updates 259 may affect submaps by class, requiring replacement of submaps or re-derivation of data layers. For example, the systematic updates 259 may require vehicles to implement specific algorithms or protocols in order to process a data layer of the submap. In some examples, the versioning component 244 can also configure the submaps to carry program files, and/or data to enable vehicles to locate and implement program files, for the purpose of processing the updated submaps.
When systematic updates occur to a group or collection of submaps, the update and versioning component 244 can create new submap versions for a collection or group of submaps at a time, so that vehicles can receive sets of new submaps 237 which are updated and versioned to be compatible with the vehicle (e.g., when the vehicle is also updated to process the submap) and with each other. The update and versioning component 244 can, for example, ensure that the new (and updated) submaps 237 can be stitched by the vehicles into routes that the respective vehicles can use to traverse a road network for a given geographic region, before such updated submaps are communicated to the vehicles.
Submap Data Aggregation
FIG. 3 illustrates a submap data aggregation that stores and links multiple versions of submaps, collectively representing linked roadway segments for a geographic region, according to one or more examples. In FIG. 3, a submap data aggregation 300 may be implemented as, for example, the collection of stored submaps 105 on an autonomous vehicle 10, as described with an example of FIG. 1. With further reference to an example of FIG. 3, the submap data aggregation 300 may logically structure submaps to include a submap definition 311, 312, as well as one or more data layers 340 which can provide information items such as the submap feature set 113 (see FIG. 1). Among other benefits, the submap data aggregation 300 enables the submap associated with a given road segment to be updated independently of updates to submaps for adjacent road segments. In one example, individual road segments of a road network can be represented by multiple versions of a same submap (e.g., as defined for a particular road segment), with each version including an update or variation that is not present with other versions of the same submap.
According to some examples, each submap definition 311, 312 can include an association or grouping of data sets (e.g., files in a folder, a table with rows and columns, etc.) which collectively correspond to a road segment. Each submap definition 311, 312 may also correlate to a submap version and a submap geographic coordinate set, such as to define the geometric boundary of the submap. The data sets that are associated or grouped to a particular submap may include a semantic label layer 303, a road surface layer 304, a perception layer 305 and a localization layer 306. The types of data layers which are recited as being included with individual submaps serve as examples only. Accordingly, variations to examples described may include more or fewer data layers, as well as data layers of alternative types.
The submap data aggregation 300 of a road segment may be processed by, for example, the submap processing component 120 contemporaneously with the vehicle 10 traversing a corresponding portion of the road segment. The various data layers of individual submaps are processed to facilitate, for example, the AV control system 400 in understanding the road segment and the surrounding area. According to examples, the localization layer 306 includes sensor data, such as imagelets (as captured by, for example, stereoscopic cameras of the vehicle 10) arranged in a three-dimensional point cloud, to represent the view of a given scene at any one of multiple possible positions within the submap. The localization layer 306 may thus include data items that are stored as raw or processed image data, in order to provide a point of comparison for the localization component 122 of the submap processing component 120.
With reference to an example of FIG. 1, the localization component 122 may use the localization layer 306 to perform localization, in order to determine a pinpoint or highly granular location, along with a pose of the vehicle at a given moment in time. According to some examples, the localization layer 306 may provide a three-dimensional point cloud of imagelets and/or surfels. Depending on the implementation, the imagelets or surfels may represent imagery captured through Lidar, stereoscopic cameras, a combination of two-dimensional cameras and depth sensors, or other three-dimensional sensing technology. In some examples, the localization component 122 can determine a precise location and pose for the vehicle by comparing image data, as provided from the current sensor state 493 of the vehicle, with the three-dimensional point cloud of the localization layer 306.
In some examples, the perception layer 305 can include image data, labels or other data sets which mark static or persistent objects. With reference to an example of FIG. 1, the perception component 124 may use the perception layer 305 to subtract objects identified through the perception layer 305 from a scene as depicted by the current sensor state 493 (see FIG. 1). In this way, the submap processing component 120 can use the perception layer 305 to detect dynamic objects. Among other operations, the vehicle 10 can use the perception layer 305 to detect dynamic objects for the purpose of avoiding collisions and/or planning trajectories within a road segment of the particular submap.
The road surface layer 304 can include, for example, sensor data representations and/or semantic labels that are descriptive of the road surface. The road surface layer 304 can identify, for example, the structure and orientation of the roadway, the lanes of the roadway, the presence of obstacles which may have previously been detected on the roadway, predictions of traffic flow patterns, trajectory recommendations and/or various other kinds of information.
The label layer 303 can identify semantic labels for the roadway and the area surrounding the road segment of the submap. This may include labels that identify actions needed for following signage or traffic flow.
The individual submaps 311, 312 may also include organization data 302, which can identify a hierarchy or dependency as between individual data layers of the submap. For example, in some examples, the localization layer 306 and the perception layer 305 may be dependent on the road surface layer 304, as the data provided by the respective localization and perception layers 306, 305 would be dependent on, for example, a condition of the road surface.
In an example of FIG. 3, the submap versions for a common road segment are distinguished through lettering (312A-312C). Each submap version may be distinguished from other versions by, for example, the processing logic to be used with the submap, the submap data structure (e.g., the interdependencies of the data layers), the format of the data layers, and/or the contents of the respective data layers. In some examples, each submap 311, 312 may utilize models, algorithms, or other logic (shown as model 315) in connection with processing data for each of the data layers. The logic utilized to process data layers within a submap may differ. Additionally, different logic may be utilized for processing data layers of the submap for different purposes. Accordingly, the data layers of the individual submaps may be formatted and/or structured so as to be optimized for specific logic.
According to some examples, the structure of the submap data aggregation 300 permits individual submaps to be updated independently of other submaps (e.g., submaps of adjacent road segments). For example, individual submaps for a geographic region can be updated selectively based on factors such as the occurrence of events which affect one submap over another. When submaps are updated, the submap in its entirety may be replaced by an updated submap. As an addition or variation, components of the submap (e.g., a data layer) can be modified or replaced independently of other components of the submap. The updating of the submap can change, for example, the information conveyed in one or more data layers (e.g., the perception layer 305 reflects a new building, the road surface layer 304 identifies road construction, etc.), the structure or format of the data layers (e.g., such as to accommodate new or updated logic of the vehicle 10 for processing the data layer), the organizational data (e.g., a submap may alter the perception layer 305 to be dependent on the localization layer 306), or the type and availability of data layers (e.g., more or fewer types of semantic labels for the label layer 303).
According to some examples, each submap includes an identifier that includes a versioning identifier ("versioning ID 325"). When a submap for a particular road segment is updated, a new version of the submap is created, and the identifier of the submap is changed to reflect the new version. In one implementation, the version identifier can be mapped to a record that identifies the specific component version and/or date of the update. In another implementation, the versioning ID 325 is encoded to reflect the mapping of the updated submap received for that version of the submap.
The data sets that are associated or grouped to a particular submap may also include a connector data set 308 (e.g., an edge) that links the particular version of the submap to a compatible version of the submap for the adjacent road segment. Each connector data set 308 may link versions of submaps of adjacent road segments using logic or encoding that identifies and matches compatible submap updates. In one implementation, the connector data sets 308 use the versioning ID 325 of the individual submaps to identify compatibility amongst adjacent submaps and versions thereof. The logic utilized in forming the connector data sets 308 can account for the type or nature of the update, such as the particular data layer or component that is updated with a particular submap version. In some examples, when the update to the submap affects, for example, an algorithm or model for determining the interdependent data sets, the versioning ID 325 can reflect compatibility with only those submaps that utilize the same algorithm or model 315. When the update to the submap affects the structure of a data layer such as the localization layer 306 or the perception layer 305, the versioning of the data layer may reflect, for example, that the specific submap version is compatible with multiple other submap versions which provide for the same data layer structures.
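As a non-authoritative illustration, the connector data sets could be held as a small graph of edges keyed by versioning IDs, so that stitching a route means walking only edges whose endpoints are declared compatible. The edge representation and all names below are assumptions made for this sketch.

```python
# Illustrative connector data sets as graph edges between specific submap
# versions of adjacent road segments. A route can only be stitched along
# edges that exist, i.e. along compatible versions. Names are assumptions.
connectors = {
    ("seg-0417", "v3"): [("seg-0418", "v3")],
    ("seg-0418", "v3"): [("seg-0419", "v3"), ("seg-0419", "v4")],
}

def stitch_route(connectors, start, segments):
    """Walk the ordered list of segment ids, following connector edges and
    returning the versioned submaps to load, or None if a gap in
    compatibility is found (e.g. an adjacent submap not yet updated)."""
    route = [start]
    current = start
    for next_segment in segments:
        candidates = [edge for edge in connectors.get(current, [])
                      if edge[0] == next_segment]
        if not candidates:
            return None            # incompatible versions; request an update
        current = candidates[0]
        route.append(current)
    return route

# Usage: a vehicle on seg-0417 v3 stitches a route through seg-0418 and seg-0419.
route = stitch_route(connectors, ("seg-0417", "v3"), ["seg-0418", "seg-0419"])
```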
With reference to an example of FIG. 1, the connector data sets 308 may cause the submap retrieval and processing components 110, 120 to retrieve and/or process a particular submap version, based on compatibility with the current submaps 125 that are processed. In this way, a vehicle that utilizes a particular submap version can ensure that the submap of the vehicle's next road segment is of a compatible version. Among other benefits, the use of connector data sets 308 enables updates (e.g., such as from the submap network service 200) to be generated for numerous vehicles, and then distributed on a roll-out basis, based on the opportunity and availability of individual vehicles to receive updates. The roll-out of the submaps can be performed so that vehicles, which may receive submap updates early or late in the process, can have a series of compatible submaps for use when traversing the road network of a given region.
System Description
FIG. 4 illustrates an example of a control system for an autonomous vehicle. In an example of FIG. 4, a control system 400 is used to autonomously operate a vehicle 10 in a given geographic region for a variety of purposes, including transport services (e.g., transport of humans, delivery services, etc.). In examples described, an autonomously driven vehicle can operate without human control. For example, in the context of automobiles, an autonomously driven vehicle can steer, accelerate, shift, brake and operate lighting components. Some variations also recognize that an autonomous-capable vehicle can be operated either autonomously or manually.
In one implementation, the AV control system 400 can utilize specific sensor resources in order to intelligently operate the vehicle 10 in most common driving situations. For example, the AV control system 400 can operate the vehicle 10 by autonomously steering, accelerating and braking the vehicle 10 as the vehicle progresses to a destination. The control system 400 can perform vehicle control actions (e.g., braking, steering, accelerating) and route planning using sensor information, as well as other inputs (e.g., transmissions from remote or local human operators, network communication from other vehicles, etc.).
In an example of FIG. 4, the AV control system 400 includes a computer or processing system which operates to process sensor data that is obtained on the vehicle with respect to a road segment that the vehicle is about to drive on. The sensor data can be used to determine actions which are to be performed by the vehicle 10 in order for the vehicle to continue on a route to a destination. In some variations, the AV control system 400 can include other functionality, such as wireless communication capabilities, to send and/or receive wireless communications with one or more remote sources. In controlling the vehicle, the AV control system 400 can issue instructions and data, shown as commands 85, which programmatically control various electromechanical interfaces of the vehicle 10. The commands 85 can serve to control operational aspects of the vehicle 10, including propulsion, braking, steering, and auxiliary behavior (e.g., turning lights on).
Examples recognize that urban driving environments present significant challenges to autonomous vehicles. In particular, the behavior of objects such as pedestrians, bicycles, and other vehicles can vary based on geographic region (e.g., country or city) and locality (e.g., location within a city). Additionally, examples recognize that the behavior of such objects can vary based on various other events, such as time of day, weather, local events (e.g., public event or gathering), season, and proximity of nearby features (e.g., crosswalk, building, traffic signal). Moreover, the manner in which other drivers respond to pedestrians, bicyclists and other vehicles varies by geographic region and locality.
Accordingly, examples provided herein recognize that the effectiveness of autonomous vehicles in urban settings can be constrained by the limited ability of autonomous vehicles to recognize and understand how to process or handle the numerous daily events of a congested environment.
The autonomous vehicle 10 can be equipped with multiple types of sensors 401, 403, 405, which combine to provide a computerized perception of the space and environment surrounding the vehicle 10. Likewise, the AV control system 400 can operate within the autonomous vehicle 10 to receive sensor data from the collection of sensors 401, 403, 405, and to control various electromechanical interfaces for operating the vehicle on roadways.
In more detail, the sensors 401, 403, 405 operate to collectively obtain a complete sensor view of the vehicle 10, and further to obtain information about what is near the vehicle, as well as what is near or in front of a path of travel for the vehicle. By way of example, the sensors 401, 403, 405 include multiple sets of camera sensors 401 (video camera, stereoscopic pairs of cameras or depth perception cameras, long range cameras), remote detection sensors 403 such as provided by radar or Lidar, proximity or touch sensors 405, and/or sonar sensors (not shown).
Each of the sensors 401, 403, 405 can communicate with, or utilize, a corresponding sensor interface 410, 412, 414. Each of the sensor interfaces 410, 412, 414 can include, for example, hardware and/or other logical component which is coupled or otherwise provided with the respective sensor. For example, the sensors 401, 403, 405 can include a video camera and/or stereoscopic camera set which continually generates image data of an environment of the vehicle 10. As an addition or alternative, the sensor interfaces 410, 412, 414 can include a dedicated processing resource, such as provided with a field programmable gate array ("FPGA") which receives and/or processes raw image data from the camera sensor.
In some examples, the sensor interfaces 410, 412, 414 can include logic, such as provided with hardware and/or programming, to process sensor data 99 from a respective sensor 401, 403, 405. The processed sensor data 99 can be outputted as sensor data 411. As an addition or variation, the AV control system 400 can also include logic for processing raw or pre-processed sensor data 99.
According to one implementation, the vehicle interface subsystem 90 can include or control multiple interfaces to control mechanisms of the vehicle 10. The vehicle interface subsystem 90 can include a propulsion interface 92 to electrically (or through programming) control a propulsion component (e.g., a gas pedal), a steering interface 94 for a steering mechanism, a braking interface 96 for a braking component, and a lighting/auxiliary interface 98 for exterior lights of the vehicle. The vehicle interface subsystem 90 and/or control system 400 can include one or more controllers 84 which receive one or more commands 85 from the AV control system 400. The commands 85 can include route information 87 and one or more operational parameters 89 which specify an operational state of the vehicle (e.g., desired speed and pose, acceleration, etc.).
The controller(s) 84 generate control signals 419 in response to receiving the commands 85 for one or more of the vehicle interfaces 92, 94, 96, 98. The controllers 84 use the commands 85 as input to control propulsion, steering, braking and/or other vehicle behavior while the autonomous vehicle 10 follows a route. Thus, while the vehicle 10 may follow a route, the controller(s) 84 can continuously adjust and alter the movement of the vehicle in response to receiving a corresponding set of commands 85 from the AV control system 400. Absent events or conditions which affect the confidence of the vehicle in safely progressing on the route, the AV control system 400 can generate additional commands 85 from which the controller(s) 84 can generate various vehicle control signals 419 for the different interfaces of the vehicle interface subsystem 90.
According to examples, the commands 85 can specify actions that are to be performed by the vehicle 10. The actions can correlate to one or multiple vehicle control mechanisms (e.g., steering mechanism, brakes, etc.). The commands 85 can specify the actions, along with attributes such as magnitude, duration, directionality or other operational characteristics of the vehicle 10. By way of example, the commands 85 generated from the AV control system 400 can specify a relative location of a road segment which the autonomous vehicle 10 is to occupy while in motion (e.g., change lanes, move to a center divider or towards a shoulder, turn the vehicle, etc.). As other examples, the commands 85 can specify a speed, a change in acceleration (or deceleration) from braking or accelerating, a turning action, or a state change of exterior lighting or other components. The controllers 84 translate the commands 85 into control signals 419 for a corresponding interface of the vehicle interface subsystem 90. The control signals 419 can take the form of electrical signals which correlate to the specified vehicle action by virtue of electrical characteristics that have attributes for magnitude, duration, frequency or pulse, or other electrical characteristics.
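For purposes of illustration only, the following sketch shows how a command carrying an action and operational attributes might be translated into an interface-level control signal. The names (Command, to_control_signal) and the normalized magnitude convention are assumptions made for the example and are not taken from the disclosure.

# Minimal sketch (hypothetical names): a command with action attributes,
# translated by a controller into a per-interface control signal.
from dataclasses import dataclass


@dataclass
class Command:
    action: str          # e.g., "brake", "steer", "accelerate"
    magnitude: float     # normalized 0..1 (signed for steering)
    duration_s: float    # how long the action should be applied


def to_control_signal(command: Command) -> dict:
    """Map a command onto the electromechanical interface it addresses."""
    interface = {"brake": "braking_interface",
                 "steer": "steering_interface",
                 "accelerate": "propulsion_interface"}[command.action]
    return {"interface": interface,
            "value": command.magnitude,
            "duration_s": command.duration_s}


print(to_control_signal(Command("brake", 0.4, 1.5)))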
In an example of FIG. 4, the AV control system 400 includes SIPS 100, including localization component 122 and perception component 124. The AV control system 400 may also include route planner 422, motion planning component 424, event logic 474, prediction engine 426, and a vehicle control interface 428. The vehicle control interface 428 represents logic that communicates with the vehicle interface subsystem 90, in order to issue commands 85 that control the vehicle with respect to, for example, steering, lateral and forward/backward acceleration and other parameters. The vehicle control interface 428 may issue the commands 85 in response to determinations of various logical components of the AV control system 400.
In an example of FIG. 4, the SIPS 100 is shown as a sub-component of the AV control system 400. In variations, the components and functionality of the SIPS 100 can be distributed with other components in the vehicle. The SIPS 100 can utilize a current sensor state 93 of the vehicle 10, as provided by sensor data 411. The current sensor state 93 can include raw or processed sensor data obtained from Lidar, stereoscopic imagery, and/or depth sensors. As described with an example of FIG. 1, the SIPS 100 may provide localization output 121 (including localization coordinate 117 and pose 119, as shown with an example of FIG. 1) to one or more components of the AV control system 400. The localization output 121 can correspond to, for example, a position of the vehicle within a road segment. The localization output 121 can be specific in terms of identifying, for example, any one or more of a driving lane that the vehicle 10 is using, the vehicle's distance from an edge of the road, the vehicle's distance from the edge of the driving lane, and/or a distance of travel from a point of reference for the particular submap. In some examples, the localization output 121 can determine the relative location of the vehicle 10 within a road segment, as represented by a submap, to within less than a foot, or to less than a half foot.
Additionally, the SIPS 100 may signal perception output 129 to one or more components of the AV control system 400. The perception output 129 may utilize, for example, the perception layer 305 (see FIG. 3) to subtract objects which are deemed to be persistent from the current sensor state 93 of the vehicle. Objects which are identified through the perception component 124 can be perceived as being static or dynamic, with static objects referring to objects which are persistent or permanent in the particular geographic region. The perception component 124 may, for example, generate perception output 129 that is based on sensor data 411 which excludes predetermined static objects. The perception output 129 can correspond to interpreted sensor data, such as (i) image, sonar or other electronic sensory-based renderings of the environment, (ii) detection and classification of dynamic objects in the environment, and/or (iii) state information associated with individual objects (e.g., whether the object is moving, pose of the object, direction of the object). The perception component 124 can interpret the sensor data 411 for a given sensor horizon. In some examples, the perception component 124 can be centralized, such as residing with a processor or combination of processors in a central portion of the vehicle. In other examples, the perception component 124 can be distributed, such as onto one or more of the sensor interfaces 410, 412, 414, such that the outputted sensor data 411 can include perceptions.
The motion planning component 424 can process input, which includes the localization output 121 and the perception output 129, in order to determine a response trajectory 425 which the vehicle may take to avoid a potential hazard. The motion planning component 424 includes logic to determine one or more trajectories, or potential trajectories, of moving objects in the environment of the vehicle. When dynamic objects are detected, the motion planning component 424 determines a response trajectory 425, which can be directed to avoiding a collision with a moving object. In some examples, the response trajectory 425 can specify an adjustment to the vehicle's speed (e.g., vehicle in front slowing down) or to the vehicle's path (e.g., swerve or change lanes in response to a bicyclist). The response trajectory 425 can be received by the vehicle control interface 428 in advancing the vehicle forward. In some examples, the motion planning component 424 associates a confidence value with the response trajectory 425, and the vehicle control interface 428 may implement the response trajectory 425 based on the associated confidence value. As described below, the motion planning component 424 may also characterize a potential event (e.g., by type, severity), and/or determine the likelihood that a collision or other event may occur unless a response trajectory 425 is implemented.
In some examples, the motion planning component 424 may include a prediction engine 426 to determine one or more types of predictions, which the motion planning component 424 can utilize in determining the response trajectory 425. In some examples, the prediction engine 426 may determine a likelihood that a detected dynamic object will collide with the vehicle, absent the vehicle implementing a response trajectory to avoid the collision. As another example, the prediction engine 426 can identify potential points of interference or collision by unseen or occluded objects on a portion of the road segment in front of the vehicle. The prediction engine 426 may also be used to determine a likelihood as to whether a detected dynamic object can collide or interfere with the vehicle 10.
In some examples, the motion planning component 424 includes event logic 474 to detect conditions or events, such as may be caused by weather, or by conditions or objects other than moving objects (e.g., potholes, debris, road surface hazards, traffic, etc.). The event logic 474 can use the vehicle's sensor state 93, localization output 121, perception output 129 and/or third-party information to detect such conditions and events. Thus, the event logic 474 detects events which, if perceived correctly, may in fact require some form of evasive action or planning. In some examples, the response trajectory 425 may include input for the vehicle to determine a new vehicle trajectory 479, or to adjust an existing vehicle trajectory 479, either to avoid or mitigate a potential hazard. By way of example, the vehicle response trajectory 425 can cause the vehicle control interface 428 to implement a slight or sharp vehicle avoidance maneuver, using a steering control mechanism and/or braking component.
The route planner 422 can determine a route 421 for a vehicle to use on a trip. In determining the route 421, the route planner 422 can utilize a map database, such as provided over a network through a map service 399. Based on input such as destination and current location (e.g., such as provided through a satellite navigation component), the route planner 422 can select one or more route segments that collectively form a path of travel for the autonomous vehicle 10 when the vehicle is on a trip. In one implementation, the route planner 422 can determine route input 473 (e.g., route segments) for a planned route 421, which in turn can be communicated to the vehicle control interface 428.
In an example of FIG. 4, the vehicle control interface 428 includes components to operate the vehicle on a selected route 421, and to maneuver the vehicle based on events which occur in the vehicle's relevant surroundings. The vehicle control interface 428 can include a route following component 467 to receive a route input 473 that corresponds to the selected route 421. Based at least in part on the route input 473, the route following component 467 can determine a route trajectory 475 that corresponds to a segment of the selected route 421. A trajectory following component 469 can determine or select the vehicle's trajectory 479 based on the route trajectory 475 and input from the motion planning component 424 (e.g., the response trajectory 425 when an event is detected). In a scenario where the vehicle is driving autonomously without other vehicles or objects, the route trajectory 475 may form a sole or primary basis of the vehicle trajectory 479. When dynamic objects or events are detected for avoidance planning by the motion planning component 424, the trajectory following component 469 can select or determine an alternative trajectory based on the response trajectory 425. For example, the response trajectory 425 can provide an alternative to the route trajectory 475 for a short duration of time, until an event is avoided. The selection and/or use of the response trajectory 425 (or response trajectories) can be based on the confidence, severity and/or type of object or event detected by the motion planning component 424. Additionally, the selection and/or use of the response trajectory 425 can be weighted based on the confidence value associated with the determinations, as well as the severity, type, and/or likelihood of occurrence. The vehicle control interface 428 can include a command interface 488, which uses the vehicle trajectory 479 to generate the commands 85 as output to control components of the vehicle 10. The commands can further implement driving rules and actions based on various contexts and inputs.
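As a non-limiting illustration of the trajectory selection just described, the following sketch chooses between a route trajectory and a response trajectory using a confidence value and a severity value. The function name and the threshold values are hypothetical and are not specified by the disclosure.

# Minimal sketch (hypothetical names and thresholds): choosing between the
# route trajectory and a response trajectory based on confidence and severity.
def select_trajectory(route_trajectory, response_trajectory=None,
                      confidence=0.0, severity=0.0,
                      confidence_threshold=0.6, severity_threshold=0.3):
    """Follow the route trajectory unless a sufficiently confident and
    sufficiently severe detection warrants the response trajectory."""
    if (response_trajectory is not None
            and confidence >= confidence_threshold
            and severity >= severity_threshold):
        return response_trajectory
    return route_trajectory


# Usage: a low-confidence detection keeps the vehicle on its route trajectory.
print(select_trajectory("route-475", "swerve-left", confidence=0.4, severity=0.9))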
Hardware Diagrams
FIG. 5 is a block diagram of a vehicle system on which an autonomous vehicle control system may be implemented. According to some examples, a vehicle system 500 can be implemented using a set of processors 504, memory resources 506, multiple sensor interfaces 522, 528 (or interfaces for sensors) and location-aware hardware such as shown by a satellite navigation component 524. In an example shown, the vehicle system 500 can be distributed spatially into various regions of a vehicle. For example, a processor bank 504 with accompanying memory resources 506 can be provided in a vehicle trunk. The various processing resources of the vehicle system 500 can also include distributed sensor processing components 534, which can be implemented using microprocessors or integrated circuits. In some examples, the distributed sensor logic 534 can be implemented using field-programmable gate arrays (FPGAs).
In an example of FIG. 5, the vehicle system 500 further includes multiple communication interfaces, including a real-time communication interface 518 and an asynchronous communication interface 538. The various communication interfaces 518, 538 can send and receive communications to other vehicles, central services, human assistance operators, or other remote entities for a variety of purposes. In the context of FIG. 1 and FIG. 4, for example, the SIPS 100 and the AV control system 400 can be implemented using the vehicle system 500, as with an example of FIG. 5. In one implementation, the real-time communication interface 518 can be optimized to communicate information instantly, in real time, to remote entities (e.g., human assistance operators). Accordingly, the real-time communication interface 518 can include hardware to enable multiple communication links, as well as logic to enable priority selection.
The vehicle system 500 can also include a local communication interface 526 (or series of local links) to vehicle interfaces and other resources of the vehicle 10. In one implementation, the local communication interface 526 provides a data bus or other local link to electro-mechanical interfaces of the vehicle, such as used to operate steering, acceleration and braking, as well as to data resources of the vehicle (e.g., vehicle processor, OBD memory, etc.). The local communication interface 526 may be used to signal commands 535 to the electro-mechanical interfaces in order to control operation of the vehicle.
The memory resources 506 can include, for example, main memory, a read-only memory (ROM), storage device, and cache resources. The main memory of the memory resources 506 can include random access memory (RAM) or other dynamic storage device, for storing information and instructions which are executable by the processors 504.
The processors 504 can execute instructions for processing information stored with the main memory of the memory resources 506. The main memory can also store temporary variables or other intermediate information which can be used during execution of instructions by one or more of the processors 504. The memory resources 506 can also include ROM or other static storage device for storing static information and instructions for one or more of the processors 504. The memory resources 506 can also include other forms of memory devices and components, such as a magnetic disk or optical disk, for purpose of storing information and instructions for use by one or more of the processors 504.
One or more of the communication interfaces 518 can enable the autonomous vehicle to communicate with one or more networks (e.g., cellular network) through use of a network link 519, which can be wireless or wired. The vehicle system 500 can establish and use multiple network links 519 at the same time. Using the network link 519, the vehicle system 500 can communicate with one or more remote entities, such as network services or human operators. According to some examples, the vehicle system 500 stores submaps 505, as well as submap control system instructions 507 for implementing the SIPS 100 (see FIG. 1). The vehicle system 500 may also store AV control system instructions 509 for implementing the AV control system 400 (see FIG. 4). During runtime (e.g., when the vehicle is operational), one or more of the processors 504 execute the submap processing instructions 507, including the prediction engine instructions 515, in order to implement functionality such as described with an example of FIG. 1.
In operating the autonomous vehicle 10, the one or more processors 504 can execute the AV control system instructions 509 to operate the vehicle. Among other control operations, the one or more processors 504 may access data from a road network 525 in order to determine a route, immediate path forward, and information about a road segment that is to be traversed by the vehicle. The road network can be stored in the memory 506 of the vehicle and/or received responsively from an external source using one of the communication interfaces 518, 538. For example, the memory 506 can store a database of roadway information for future use, and the asynchronous communication interface 538 can repeatedly receive data to update the database (e.g., after another vehicle does a run through a road segment).
FIG. 6 is a block diagram of a network service or computer system on which some embodiments may be implemented. According to some examples, a computer system 600 may be used to implement a submap service or other remote computer system, such as shown with an example of FIG. 2.
In one implementation, the computer system 600 includes processing resources, such as one or more processors 610, a main memory 620, a read-only memory (ROM) 630, a storage device 640, and a communication interface 650. The computer system 600 includes at least one processor 610 for processing information and the main memory 620, such as a random access memory (RAM) or other dynamic storage device, for storing information and instructions to be executed by the processor 610. The main memory 620 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor 610. The computer system 600 may also include the ROM 630 or other static storage device for storing static information and instructions for the processor 610. The storage device 640, such as a magnetic disk or optical disk, is provided for storing information and instructions. For example, the storage device 640 can correspond to a computer-readable medium that stores instructions for maintaining and distributing submaps to vehicles, such as described with examples of FIG. 2 and FIG. 8. In such examples, the computer system 600 can store a database of submaps 605 for a geographic region, with each submap being structured in accordance with one or more examples described herein. The memory 620 may also store instructions for managing and distributing submaps ("submap instructions 615"). For a given geographic region, individual submaps 605 may represent road segments and their surrounding area. The processor 610 may execute the submap instructions 615 in order to perform any of the methods such as described with FIG. 8.
The communication interface 650 can enable the computer system 600 to communicate with one or more networks 680 (e.g., cellular network) through use of the network link (wirelessly or using a wire). Using the network link, the computer system 600 can communicate with a plurality of user-vehicles, using, for example, wireless network interfaces which may be resident on the individual vehicles.
The computer system 600 can also include a display device 660, such as a cathode ray tube (CRT), an LCD monitor, or a television set, for example, for displaying graphics and information to a user. An input mechanism 670, such as a keyboard that includes alphanumeric keys and other keys, can be coupled to the computer system 600 for communicating information and command selections to the processor 610. Other non-limiting, illustrative examples of the input mechanisms 670 include a mouse, a trackball, a touch-sensitive screen, or cursor direction keys for communicating direction information and command selections to the processor 610 and for controlling cursor movement on the display 660.
Some of the examples described herein are related to the use of the computer system 600 for implementing the techniques described herein. According to one example, those techniques are performed by the computer system 600 in response to the processor 610 executing one or more sequences of one or more instructions contained in the main memory 620. Such instructions may be read into the main memory 620 from another machine-readable medium, such as the storage device 640. Execution of the sequences of instructions contained in the main memory 620 causes the processor 610 to perform the process steps described herein. In alternative implementations, hard-wired circuitry may be used in place of or in combination with software instructions to implement examples described herein. Thus, the examples described are not limited to any specific combination of hardware circuitry and software.
Methodology
FIG. 7 illustrates an example method for operating a vehicle using a submap system, according to one or more examples. According to examples, the method such as described with FIG. 7 may be implemented using components such as described with FIGS. 1, 4 and 5. Accordingly, in describing an example of FIG. 7, reference may be made to elements of prior examples in order to illustrate a suitable component for performing a step or sub-step being described.
In one implementation, the autonomous vehicle retrieves a series of submaps (or one or more submaps) from a collection of submaps that are stored in memory (710). The series of submaps may be retrieved for use in controlling the vehicle 10 on a trip. As described with other examples, each submap may represent an area of a road network on which the vehicle is expected to travel. According to some examples, the individual submaps of the collection can each include (i) an identifier from the collection, (ii) multiple data layers, with each data layer representing a feature set of the area of the road network of that submap, and (iii) a connector data set to link the submap with another submap that represents an adjacent area to the area of the road network of that submap. By way of example, the retrieval operation may be performed in connection with the vehicle initiating a trip. In such an example, the retrieval operation is performed to obtain submaps for the vehicle 10 prior to the vehicle progressing on the trip to the point where a submap is needed. In variations, the vehicle 10 may retrieve submaps in anticipation of use at a future interval.
In some examples, the retrieval operation is local (712). For example, the submap retrieval process 110 may retrieve submaps from a collection of submaps 105 that are stored with memory resources of the vehicle. In variations, the submap retrieval process 110 may retrieve submaps from a remote source (714), such as the submap network service 200, or another vehicle. For example, the vehicle 10 may communicate wirelessly (e.g., using a cellular channel) with the submap network service 200, or with another vehicle 10 which may have submaps to share.
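As a non-limiting illustration of local retrieval with a remote fallback, the following sketch fetches submaps for upcoming segments from a local store first, and only contacts a remote service when a segment is missing. The function and class names (retrieve_submaps, StubSubmapService) are hypothetical placeholders for the interfaces described above.

# Minimal sketch (hypothetical interfaces): retrieve submaps for the segments
# ahead from local storage first, falling back to a remote submap service.
def retrieve_submaps(segment_ids, local_store, network_service):
    """Return one submap per requested segment, preferring local copies."""
    retrieved = []
    for segment_id in segment_ids:
        submap = local_store.get(segment_id)            # local collection of submaps
        if submap is None:
            submap = network_service.fetch(segment_id)  # e.g., over a cellular link
            local_store[segment_id] = submap            # cache for later trips
        retrieved.append(submap)
    return retrieved


class StubSubmapService:
    """Stand-in for a remote submap network service."""
    def fetch(self, segment_id):
        return {"segment_id": segment_id, "layers": ["localization", "perception"]}


local = {}  # local collection of submaps, keyed by segment id
print(retrieve_submaps(["seg-12", "seg-13"], local, StubSubmapService()))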
The vehicle 10 can be controlled in its operations during the trip using the retrieved submaps (720). For example, the submap processing component 120 of the SIPS 100 can extract data from the various data layers of each submap, for use as input to the AV control system 400. The AV control system 400 can, for example, utilize the submap to navigate, plan trajectories, determine and classify dynamic objects, determine response actions, and perform other operations involved in driving across a segment of a road network that corresponds to a submap.
FIG. 8 illustrates an example method for distributing mapping information to vehicles of a geographic region for use in autonomous driving, according to one or more examples. A method such as described with FIG. 8 may be implemented using components such as described with FIG. 2 and FIG. 6. Accordingly, in describing an example of FIG. 8, reference may be made to elements of prior examples in order to illustrate a suitable component for performing a step or sub-step being described.
With reference to an example of FIG. 8, the submap network service 200 may maintain a series of submaps which are part of a submap database (810). In one implementation, the submap network service 200 may utilize servers and network resources to maintain a library of submaps for a geographic region.
In some variations, the submap network service 200 can update submaps individually and independently of other submaps (820). When such updates occur, the submap network service 200 can distribute updated submaps to a population of vehicles in the pertinent geographic region. Each vehicle may then receive or store an updated set of submaps. The ability to update and distribute submaps individually, apart from a larger map of the geographic region, facilitates the submap network service 200 in efficiently and rapidly managing the submap library, and the collections of submaps which can be repeatedly communicated to user-vehicles of the pertinent geographic region.
As described with examples of FIG. 1 and FIG. 3, submaps may be versioned to facilitate partial distribution to vehicles of a population (822). Versioning submaps facilitates the submap network service 200 in progressively implementing global updates to the submaps of a geographic region. Additionally, versioning submaps enables user-vehicles to operate utilizing submaps that differ by content, structure, data types, or processing algorithm.
As an addition or alternative, the submap network service 200 may distribute a series of submaps to multiple vehicles of the geographic region (830). According to some examples, the distribution of submaps may be done progressively, to vehicles individually or in small sets, rather than to all vehicles that are to receive the submaps at one time. The versioning of submaps may also facilitate the distribution of new submaps, by, for example, ensuring that vehicles which receive new submaps early or late in the update process can continue to operate using compatible submap versions.
FIG. 9 illustrates an example method for providing guidance to autonomous vehicles. A method such as described with an example of FIG. 9 may be implemented by, for example, a network computer system, such as described with the submap network service 200 (see FIG. 2), in connection with information provided by autonomous vehicles, such as described with FIG. 1 and FIG. 4. Accordingly, reference may be made to elements of other examples for purpose of illustrating a suitable component or element for performing a step or sub-step being described.
According to an example, sensor information is obtained from multiple instances of at least one autonomous vehicle driving through or past a road segment which undergoes an event or condition that affects traffic or driving behavior (910). The autonomous vehicle 10 may, for example, be in traffic to encounter the causal condition or event. Alternatively, the autonomous vehicle 10 may capture sensor data of other vehicles that are encountering the condition or event (e.g., the autonomous vehicle may travel in an alternative direction). The sensor information may correspond to image data (e.g., two-dimensional image data, three-dimensional image data, Lidar, radar, sonar, etc.). The sensor information may be received and processed by, for example, the submap network service 200.
The submap network service may use the sensor information to identify a deviation from a normal or permitted driving behavior amongst a plurality of vehicles that utilize the road segment (920). The deviation may be identified as an aggregation of incidents, where, for example, the driving behavior of vehicles in the population deviates from a normal or permitted behavior. The past incidents can be analyzed through, for example, statistical analysis (e.g., development of histograms), so that future occurrences of the deviations may be predicted. By way of example, a deviation may correspond to an ad-hoc formation of a turn lane. In some cases, the deviation may be a technical violation of law or driving rules for the geographic region. For example, the deviation may correspond to a turn restriction that is otherwise permissible, but not feasible to perform given the driving behavior of other vehicles. As another example, the deviation may correspond to a reduction in speed as a result of traffic formation, such as by other vehicles anticipating traffic. As another example, the deviation may include the formation of a vehicle stopping space that other vehicles utilize, but which is otherwise impermissible. Alternatively, the deviation may include elimination of a vehicle parking space that is permissible, but not feasible to access given the driving behavior of other vehicles.
In order to identify the deviation, the submap data analysis 230 may extract vehicle sensor data 243 transmitted from a vehicle, then plot the localization position of the vehicle to determine when and where the autonomous vehicle 10 occupied a lane that crossed a midline of the road, or a shoulder on the side of the road. As an alternative or variation, the submap data analysis 230 may perform image (or other sensor data) analysis to identify, for example, vehicles standing still or conglomerating in places of the road network to block access for turning or parking spots.
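For purposes of illustration only, the following sketch aggregates lateral-offset observations from many traversals into a simple histogram to flag hours of the day in which a lane-boundary crossing recurs often enough to be treated as a pattern rather than a one-off incident. The data format, threshold values and function name are assumptions made for the example.

# Minimal sketch (hypothetical data): aggregating localization traces to flag
# a recurring deviation, e.g., vehicles repeatedly crossing a lane boundary.
from collections import Counter


def recurring_deviation_hours(observations, offset_limit_m=1.8, min_count=5):
    """observations: (hour_of_day, lateral_offset_m) pairs from many traversals.
    Returns hours in which the offset limit is exceeded often enough to be
    treated as a pattern rather than an isolated incident."""
    histogram = Counter(hour for hour, offset in observations
                        if abs(offset) > offset_limit_m)
    return sorted(hour for hour, count in histogram.items() if count >= min_count)


samples = [(8, 2.1)] * 6 + [(8, 0.3)] * 4 + [(14, 2.2)] * 2
print(recurring_deviation_hours(samples))  # [8] -> deviation patterned at 8 a.m.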
In some examples, the submap network service 200 determines the deviation as being a pattern behavior (922). The pattern behavior may be temporal, such as recurring on specific days of the week or at specific times.
In variations, the behavior may be responsive to certain events or conditions (924). For example, snowfall or rain may be observed to cause vehicles on a road segment to drive towards a center divider.
The instruction set for one or more autonomous vehicles may be updated to enable or facilitate the autonomous vehicles to implement the deviation (930). In some examples, the instructions may be updated by inclusion of parameters, sensor data or other information in the submap that encompasses the location of the deviation. As an addition or variation, the instructions may be updated by relaxing driving rules for the autonomous vehicle 10 to permit driving behavior which would otherwise be considered impermissible or constrained.
In some examples, the submap may include parameters or instructions to indicate when the deviation is anticipated, or when alternative driving behavior to account for the deviation is permissible (932). For example, the deviation may be patterned in time, and the submap for the vehicle may weigh against the vehicle implementing the deviation unless within time slots when the driving behavior of other vehicles warrants the deviation.
FIG. 10 illustrates an example sensor processing sub-system for an autonomous vehicle, according to one or more embodiments. According to some examples, a sensor processing sub-system 1000 may be implemented as part of an AV control system 400 (see FIG. 4). In some examples, the sensor processing subsystem 1000 can be implemented as the submap information processing system 100 (e.g., see FIGS. 1 and 4). In variations, the sensor processing subsystem 1000 may be implemented independently of submaps.
According to an example of FIG. 10, the sensor processing subsystem 1000 includes a localization component 1024, a perception component 1022, and image processing logic 1038. The localization component 1024 and/or the perception component 1022 may each utilize the image processing logic 1038 in order to determine a respective output. In particular, the localization component 1024 and the perception component 1022 may each compare current sensor state data 493, including current image data 1043 captured by onboard cameras of the vehicle, with prior sensor state data 1029. The current image data 1043 may include, for example, passive image data, such as provided with depth images generated from pairs of stereoscopic cameras and two-dimensional images (e.g., long range cameras). The current image data 1043 may also include active image data, such as generated by Lidar, sonar, or radar.
In some examples such as described with FIG. 1 through FIG. 3, the prior sensor state 1029 may be provided through use of submaps, which may carry or otherwise provide data layers corresponding to specific types of known sensor data sets (or features) for a given area of a road segment. In variations, the prior sensor state 1029 may be stored and/or accessed in another form or data structure, such as in connection with latitude and longitude coordinates provided by a satellite navigation component.
The localization component 1024 may determine a localization output 1021 based on comparison of the current sensor state 493 and the prior sensor state 1029. The localization output 1021 may include a location coordinate 1017 and a pose 1019. In some examples, the location coordinate 1017 may be with respect to a particular submap which the vehicle 10 is deemed to be located in (e.g., such as when the vehicle 10 is on a trip). The pose 1019 may also be with respect to a predefined orientation, such as the direction of the road segment.
According to some examples, the localization component 1024 determines the localization output 1021 using the prior sensor state 1029 of an area around the vehicle. The prior sensor state 1029 may be distributed as elements that reflect a sensor field of view about a specific location where the sensor data was previously obtained. When distributed about the sensor field of view, the sensor information provided by the prior sensor state 1029 can be said to be in the form of a point cloud 1035. In some examples, the point cloud 1035 of prior sensor information may be substantially two-dimensional, spanning radially (e.g., 180 degrees, 360 degrees) about, for example, a reference location that is in front of the vehicle. In variations, the point cloud 1035 of prior sensor information may be three-dimensional, occupying a space in front and/or along the sides of the vehicle. Still further, in other variations, the point cloud 1035 of prior sensor information may extend in two or three dimensions behind the vehicle. For example, the prior sensor state 1029 may be provided as part of a submap that includes a layer of imagelets arranged in a point cloud. The imagelets may include, for example, passive image data sets, or image sets collected from a Lidar component of the vehicle. The individual imagelets of the prior sensor state 1029 may each be associated with a precise coordinate or position, corresponding to a location of the sensor devices that captured the prior sensor information. In some variations, the imagelets of the prior sensor state 1029 may also reference a pose, reflecting an orientation of the sensor device that captured the prior data. In some variations, the imagelets may also be associated with labels, such as semantic labels which identify a type or nature of an object depicted in the imagelet, or a classification (e.g., the imagelet depicts a static object). Still further, the imagelets may be associated with a priority or weight, reflecting, for example, a reliability or effectiveness of the imagelet for purpose of determining the localization output 1021.
In some variations, the prior sensor state 1029 may include multiple point clouds 1035 for different known and successive locations of a road segment, such as provided by submaps. For example, the prior sensor state 1029 may include a point cloud 1035 of prior sensor information for successive locations of capture along a roadway segment, where the successive locations are an incremental distance (e.g., 1 meter) apart.
As an addition or alternative, the prior sensor state 1029 may include multiple different point clouds 1035 that reference a common location of capture, with variations amongst the point clouds 1035 accounting for different lighting conditions (e.g., lighting conditions such as provided by weather, time of day, season). In such examples, the localization component 1024 may include point cloud selection logic 1032 to select the point cloud 1035 of prior sensor information to reflect a best match with a current lighting condition, so that a subsequent comparison between the prior sensor state 1029 and the current sensor state 493 does not result in inaccuracies resulting from differences in lighting condition. For example, with passive image data, the variations amongst the point clouds 1035 of prior sensor information may account for lighting variations resulting from time of day, weather, or season.
In some examples, the point cloud selection logic 1032 may select the appropriate point cloud 1035 of prior sensor state information based on an approximate location of the vehicle 10. For example, when the vehicle 10 initiates a trip, the point cloud selection logic 1032 may select a point cloud 1035 of prior sensor information based on a last known location of the vehicle. Alternatively, the point cloud selection logic 1032 may select the point cloud 1035 of prior sensor information based on an approximate location as determined by a satellite navigation component, or through tracking of velocity and time (and optionally acceleration). As an addition or alternative, the point cloud selection logic 1032 may select the appropriate point cloud based on contextual information, which identifies or correlates to lighting conditions. Depending on implementation, the selected point cloud 1035 may carry fewer than 5 imagelets, fewer than 10 imagelets, or tens or hundreds of imagelets.
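As a non-limiting illustration of the selection logic described above, the following sketch scores candidate point clouds by distance from the vehicle's approximate location and penalizes a lighting mismatch. The field names and the penalty value are assumptions made for the example and are not drawn from the figures.

# Minimal sketch (hypothetical fields): selecting the point cloud of prior
# sensor information nearest the vehicle's approximate location, preferring
# one captured under a matching lighting condition.
import math


def select_point_cloud(point_clouds, approx_xy, lighting):
    """point_clouds: dicts with 'capture_xy' and 'lighting' keys.
    Nearest capture location wins; a lighting mismatch adds a penalty."""
    def score(cloud):
        dx = cloud["capture_xy"][0] - approx_xy[0]
        dy = cloud["capture_xy"][1] - approx_xy[1]
        penalty = 0.0 if cloud["lighting"] == lighting else 50.0  # assumed penalty
        return math.hypot(dx, dy) + penalty
    return min(point_clouds, key=score)


clouds = [{"capture_xy": (0, 0), "lighting": "day"},
          {"capture_xy": (1, 1), "lighting": "night"}]
print(select_point_cloud(clouds, approx_xy=(0.5, 0.5), lighting="night"))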
The localization component 1024 may compare the current image data 1043 of the current sensor state 493 with that of the selected point cloud 1035 in order to determine the localization output 1021. In performing the comparison, the localization component 1024 may generate, or otherwise create, a fan or field of view for the current sensor information, reflecting the scene as viewed from the vehicle at a particular location. The localization component 1024 may utilize the image processing logic 1038 to perform image analysis to match features of the scene with imagelets of the selected point cloud 1035. The localization component 1024 may use the image processing logic 1038 to match portions of the current image data 1043 with individual imagelets of the selected point cloud 1035. In some examples, multiple matching imagelets are determined for purpose of determining the localization output 1021. For example, in some implementations, 3 to 5 matching imagelets are identified and then used to determine the localization output 1021. In variations, more than 10 matching imagelets are identified and used for determining the localization output 1021.
In some examples, the localization component 1024 may also include point selection logic 1034 to select imagelets from the selected point cloud 1035 of prior sensor information as a basis of comparison with respect to the current image data 1043. The point selection logic 1034 can operate to reduce and/or optimize the number of points (e.g., imagelets) which are used with each selected point cloud 1035 of prior sensor information, thereby reducing a number of image comparisons that are performed by the localization component 1024 when determining the localization output 1021. In one implementation, the point selection logic 1034 implements point selection rules 1055. The point selection rules 1055 can be based on contextual information, such as the weather, time of day, or season. The point selection rules 1055 can also be specific to the type of sensor data. For passive image data, the point selection rules 1055 can exclude or de-prioritize imagelets which depict non-vertical surfaces, such as rooflines or horizontal surfaces, since precipitation, snow and debris can affect the appearance of such surfaces. Under the same weather conditions, the point selection rules 1055 can also prioritize imagelets which depict vertical surfaces, such as walls or signs, as such surfaces tend to preclude accumulation of snow or debris. Still further, the point selection rules 1055 can exclude imagelets or portions thereof which depict white when weather conditions include the presence of snow.
Likewise, when Lidar is used, the point selection rules 1055 may select surfaces that minimize the effects of rain or snow, such as vertical surfaces. Additionally, the point selection rules 1055 may avoid or under-weight surfaces which may be deemed reflective.
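By way of a non-limiting illustration of such point selection rules, the following sketch ranks imagelets so that vertical surfaces are preferred in rain or snow and predominantly white imagelets are de-prioritized in snow. The labels, flags, and weights are hypothetical and chosen only to show the shape of the rules described above.

# Minimal sketch (hypothetical labels): point selection rules that de-prioritize
# imagelets likely to be unreliable under the current weather condition.
def rank_imagelets(imagelets, weather):
    """imagelets: dicts with a 'surface' label ('vertical', 'roofline', ...)
    and a 'mostly_white' flag. Returns imagelets ordered by preference."""
    def weight(imagelet):
        w = 1.0
        if weather in ("snow", "rain") and imagelet["surface"] != "vertical":
            w -= 0.5          # precipitation and debris alter non-vertical surfaces
        if weather == "snow" and imagelet["mostly_white"]:
            w -= 0.4          # white imagelets are ambiguous against snow
        return w
    return sorted(imagelets, key=weight, reverse=True)


imagelets = [{"surface": "roofline", "mostly_white": False},
             {"surface": "vertical", "mostly_white": False}]
print(rank_imagelets(imagelets, weather="snow")[0])  # vertical surface preferred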
The localization component 1024 may use the image processing logic 1038 to compare the image data 1043 of the current sensor state 493 against the imagelets of the selected point cloud 1035 in order to determine objects and features which can form the basis of geometric and spatial comparison. A perceived geometric and/or spatial differential may be determined between objects and/or object features of the image data 1043 and imagelets of the selected point cloud 1035. The perceived differential may reflect a difference in the location of capture, as between the sensor devices (e.g., on-board cameras of the vehicle 10) used to capture the current image data 1043 and the imagelets of the point cloud 1035, representing the prior sensor information. Similarly, the perceived geometric differential may reflect a difference in a geometric attribute (e.g., height, width, footprint or shape) of an object or feature that is depicted in the current image data 1043, as compared to the depiction of the object or feature with the corresponding imagelet of the point cloud 1035.
The localization component 1024 may include geometric/spatial determination logic 1036 to convert the perceived geometric and/or spatial differential into a real-world distance measurement, reflecting a separation distance between the location of capture of the current image data 1043 and the location of capture of the imagelets of the selected point cloud 1035. As an addition or variation, the geometric/spatial determination logic 1036 may convert the perceived geometric and/or spatial differential into a real-world pose or orientation differential as between the image capturing devices of the current image data 1043 and the sensor devices used to capture the imagelets of the selected point cloud 1035. In some examples, the geometric/spatial determination logic 1036 manipulates the perceived object or feature of the current image data 1043 so that it matches the shape and/or position of the object or feature as depicted in imagelets of the selected point cloud 1035. The manipulation may be used to obtain the values by which the perceived object or feature, as depicted by the current image data 1043, differs from the previously captured image of the object or feature. For example, the perceived object or feature which serves as the point of comparison with imagelets of the selected point cloud 1035 may be warped (e.g., enlarged), so that the warped image of the object or feature depicted by the current image data 1043 can overlay the image of the object or feature as depicted by the imagelets of the selected point cloud 1035.
The perception component 1022 may determine a perception output 1025 using the current sensor state 493 and the prior sensor state 1029. As described with examples, the perception output 1025 may include (i) identification of image data corresponding to static objects detected from current image data, or (ii) identification of image data corresponding to non-static objects detected from current image data. In some examples, the perception output 1025 may also include tracking information 1013, indicating past and/or predicted movement of a non-static object.
In some examples, the prior sensor state 1029 may include a static object feature set 1037, which includes data sets captured previously which are deemed to depict static objects in the area of the road segment. The static objects include permanent objects which are not likely to change location or appearance over a duration of time. By way of example, the static objects may include objects which are deemed to have a height, shape, footprint, and/or visibility that is unchanged over a significant amount of time (e.g., months or years). Thus, for example, the static objects represented by the static object feature set 1037 may include buildings and roadway structures (e.g., fences, overpasses, dividers, etc.).
According to some examples, the perception component 1022 uses the static object feature set 1037 to identify portions of the current image data 1043 which reflect the presence of the static objects. In some examples, the perception component 1022 may utilize the image processing logic 1038 to implement image recognition or detection of the static object feature depicted by the current image data 1043, using the identified static object feature set 1037 provided by the prior sensor state 1029. For example, the static object feature set 1037 may specify semantic information (e.g., object classification, shape) about a static object, as well as a relative location of the static object by pixel location or image area. The perception component 1022 may use the image processing component 1038 to detect and classify objects in relative regions of the scene being analyzed, in order to determine if a semantically described static object is present in the image data corresponding to that portion of the scene. In this way, the perception component 1022 may then use the image processing logic 1038 to detect the static object from the current image data 1043, using the pixel location or image area identified by the prior sensor state 1029, as well as the object shape and/or classification.
Once the static objects are detected from the current image data 1043, the perception component 1022 may then deduce other objects depicted by the current image data 1043 as being non-static or dynamic. Additionally, the perception component 1022 may detect a non-static object as occluding a known static object when the image analysis 1038 determines that the pixel location/image area identified for the static object feature set 1037 does not depict an object or feature of that set. When the perception component 1022 determines static objects from the current sensor state 493, the perception component 1022 may implement object subtraction 1026 so that the presence of the static object is ignored in connection with one or more sensor analysis processes which may be performed by the sensor processing subsystem 1000. For example, the sensor processing subsystem 1000 may subsequently perform event detection to track objects which are non-static or dynamic. When pixel data corresponding to static objects is ignored or removed, subsequent processes such as event detection and tracking may be improved in that such processes quickly focus image processing on non-static objects that are of interest.
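As a non-limiting illustration of object subtraction, the following sketch drops detections whose image regions mostly overlap the regions that the prior sensor state marks as static, leaving only the non-static objects for downstream tracking. The bounding-box format, function name, and overlap threshold are assumptions made for the example.

# Minimal sketch (hypothetical region format): subtracting known static objects
# from the current frame so that downstream tracking only sees remaining objects.
def subtract_static_objects(detections, static_regions, overlap_threshold=0.7):
    """detections: dicts with a 'box' (x0, y0, x1, y1); static_regions: boxes of
    static objects expected from the prior sensor state. Detections that mostly
    overlap a static region are dropped."""
    def overlap_ratio(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area = (a[2] - a[0]) * (a[3] - a[1])
        return inter / area if area else 0.0

    return [d for d in detections
            if all(overlap_ratio(d["box"], r) < overlap_threshold
                   for r in static_regions)]


detections = [{"box": (0, 0, 10, 10)}, {"box": (50, 50, 60, 60)}]
static_regions = [(0, 0, 12, 12)]                     # e.g., a building facade
print(subtract_static_objects(detections, static_regions))  # only the second box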
In some examples, the perception component 1022 may include tracking logic 1014 which operates to track non-static objects, once such objects are identified. For example, non-static objects may be sampled for position information over time (e.g., over a duration of less than a second). To optimize processing, the perception component 1022 may ignore static objects, and focus only on the non-static object(s) during a sampling period. This enables the autonomous vehicle 10 to reduce the amount of computational resources needed to track numerous objects which are encountered routinely when vehicles are operated. Moreover, the vehicle can optimize response time for when a tracked object is a potential collision hazard.
In some examples, the tracking logic 1014 calculates a trajectory of the non-static object. The calculated trajectory can include predicted portions. In some examples, the trajectory can identify, for example, one or more likely paths of the non-static object. Alternatively, the tracking logic 1014 may calculate a worst-case predictive trajectory for a non-static object. For example, the tracking logic 1014 may calculate a linear path as between a current location of a tracked non-static object and a path of the vehicle, in order to determine a time, orientation or velocity of the object for a collision to occur. The tracking logic 1014 may perform the calculations and resample for the position of the non-static object to re-evaluate whether the worst-case scenario may be fulfilled. In the context of the AV control system 400, the perception output 1025 (shown in FIG. 4 as perception 423) may be provided to the motion planning component 424 for use in determining the response trajectory 425.
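For purposes of illustration only, the following sketch computes the worst-case time for a tracked object to reach a conflict point on the vehicle's path, assuming the object heads directly toward that point at its observed speed. The function name and kinematic assumptions are hypothetical.

# Minimal sketch (hypothetical kinematics): a worst-case check assuming the
# tracked object heads straight for a point on the vehicle's path.
import math


def worst_case_time_to_conflict(object_xy, object_speed, conflict_xy):
    """Time for the object to reach a conflict point on the vehicle's path,
    assuming it travels directly toward that point at its observed speed."""
    distance = math.hypot(conflict_xy[0] - object_xy[0],
                          conflict_xy[1] - object_xy[1])
    return float("inf") if object_speed <= 0 else distance / object_speed


# Resampling the object's position lets the tracker re-evaluate this bound.
print(worst_case_time_to_conflict(object_xy=(4.0, 3.0),
                                  object_speed=2.5,
                                  conflict_xy=(0.0, 0.0)))  # 2.0 seconds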
As illustrated by examples of FIG. 10, the image analysis 1038 includes operations which can be performed to determine the localization output 1021 and/or the perception output 1025. The image analysis 1038, when applied to either the localization component 1024 or the perception component 1022, may include rules or logic to optimize or otherwise improve accuracy and/or ease of analysis. In some examples, the image processing 1038 includes warping logic 1063, which includes rules or models to alter or skew dimensions of detected objects. In the context of the perception component 1022, a detected object provided by the current image data 1043 may be enlarged or skewed in order to determine whether the object appears to match any of the static objects 1037 which the prior sensor state 1029 indicates should be present. In variations, the static objects 1037 may be identified by semantic labels, and the image processing component 1038 can first warp a detected object from the current image data 1043, and then classify the detected object to determine if it matches the semantic label provided as the static object 1037. In the context of localization, the warping logic 1063 can warp detected objects in the current image data 1043 and/or the prior sensor state 1029 in order to determine if a match exists as to specific features or sub-features of the detected object.
Additional examples recognize that with respect to passive image sensor data, the image analysis 1038 may be negatively affected by lighting conditions, or by environmental conditions which may impact the appearance of objects. Thus, for example, an outcome of the image analysis 1038 may affect the accuracy or efficiency of the geometric/spatial determination 1036, in that, depending on lighting variations, features depicted by the current image data 1043 may be more or less likely to match with corresponding features depicted by the point cloud of imagelets.
According to some examples, the sensor processing subsystem 1000 may include time and/or place shift transformations 1065 for use in comparing passive image data. The shift transformations 1065 may be applied by, for example, the image processing logic 1038, when image processing is performed in the context of either the localization component 1024 or the perception component 1022. Each transformation 1065 can represent a visual alteration to at least a portion of the current image data 1043 and/or prior image data. In some examples, the transformations can be quantitative variations that are applied globally to image data from a particular scene, or alternatively, to a portion of image data captured from a scene. The individual transformations can alter the appearance of passive image data sets (either current or prior sensor sets) with respect to attributes such as hue, brightness and/or contrast. The image processing 1038 may apply the transformations selectively, when, for example, a disparity exists between current and past image sets with respect to hue, brightness, or contrast. Examples recognize that such disparity may be the result of, for example, variation in time of day (e.g., the vehicle currently driving on a road segment at night when sensor information was previously captured during daytime hours), or change in season (e.g., the vehicle currently driving on a road segment during winter while sensor information was previously captured during summer). In the case of passive image data, the disparity in hue, brightness or contrast can impact the ability of the image processing component 1038 to accurately perform recognition, thus, for example, hindering the ability of the vehicle to perform localization or perception operations.
According to some examples, the image processing component 1038 may selectively use the shift transformations 1065 to better match the current and past image data sets for purpose of comparison and recognition. For example, the image processing component 1038 may detect a disparity in lighting condition between the current image data 1043 and the image data provided by the prior sensor state 1029, independent of image recognition and/or analysis processes. When such a disparity is detected, the sensor processing subsystem 1000 may select a transformation, which can be applied similar to a filter, to accurately alter the current image data 1043 in a manner that best approximates visual attributes of the image data contained with the prior sensor state 1029.
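By way of a non-limiting illustration, the following sketch detects a brightness disparity between current and prior image data and applies a simple gain as a stand-in for a shift transformation. The single-channel brightness statistic, threshold, and function name are assumptions made for the example and are far simpler than the transformations contemplated above.

# Minimal sketch (hypothetical statistics): detect a brightness disparity between
# current and prior image data and apply a simple gain as the shift transformation.
def apply_shift_transformation(current_pixels, prior_mean_brightness,
                               disparity_threshold=15.0):
    """current_pixels: flat list of grayscale values (0..255). If mean brightness
    differs from the prior capture by more than the threshold, scale the current
    pixels toward the prior brightness before comparison."""
    current_mean = sum(current_pixels) / len(current_pixels)
    if abs(current_mean - prior_mean_brightness) <= disparity_threshold:
        return current_pixels                        # no transformation needed
    gain = prior_mean_brightness / max(current_mean, 1e-6)
    return [min(255.0, p * gain) for p in current_pixels]


night_frame = [40, 42, 38, 45]                       # darker than the daytime prior
print(apply_shift_transformation(night_frame, prior_mean_brightness=120.0))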
Examples also recognize that in a given geographic region, some roads or portions of the road network will have fewer sensor data sets as a result of being less traveled than other roads which may carry more traffic. Moreover, roads and road segments may provide substantial variation as to lighting parameters, given environmental factors such as the presence of trees, buildings and street lights. To account for such variations, the shift transformations 1065 can transform the current image data 1043 based on a categorization scheme, such as categories for tree coverage, building coverage, and poor street lighting. In some implementations, the transformations 1065 may be selected for segments of roads based on road type (e.g., heavy trees, buildings, absence of street lights). In variations, the transformations 1065 may be based on prior sensor state data 1029 of adjacent or nearby road segments, captured under environmental/lighting conditions that sufficiently match a current condition of the vehicle 10 traveling along a less traveled road segment.
FIG. 11 illustrates an example of a vehicle on which an example of FIG. 10 is implemented. A vehicle 10 may operate autonomously, meaning the vehicle can drive on a route and navigate without human control or input. The vehicle 10 can implement, for example, the AV control system 400, in order to autonomously navigate on a road network of a given geographic region. In an example of FIG. 11, the vehicle 10 is shown to traverse a road segment 1102, using a given driving lane 1103, and furthermore in the presence of dynamic objects such as other vehicles 1105 and people.
In an example of FIG. 11, the vehicle 10 implements the sensor processing subsystem 1000 as part of the AV control system 400 in order to determine the localization output 121 and the perception output 129. Thus, the sensor processing subsystem 1000 can be implemented as part of SIPS 100, and further as part of the AV control system 400 of the autonomous vehicle 10. The vehicle 10 may include sensor devices, such as a camera set 1112, shown as a rooftop camera mount, to capture images of a surrounding scene while the vehicle travels down the road segment 1102. The camera set 1112 may include, for example, stereoscopic camera pairs to capture depth images of the road segment. In the example shown, the sensor processing subsystem 1000 may be implemented using processors that are located in, for example, a trunk of the vehicle 10.
In an example, the sensor processing subsystem 1000 can select a point cloud of imagelets 1122 which represent the prior sensor state captured for the road segment 1102. The imagelets 1122 may include raw image data (e.g., pixel images), processed image data (e.g., feature vectors of select objects), semantic labels, markers and image data of a particular region of the scene about a prior location of capture 1115, where the prior location of capture 1115 has a precisely known location. The sensor processing subsystem 1000 can take current sensor state data, including image data captured by the camera set 1112, and fan the current image data about two or three dimensions. In this way, the current image data can be structured or otherwise identified as image segments 1118, which can be compared to the prior state imagelets 1122 of the selected point cloud. The comparison can calculate the difference between the current location of capture 1125 for the current image segments 1118 and the prior location of capture 1115 for the prior imagelets 1122. From the comparison, the vehicle 10 can determine the localization output 121, 1021, including a localization coordinate 1017 and pose 1019, each of which may be made in reference to the prior location of capture 1115.
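The comparison of current image segments 1118 against prior imagelets 1122 could be sketched as follows (Python with numpy); the fan-out into vertical strips and the crude appearance descriptor are simplifying assumptions used for illustration, not the claimed matching method.

    import numpy as np

    def fan_image(image, num_segments):
        """Split a wide (e.g., stitched or panoramic) grayscale image into
        angular segments about the current location of capture, left to right."""
        return np.array_split(np.asarray(image, dtype=float), num_segments, axis=1)

    def descriptor(patch):
        """A crude appearance descriptor: mean intensity, contrast, and mean
        horizontal/vertical gradient magnitude of a grayscale patch."""
        gy, gx = np.gradient(patch)
        return np.array([patch.mean(), patch.std(),
                         np.abs(gx).mean(), np.abs(gy).mean()])

    def best_matching_imagelet(segment, imagelets):
        """Return the index of the prior imagelet whose descriptor is closest
        to that of the given current image segment."""
        d = descriptor(segment)
        distances = [np.linalg.norm(d - descriptor(np.asarray(im, dtype=float)))
                     for im in imagelets]
        return int(np.argmin(distances))

Which segment pairs with which imagelet, and how the matched pair differs geometrically, then provides a basis for computing the offset between the current location of capture 1125 and the prior location of capture 1115.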
FIG. 12 illustrates an example method for determining a location of a vehicle in motion using vehicle sensor data, according to an embodiment. FIG. 13 illustrates a method for determining a location of a vehicle in motion using image data captured by the vehicle, according to an embodiment. FIG. 14 illustrates a method for determining a location of a vehicle in motion using an image point cloud and image data captured by the vehicle, according to an embodiment. FIG. 15 illustrates an example method in which the perception output is used by a vehicle to process a scene. Example methods such as described with FIG. 12 through FIG. 15 may be implemented using components and systems such as described with other examples. In particular, examples of FIG. 12 through FIG. 15 are described in context of being implemented by the AV control system 400, which may include or implement the sensor processing subsystem 1000. Accordingly, reference may be made to elements described with other figures for purpose of illustrating suitable components for performing a step or sub-step being described.
With reference to an example of FIG. 12, a collection of submaps may be accessed by the AV control system 400 of the vehicle in motion (1210). The collection of submaps may be locally accessed and/or retrieved over a network from a remote source (e.g., network service 200). Each submap of the collection may include or be associated with prior sensor state data 1029 corresponding to sensor data and/or sensor-based determinations of static features for a given road segment. The features may include static objects that may be visible to sensors of a vehicle (e.g., landmarks, structures in view of a vehicle on the roadway), roadway features, signage, and/or traffic lights and signs. The features may be stored as data sets that are associated or provided with a data structure of a corresponding submap. Each data set may, for example, include a sensor-based signature or feature vector representation of a portion of a scene, as viewed by a specific type of sensor set (e.g., stereoscopic camera, Lidar, radar, sonar, etc.). Furthermore, the stored sensor data sets may be associated with a reference location of the submap, such as the location of the vehicle when the prior sensor data sets were captured. One or multiple types of sensor data sets may be provided with the prior sensor state data 1029 of the submap. For example, the prior sensor state 1029 of a submap may include two-dimensional image data, stereoscopic image pair data, Lidar, depth image, radar and/or sonar, as captured from a particular location of a road network.
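One possible in-memory arrangement of such a submap, with prior sensor data sets keyed by sensor modality and tied to a reference location, is sketched below; the field names and types are assumptions for illustration and not a prescribed schema.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class PriorSensorDataSet:
        sensor_type: str                        # e.g., "stereo_camera", "lidar", "radar", "sonar"
        feature_vectors: List[List[float]]      # sensor-based signatures of static features
        capture_location: Tuple[float, float]   # vehicle location at capture, in the submap frame

    @dataclass
    class Submap:
        submap_id: str
        reference_location: Tuple[float, float]  # reference point of the road segment
        prior_sensor_state: Dict[str, List[PriorSensorDataSet]] = field(default_factory=dict)

        def data_sets_for(self, sensor_type: str) -> List[PriorSensorDataSet]:
            """Return the prior sensor data sets recorded for one sensor modality."""
            return self.prior_sensor_state.get(sensor_type, [])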
In some examples, the feature sets associated with the collection of submaps are developed over time, using the sensor components (e.g., camera set 1112) of the same vehicle 1110 in prior passes of the road segment. In variations, the vehicle 1110 is part of a larger number of vehicles, each of which records sensor data and/or feature sets of the same sensor type(s). As described with an example of FIG. 2, the submap network service 200 may collect and process the sensor data from individual vehicles of a fleet, and then share the submaps with updated feature sets with other vehicles of the fleet.
At an initial time, the AV control system 400 may determine which submap in a collection of submaps is for the particular road segment on which the vehicle 1110 is operated (1220). The determination may be made when, for example, the vehicle is started, switched into autonomous mode, or when the vehicle resets or re-determines its position for a particular reason. In some examples, the AV control system 400 may approximate the current location of the vehicle 1110 using a satellite navigation component and/or historical information (e.g., information from a prior trip).
The AV control system 400 performs localization by determining a location of the vehicle within the determined submap. In particular, the localization may be performed by comparing current sensor data (e.g., current image data 1043) to a previously determined sensor representation (e.g., sensor state 1029) of the region surrounding the current road segment, where each of the current sensor data and the prior sensor data is associated with a particular location of capture (1230). In an implementation, the selected submap may include or otherwise identify the prior sensor information, as well as provide a prior location of capture within the submap, and the AV control system 400 may compare current and past sensor information in order to determine an accurate and highly granular (e.g., within 1 foot) location of the vehicle within the submap. In some examples, the determined location may be relative to a boundary or reference location of the submap, and further may be based on a known location of capture for the prior sensor information.
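A simplified sketch of such a comparison is shown below as a two-dimensional grid search that slides a rasterized view of the current sensor data over the prior one; the grid representation, search radius and cell size are assumptions chosen for brevity and stand in for whatever matching scheme an actual implementation uses.

    import numpy as np

    def localize_in_submap(current_grid, prior_grid, prior_capture_xy,
                           cell_size_m=0.25, search_cells=8):
        """Estimate the vehicle's position relative to the submap by finding the
        integer (dx, dy) shift of the current grid that best matches the prior
        grid, then offsetting the prior location of capture accordingly.
        Both grids are equally sized 2-D arrays rasterized in the submap frame."""
        h, w = prior_grid.shape
        best_err, best_shift = np.inf, (0, 0)
        for dy in range(-search_cells, search_cells + 1):
            for dx in range(-search_cells, search_cells + 1):
                cur = current_grid[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
                pri = prior_grid[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
                err = float(np.mean((cur - pri) ** 2))
                if err < best_err:
                    best_err, best_shift = err, (dx, dy)
        return (prior_capture_xy[0] + best_shift[0] * cell_size_m,
                prior_capture_xy[1] + best_shift[1] * cell_size_m)

With the illustrative 0.25 m cells and an 8-cell search radius, the refinement covers roughly two meters around the coarse satellite-based estimate, consistent with narrowing the location to sub-foot granularity.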
In some examples, the submap carries additional determinations pertaining to the prior sensor state 1029, such as the distance of the vehicle from the sidewalk. The additional determinations may provide further context and mapping with respect to understanding the current location of the vehicle in the submap. For example, the determined location may be highly granular, specifying, for example, the lane the vehicle 1110 occupies, and a distance of the vehicle from, for example, an edge of the roadway. Still further, in some examples, the location of the vehicle 10 may be specific to, for example, a width of a tire.
According to some examples, the AV control system 400 compares features of the scene surrounding the vehicle 1110, as provided by the current sensor data, to features of the prior sensor state 1029, in order to determine a spatial or geometric differential between the feature as depicted by the current sensor data and the same feature as depicted by the prior sensor state (1232). For example, the image processing component 1038 may recognize a given feature from the current image data (e.g., Lidar image, sonar image, depth image, etc.), but the given feature as recognized from the current image data may vary in dimension (e.g., shape, footprint), spacing (e.g., relative to another object), and/or orientation as compared to the depiction of the feature within the prior sensor state data 1029 of the submap. The identified differential between the respective depictions may be correlative to a spatial difference between the current location of capture for the vehicle 1110 and the prior location of capture associated with the sensor information of the submap. The determined spatial difference may identify a precise location of the vehicle within the area of the submap (1234).
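By way of illustration only, the sketch below converts the differential for a single matched feature into a displacement between the two locations of capture, assuming a pinhole camera model (apparent width inversely proportional to range) and bearings expressed in a common frame; a practical system would fuse many features and sensor modalities rather than rely on one.

    import math

    def spatial_offset_from_feature(prior_px_width, cur_px_width, prior_range_m,
                                    prior_bearing_rad, cur_bearing_rad):
        """Estimate the displacement between the current and the prior location of
        capture from a single matched static feature. Assumptions (illustrative):
        a pinhole model, so apparent width scales inversely with range, and both
        bearings to the feature are expressed in a common (world) frame."""
        cur_range_m = prior_range_m * (prior_px_width / float(cur_px_width))
        # Each capture location sits `range` meters behind the feature along the
        # measured bearing; place the feature at the origin of a local frame.
        prior_xy = (-prior_range_m * math.cos(prior_bearing_rad),
                    -prior_range_m * math.sin(prior_bearing_rad))
        cur_xy = (-cur_range_m * math.cos(cur_bearing_rad),
                  -cur_range_m * math.sin(cur_bearing_rad))
        return (cur_xy[0] - prior_xy[0], cur_xy[1] - prior_xy[1])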
As described with examples of FIG. 13 and FIG. 14, the feature set of the current submap may be implemented in the form of a point cloud structure of sensor data sets, where individual features of the current submap are associated with a precise location. The current sensor data set may be analyzed to determine sensor data sets which match to point cloud elements of the point cloud. Based on the comparison, the location of the vehicle may be determined in reference to the precise location of the point cloud elements which form the basis of the comparison.
In FIG. 13, the AV control system 400 employs passive image sensors in connection with prior sensor state information that is structured in the form of a point cloud. A set of current sensor data may be captured by passive image sensor devices of the vehicle 10, for instance when the vehicle traverses a given area of the road network (1310). The passive image sensor devices may correspond to, for example, one or more pairs of stereoscopic cameras of an autonomous vehicle 10.
The AV control system 400 may match a subset of the passive image data to one or more features of a known feature set for an area of the road network (1320). The known feature sets may be in the form of image-based sensor data sets, such as feature vectors or image signatures of one or more static objects which are known to be visible in the area of the vehicle's location. The known features, depicted with the image-based sensor data sets, may be associated with a precise location. While some examples such as described with an example of FIG. 11 utilize submap data structures to carry feature sets which provide a basis for comparison to current sensor state information, in variations, other data structure environments may be used by the vehicle to maintain and use features for comparing passive image data to corresponding image reference data. For example, the known features may be associated with a precise distance and orientation with respect to a roadway landmark (e.g., the end of an intersection). In examples in which submaps are used, the known features may be associated with a precise location within the submap.
The AV control system 400 may determine the location of the vehicle within the given area based on the comparison of the current sensor state 493, which may be in the form of passive image data, and the known features, which are structured in a point cloud and associated with a known reference location (1330). In one implementation, aspects of features detected from the current sensor state 493 are compared to corresponding aspects of the matched and known feature set. One or more variations are determined with respect to dimension and pose, as between the aspects of the features provided in the current sensor data and corresponding aspects of the matched feature set (1332). The variations may be converted into position and pose variation with respect to the reference location of the reference images.
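A coarse sketch of converting such variations into a position and pose estimate is shown below as a grid-search resection over candidate offsets and headings; the brute-force search and the bearing-only error model are simplifying assumptions for illustration, not the claimed conversion.

    import numpy as np

    def resect_pose(feature_xy, measured_bearings, ref_xy,
                    search_offsets_m, search_yaws_rad):
        """Find the vehicle position and heading, near the known reference
        location, that best explain the camera-frame bearings measured to
        features whose positions are known from the point cloud.
        feature_xy: (N, 2) feature positions; measured_bearings: (N,) radians."""
        feature_xy = np.asarray(feature_xy, dtype=float)
        bearings = np.asarray(measured_bearings, dtype=float)
        best_err, best_pose = np.inf, None
        for ox in search_offsets_m:
            for oy in search_offsets_m:
                px, py = ref_xy[0] + ox, ref_xy[1] + oy
                world = np.arctan2(feature_xy[:, 1] - py, feature_xy[:, 0] - px)
                for yaw in search_yaws_rad:
                    resid = world - yaw - bearings
                    resid = np.arctan2(np.sin(resid), np.cos(resid))  # wrap to [-pi, pi]
                    err = float(np.mean(resid ** 2))
                    if err < best_err:
                        best_err, best_pose = err, (px, py, yaw)
        return best_pose

For example, search_offsets_m could be a small grid such as np.linspace(-2.0, 2.0, 21) meters about the approximate location, and search_yaws_rad a few degrees of yaw about the approximate heading.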
With regard to FIG. 14, the AV control system 400 may access multiple sets of reference imagelets (e.g., a plurality of point clouds) which depict known features of the area of the road network for the vehicle's current location (1410). Each of the reference imagelet sets may depict a feature that is associated with a reference location, identifying, for example, a location of a camera where the set of imagelets was previously captured.
While the vehicle traverses a road segment, the AV control system 400 obtains current image data from one or more camera devices of the vehicle (1420). As the vehicle traverses the road network, the AV control system 400 may also associate an approximate location with the current image data. The approximate location may be determined by, for example, a satellite navigation component and/or historical information which tracks or records the location of the vehicle.
For the given location, the AV control system 400 selects one of the multiple sets of reference imagelets, based at least in part on the approximate location of the vehicle (1430). As an addition or variation, the vehicle may have alternative point cloud representations to select from for a given reference location, with the alternative point cloud representations representing alternative lighting conditions (e.g., seasonal, from weather, time of day, etc.).
Additionally, the AV control system 400 makes a determination that an object depicted by the current image data matches to a feature depicted by one of the imagelets of the matching set (1440). For example, individual imagelets of the selected reference imagelet set may be compared to portions of the current image data in order to determine the presence of a matching object or object feature. In some examples, the AV control system 400 may utilize rules, models or other logic to optimize the use of point cloud imagelets for purpose of determining location. In one aspect, a set of selection rules may be utilized to identify imagelets of the known feature set to either use or ignore when performing the comparison to the current sensor state. The selection rules may be based in part on context, such as time of day, weather condition, and/or lighting conditions. For example, the selection rules may disregard imagelets that depict vertical surfaces when there is snow.
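One possible form of such selection rules is sketched below; the metadata tags and the specific rules (e.g., ignoring vertical surfaces in snow, or unlit features at night) are illustrative assumptions rather than a prescribed rule set.

    def select_imagelets(imagelets, context):
        """Filter reference imagelets before matching, using simple context rules.
        Each imagelet is assumed to be a dict carrying a set of metadata tags."""
        selected = []
        for imagelet in imagelets:
            tags = imagelet.get("tags", set())
            if context.get("snow") and "vertical_surface" in tags:
                continue                      # snow obscures vertical surfaces
            if context.get("time_of_day") == "night" and "unlit" in tags:
                continue                      # unlit features are unreliable at night
            selected.append(imagelet)
        return selected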
In making the determination, the AV control system 400 applies a transformation to one of the current image data or the reference imagelets of the selected set (1442). The AV control system 400 may apply the transformation based on a determination that a variation of a lighting condition is present as between the area of the road network when the reference set of imagelets was captured and when the current image data is captured (1444). The condition may be of a type which affects an appearance of objects and/or the surrounding area. In some examples, the condition may be one that affects a lighting of the area surrounding the road network, such as the time of day (e.g., variation in an image as captured during daytime, dusk or evening) or weather (e.g., cloudless sky versus heavy inclement weather).
According to some examples, the transformation may be determined from criteria or a model that is trained using sensor data previously collected from the current vehicle and/or one or more other vehicles at the approximate location (1446). In variations, the transformation may be determined from a model that is trained using sensor data previously collected from the current vehicle and/or one or more other vehicles at a neighboring location (1448). The neighboring location may be, for example, on the same road (e.g., on an alternative region of the road where more sensor data exists), on an adjacent road (e.g., on a neighboring road where the condition is deemed to have a similar effect), or on a same type of road (e.g., a road darkened by trees).
The AV control system 400 determines a highly granular location of the vehicle 10 based at least in part on the reference location associated with the feature depicted by the current image data (1450). In some examples, a dimensional or geometric aspect of the object is compared to a corresponding dimension or geometric aspect of the object depicted by the reference imagelets in order to determine one or more visual differentials (1452). In variations, the object depicted by the current image data is altered or warped to provide the differential in dimension, orientation or other geometric characteristic until the object depicted by the current image data matches that of the reference image (1454). The differential(s) may be mapped or translated into a difference in distance and orientation with respect to the reference location of the reference imagelet where the corresponding feature is depicted.
With reference to FIG. 15, a vehicle is autonomously operated to travel across a road segment using, for example, the AV control system 400. The vehicle 1110 may obtain current sensor data as the vehicle traverses the road segment (1510). The current sensor data may include image data, such as captured by two-dimensional cameras, or by pairs of stereoscopic cameras that capture three-dimensional images of a corresponding scene. In variations, the sensor data may include Lidar or radar images.
In traversing the road segment, the vehicle 1110 may access stored sensor data which identifies a set of static objects (1520). The set of static objects is identified by the vehicle based on the vehicle location. In some implementations, the vehicle may have a granular awareness of its own location, using, for example, a satellite navigation component, or a general determination made from historical data or from a particular submap in use.
In variations, the stored sensor data that identifies the static objects may reside with a submap that identifies the precise location of each static object relative to the submap. For example, the vehicle may utilize the stored submaps to concurrently perform localization so that the vehicle's current location within the road segment is known. Additionally, the submap may identify the location of static objects in relation to a reference frame of the submap. In such examples, the vehicle may more readily identify stored data that depicts those static objects which are most likely present and depicted by the current sensor state 493 of the vehicle 1110.
The vehicle 1110 may determine one or more non-static (or dynamic) objects as being present in a vicinity of the vehicle based on the current sensor data and the stored sensor data (1530).
In determining one or more non-static objects as being present, the AV control system 400 may reduce a quantity of sensor data that is processed based on portions of the stored sensor data that are deemed to depict one or more of the static objects (1532). For example, the AV control system 400 may subtract portions of the current image data which are deemed to depict any one or more of the set of static objects. In determining portions of the current sensor data which depict static objects, the AV control system 400 may implement image analyses in order to recognize or detect the static objects of the stored sensor data which are likely depicted by the current sensor data.
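A minimal sketch of this reduction is shown below, assuming the stored sensor data has already been rendered into a boolean mask over the current image frame; the pixel-difference test and threshold are illustrative assumptions.

    import numpy as np

    def non_static_candidates(current_image, prior_image, static_mask, diff_threshold=30.0):
        """Flag pixels that likely belong to non-static objects: ignore regions the
        stored sensor data marks as static, and keep the remainder only where the
        current image differs strongly from the prior appearance of the scene.
        All inputs share the same shape; static_mask is True over static objects."""
        diff = np.abs(np.asarray(current_image, dtype=float) -
                      np.asarray(prior_image, dtype=float))
        candidates = diff > diff_threshold
        candidates[static_mask] = False        # subtract the static-object regions
        return candidates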
According to some examples, once the AV control system 400 determines that a non-static object is present in a vicinity of the vehicle, the AV control system 400 tracks the non-static object as the vehicle progresses on the road segment (1540). Depending on implementation, the vehicle may track the non-static object using any one or combination of sensors, including cameras, Lidar, radar or sonar.
In some examples, the AV control system 400 may track the non-static object without tracking any of the determined static objects (1542). For example, the AV control system 400 may identify portions of an overall pixel map which are likely to depict static objects. The AV control system 400 may then ignore portions of the current image data which map to the identified portions of the pixel map which depict static objects.
The AV control system 400 may track the non-static object by determining a trajectory of the object (1544). The trajectory determination can include sampling the position of the non-static object over a short duration of time, while ignoring the static objects. The trajectory determination may also include a predicted trajectory of the object (1546), based on probability or a worst-case scenario.
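As a simplified illustration, the sketch below fits a constant-velocity model to recent position samples of a tracked non-static object and extrapolates it over a short horizon; the constant-velocity assumption and the added margin stand in for the probabilistic or worst-case reasoning described above.

    import numpy as np

    def predict_trajectory(samples_xy, sample_dt_s, horizon_s=2.0, step_s=0.25,
                           worst_case_margin_mps=1.0):
        """Extrapolate a tracked object's motion. samples_xy is an (N, 2) array of
        (x, y) positions taken every sample_dt_s seconds (N >= 2). Returns the
        predicted positions and a per-step radius reflecting a worst-case margin."""
        samples_xy = np.asarray(samples_xy, dtype=float)
        t = np.arange(len(samples_xy)) * sample_dt_s
        vx = np.polyfit(t, samples_xy[:, 0], 1)[0]   # least-squares velocity estimate
        vy = np.polyfit(t, samples_xy[:, 1], 1)[0]
        horizon = np.arange(step_s, horizon_s + step_s, step_s)
        predicted = np.stack([samples_xy[-1, 0] + vx * horizon,
                              samples_xy[-1, 1] + vy * horizon], axis=1)
        uncertainty_radius = worst_case_margin_mps * horizon
        return predicted, uncertainty_radius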
It is contemplated for embodiments described herein to extend to individual elements and concepts described herein, independently of other concepts, ideas or systems, as well as for embodiments to include combinations of elements recited anywhere in this application. Although embodiments are described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments. As such, many modifications and variations will be apparent to practitioners skilled in this art. Accordingly, it is intended that the scope of the invention be defined by the following claims and their equivalents. Furthermore, it is contemplated that a particular feature described either individually or as part of an embodiment can be combined with other individually described features, or parts of other embodiments, even if the other features and embodiments make no mention of the particular feature. Thus, the absence of describing combinations should not preclude the inventor from claiming rights to such combinations.