CLAIM OF PRIORITY- This application claims the benefit of priority under 35 U.S.C. § 119(e) to U.S. Provisional Pat. Application Serial No. 63/333,682, filed on Apr. 22, 2022, which is incorporated by reference herein in its entirety. 
BACKGROUND- Road biking in the presence of cars can pose significant dangers to cyclists. The high speed and weight of cars create a serious risk of injury or death in the event of an accident. According to the National Highway Traffic Safety Administration (NHTSA), in 2020, 846 bicyclists were killed in traffic crashes in the United States, with cars being a major contributor. Additionally, a total of 48,000 bicyclists were injured in motor vehicle crashes in the same year. 
- Road biking in the presence of cars can be an extremely hazardous activity, with numerous dangers and risks to cyclists. Cars are significantly larger, faster, and more powerful than bicycles, making them a major threat to the safety of cyclists. When cars and bicycles share the road, there is an elevated risk of accidents, collisions, and injuries. 
- While there are existing solutions to improve road bike safety, such as lights, reflectors, helmets, and high-visibility clothing, these solutions have technical limitations that may reduce their effectiveness. For example, while lights and reflectors can improve visibility, they may not provide proactive hazard avoidance features such as collision detection. Additionally, helmets may reduce the severity of injuries in the event of an accident, but cannot prevent accidents from occurring in the first place. As a result, there is a pressing need for more effective and advanced safety solutions that can better mitigate the risks associated with road biking in traffic conditions. 
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS- Certain figures are included in the text below and others are attached. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced. 
- FIG.1 is a diagrammatic representation of a traffic environment, according to some examples, within which a mobility safety system may be deployed, as part of a broader traffic safety system. 
- FIG.2 is a diagrammatic representation of a mobility safety system, according to some examples. 
- FIG.3 is a side-perspective view of the mobility safety system, showing the enclosure having mounted and secured therein an audio generation device in the form of a speaker and a light generation device in the form of a Light Emitting Diode. 
- FIG.4 is a compute graph representation of a software architecture of a mobility safety application, according to some examples, which executes on the mobility safety system. 
- FIG.5 illustrates how a localization and mapping component uses raw sensor data to continuously refine estimates of the ego-motion of the user, as well as to use and refine a map of the traffic environment, according to some examples. 
- FIG.6 is a diagrammatic representation of operations and components of the perception system, according to some examples. 
- FIG.7 is a diagrammatic representation of operations and components of the prediction component, according to some examples. 
- FIG.8 is a diagrammatic representation of operations and components of the risk estimation component, according to some examples. 
- FIG.9 is a flowchart illustrating a method, according to some examples, of operating a mobility safety system to provide alerts to actors within a traffic environment. 
- FIG.10 is a flowchart illustrating the operations that may be performed by the mobility safety system in order to generate a risk estimate, according to some examples. 
- FIG.11 illustrates an aspect of the subject matter, according to some examples. 
- FIG.12 is a block diagram showing a software architecture within which the present disclosure may be implemented, according to some examples. 
- FIG.13 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, in accordance with some examples. 
- FIG.14 is a flowchart depicting a machine-learning pipeline, according to some examples, which may be used to train one or more machine learning models used in or with mobility safety systems. 
- FIG.15 illustrates training and use of a machine-learning program, according to some examples. 
DETAILED DESCRIPTION- Examples of artificially intelligent mobility safety systems are described, which use artificial intelligence to enhance the safety of bicyclists and other road users. 
- In some examples, the safety system comprises a smart bike light that incorporates artificial intelligence (AI) algorithms. Example systems seek to prevent traffic incidents by providing early warning signals and alerts to users, based on a combination of hardware and sensors, AI algorithms, and alert mechanisms. 
- Hardware components include several sensors and communication modules, such as light sensors and proximity sensors, which are capable of detecting and analyzing data from the surrounding environment. The data collected by the sensors is analyzed by an advanced AI algorithm, which incorporates machine learning techniques to predict potential hazards and alert users accordingly. 
- Example systems also include an alert mechanism (or mechanisms) that provides users with warnings in the event of potential hazards. The alert mechanism may comprise a visual or audible alert, or a combination of both, depending on the situation. Additionally, the system is designed to provide users with an intuitive interface that enables them to quickly and easily access critical information about their surroundings. 
- In operation, example safety systems are capable of identifying potential hazards and providing users with advance warning signals to prevent accidents and injuries. The system’s AI algorithms enable it to adapt to changing traffic conditions and provide real-time alerts to users. 
- In summary, the described examples provide an innovative and technologically advanced mobility safety system that utilizes artificial intelligence to enhance the safety of bicyclists and other road users. The system’s unique combination of hardware and software components, along with an intuitive user interface, provides a powerful and effective tool for preventing accidents and improving road safety. 
- FIG.1 is a diagrammatic representation of a traffic environment 102, according to some examples, within which a mobility safety system 104 may be deployed, as part of a broader traffic safety system. The traffic environment 102 includes a number of objects and features, the objects including stationary objects (e.g., traffic lights, lamp posts, curbs, speed bumps, etc.) and mobile objects, which include actors within the traffic environment 102. 
- As examples of stationary objects and features, the traffic environment 102 is shown to include infrastructure including intersecting roads 106. A collection of traffic lights 108 operates to control traffic flow at the intersection between the roads 106. The traffic environment 102 also includes a bike lane 110 that runs parallel to one of the roads 106. 
- As examples of actors within the traffic environment 102, a number of vehicles are shown to be traversing the roads 106 and bike lanes 110 within the traffic environment 102, these vehicles including cars 112, buses, trucks, etc., in addition to a number of mobility platforms (e.g., micromobility platforms or devices, such as bicycles 114, scooters 116, motorbikes 118, skateboards, etc.). Other examples of actors may include pedestrians and animals. 
- With specific regard to the bicycle 114, this mobility platform has an associated mobility safety system 104, which may be attached to the bicycle 114 itself, a bicycle accessory (e.g., a helmet worn by the cyclist), or to the body of the cyclist. Similar mobility safety systems 104 may be associated with the other illustrated mobility platforms, as well as the other vehicles. 
- In some examples, the mobility safety systems 104 within the traffic environment 102 may be communicatively coupled using a vehicle-to-vehicle (V2V) communication protocol. V2V communication enables mobility safety systems 104 to exchange data and information with other vehicles and mobility platforms in the traffic environment 102. 
- For example, the mobility safety systems 104 may use wireless communication protocols, such as Wi-Fi, Bluetooth, or ZigBee, to exchange data and information with other mobility safety systems 104 and vehicles in the traffic environment 102. The V2V communication can also use Dedicated Short Range Communication (DSRC) technology, which is a wireless communication standard designed specifically for V2V communication in intelligent transportation systems. DSRC uses a 5.9 GHz frequency band to exchange safety-critical information between vehicles and other devices in the traffic environment. 
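- By way of a non-limiting illustration, the following Python sketch (not part of the disclosed protocol stack) shows how a periodic V2V-style safety message might be broadcast over a generic wireless link; the message fields, the UDP transport, and the port number are assumptions chosen for illustration, and a deployed system would instead use standardized DSRC or C-V2X message sets.

```python
# Illustrative sketch only: periodic broadcast of a minimal safety message.
# The field names, UDP transport, and port are assumptions for illustration.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 47474)  # hypothetical port

def make_safety_message(device_id, lat, lon, speed_mps, heading_deg):
    return {
        "id": device_id,
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    }

def broadcast_loop(device_id, get_state, period_s=0.1):
    """Send the current ego state to nearby devices every period_s seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        lat, lon, speed, heading = get_state()
        msg = make_safety_message(device_id, lat, lon, speed, heading)
        sock.sendto(json.dumps(msg).encode("utf-8"), BROADCAST_ADDR)
        time.sleep(period_s)
```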
- In addition to V2V communication, the traffic lights 108 in the traffic environment 102 may also have components, including various sensors such as cameras, LiDAR, radar, and ultrasonic sensors, to collect real-time traffic information and communicate this information to the mobility safety systems 104. For example, the traffic lights 108 may use cameras to capture images of the traffic at the intersection, and the mobility safety systems 104 may use this information to generate alerts and take other actions, as will be described in further detail below. 
1. Device Description- FIG.2 is a diagrammatic representation of a mobility safety system 104, according to some examples. 
- The mobility safety system 104 is a fully self-contained device including: 
- Sensors 202: exteroceptive (e.g., a camera 204) and interoceptive (e.g., IMU 206, GPS 208).
- Compute modules: an embedded computer 210 (e.g., onboard processors and memory) as well as an Artificial Intelligence (AI) accelerator chip 212.
- Algorithms: localization, perception, prediction, and risk estimation, coupled with powerful machine-learned (ML) models, stored and executed by the embedded computer 210.
- Alert mechanisms 214: both auditory and visual means of alerting users.
- Battery and battery monitoring systems 216: capable of powering the device through typical usage.
- The mobility safety system 104 uses the sensors 202 and algorithms to provide temporally relevant warnings to traffic participants. The example mobility safety system 104 is described as a safety device mounted directly on a bicycle 114, rather than on a full-size vehicle. The mobility safety system 104 is readily attached to bicycles, scooters, and other mobility platforms and accessories (such as on a user’s helmet or worn over clothing). While example mobility safety systems 104 are described with reference to a bicycle use case, it will be appreciated that they have broad applicability toward all forms of mobility and personal transport. 
- The mobility safety system 104 has a number of modes. In a first mode, the mobility safety system 104 operates as a bike light, providing extra daytime and nighttime illumination for a bicyclist on the street. In further modes of operation, however, the mobility safety system 104 uses sensors 202 and algorithms to analyze its surrounding environment, identify potential hazards, and trigger customized warnings based upon specific conditions. 
2. Hardware Systems- Example hardware of the mobility safety system 104 includes multiple subsystems allowing self-supported operation, as a single device mounted on a bicycle. 
- The example mobility safety system 104 consists of numerous sensors 202 and compute modules 218, powered by an onboard battery and battery monitoring system 216 and protected by a physical enclosure 220, and is able to drive various alert mechanisms 214 (including audible audio alerts 222 and visual alerts). In addition to audible and visual alerts, the mobility safety system 104 may also provide haptic feedback, such as vibrations, to alert users of potential hazards. 
2.1 Physical Enclosure- The physical enclosure 220 of the mobility safety system 104 protects the sensors 202, compute modules 218, and battery and battery monitoring system 216 from environmental factors such as dust, debris, and moisture. The enclosure 220 may be made from lightweight and durable materials such as aluminum, polycarbonate, or ABS plastic, which are able to withstand the rigors of daily use. 
- The enclosure 220 is designed to have multiple mounting options, allowing users to customize the location of the device on their mobility platform, such as the bicycle 114. While the nominal mounting locations are the seat post or handlebars of a bicycle 114, other mounting options can be used as well, depending on the user’s preference and the specific requirements of the mobility platform. 
- Access ports on the physical enclosure 220, such as an environmentally sealed USB-C port, are included to allow users to charge the batteries and perform other necessary functions. These ports may also be used to provide data connectivity, allowing users to connect the mobility safety system 104 to other transport platforms besides bicycles, such as e-scooters, hoverboards, or electric skateboards. In addition to the USB-C port, the enclosure 220 could also include other types of ports, such as Ethernet, HDMI, or audio jacks, which could be used for a range of applications, such as data transfer, video output, or audio input/output. 
- The enclosure 220 may further include other features, such as a status indicator light, which provides visual feedback on the device’s operational status, or a physical button, which could be used to trigger certain functions or settings. The enclosure 220 could also include additional protection features, such as shock absorption or waterproofing, to ensure the safety and durability of the device in different environments. 
2.2 Battery and Power System- The battery and battery monitoring system 216 provides the necessary power to drive the sensors 202, compute modules 218, and alert mechanisms, and ensures that the mobility safety system 104 is able to operate safely and effectively for extended periods of time. 
- The mobility safety system 104 is powered using a rechargeable battery, such as Lithium-Ion (Li-Ion) or Lithium Polymer (Li-Po) battery cells, which are located within the enclosure 220. These batteries offer high energy density, and can thus store more energy in a small form factor. 
- The mobility safety system 104 can be powered via an external bus voltage, such as supplied via a USB-C connector. This feature may be useful if the battery is depleted or if the user needs to charge the battery while the device is in use. 
- The battery and battery monitoring system 216 is designed to ensure the safety and reliability of the onboard battery system. The battery and battery monitoring system 216 further includes a battery management system (BMS), which is responsible for charging the battery, monitoring current and voltage levels, and preventing over- and under-voltage conditions as necessary. The BMS ensures that the battery is charged optimally and that its performance is maintained over time. The BMS constantly monitors the battery’s state of charge, temperature, and voltage, and adjusts the charging rate and voltage as necessary to ensure that the battery is charged safely and efficiently. In addition, the BMS can detect and prevent overcharging or undercharging of the battery, which can cause damage to the battery or reduce its performance over time. 
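- As a non-limiting illustration of the monitoring logic described above, the following Python sketch shows a simplified BMS-style check that derates or pauses charging near voltage and temperature limits; the limit values and function names are assumptions for illustration only.

```python
# Illustrative sketch only: simple charge-current selection against
# assumed per-cell voltage and temperature limits.
from dataclasses import dataclass

@dataclass
class BatteryLimits:
    v_max: float = 4.2        # volts per cell (assumed Li-Ion limit)
    v_min: float = 3.0
    t_max_c: float = 45.0
    charge_current_a: float = 2.0

def select_charge_current(voltage: float, temp_c: float, limits: BatteryLimits) -> float:
    """Return a safe charge current; 0.0 means charging is paused."""
    if voltage >= limits.v_max or temp_c >= limits.t_max_c:
        return 0.0                               # prevent over-voltage / over-temperature
    if voltage > 0.95 * limits.v_max:
        return 0.5 * limits.charge_current_a     # taper near full charge
    return limits.charge_current_a

def may_discharge(voltage: float, limits: BatteryLimits) -> bool:
    """True if the pack may continue supplying the load (under-voltage guard)."""
    return voltage > limits.v_min
```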
- The battery and battery monitoring system 216 can include additional functionality to enhance the performance and safety of the mobility safety system 104. For example, a battery pack enables the batteries to be easily replaceable, allowing users to swap out the battery pack when needed. This can be particularly useful if the device is being used for extended periods of time or if the battery is depleted and needs to be replaced quickly. 
- The battery and battery monitoring system 216 may further include battery level indicators, which provide users with real-time feedback on the battery’s state of charge, and low battery alerts, which notify users when the battery is running low and needs to be recharged or replaced. Additionally, the battery and battery monitoring system 216 may support fast charging or wireless charging, which can help to reduce the amount of time required to charge the mobility safety system 104 and increase its overall convenience and usability. 
2.3 Sensors- In some examples, the mobility safety system 104 uses one or more cameras 204, inertial measurement units (IMUs 206), and global positioning systems (GPSes 208) as a set of sensors. Further example configurations may include additional sets of sensors (e.g., multiple cameras) and different sensor modalities (e.g., the addition of range sensors such as LiDAR, radar, and ultrasonic sensors). 
Camera 204- Some examples of the mobility safety system 104 may include a single camera 204, while other examples may include multiple cameras 204, which may be incorporated into the mobility safety system 104 to improve the Field of View (FOV) or to allow an expanded operational domain. 
- Each of the cameras 204 may be a CMOS (Complementary Metal-Oxide-Semiconductor) camera. CMOS cameras offer high-speed, low-power operation and good image quality and sensitivity in low-light conditions. Another type of camera that may be used is a CCD (Charge-Coupled Device) camera. CCD cameras are known for their high image quality and low noise levels. 
- Operational parameters of the cameras 204 can also be customized based on the specific requirements of the mobility safety system 104. For example, the frame rate can be increased or decreased depending on the amount of visual data that needs to be captured. A high frame rate of 30-60 Hz may be used. The resolution of the cameras 204 can also be customized. A high-resolution camera with 1920 x 1080 or more pixels per video frame can be used to capture detailed visual data of the traffic environment. This can be useful in detecting and tracking small objects or features. In order to prevent motion blur when the mobility safety system 104 is in motion, a fast shutter speed of 5 to 15 msec can be used. This ensures that the visual data captured by the camera 204 is clear and usable for analysis. 
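- For illustration only, the following Python sketch captures the example camera parameter ranges described above as a configuration object; the CameraConfig type and its field names are assumptions rather than part of the described system.

```python
# Illustrative sketch: camera settings consistent with the ranges described
# above (30-60 Hz frame rate, 1080p or higher resolution, 5-15 ms exposure).
from dataclasses import dataclass

@dataclass
class CameraConfig:
    width: int = 1920
    height: int = 1080
    frame_rate_hz: float = 30.0
    exposure_ms: float = 10.0    # fast shutter to limit motion blur

    def validate(self) -> None:
        assert 30.0 <= self.frame_rate_hz <= 60.0, "frame rate outside described range"
        assert 5.0 <= self.exposure_ms <= 15.0, "exposure outside described range"
        # the exposure must also fit within one frame period
        assert self.exposure_ms <= 1000.0 / self.frame_rate_hz
```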
- A wide field of view lens can also be used to allow the cameras 204 to observe a large enough area around the mobility safety system 104 in order to capture and visually track multiple traffic participants. This can be useful in detecting and tracking multiple objects or participants in the environment. 
Inertial Measurement Unit (IMU 206)- The mobility safety system 104 uses several sensors to estimate the ego-motion of the mobility safety system 104 and an ego-vehicle (e.g., the bicycle 114) to which the mobility safety system 104 is attached. These sensors include a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. 
- The 3-axis accelerometer estimates the effect of accelerations on the mobility safety system 104, including gravity when the device is stationary, thus providing data on the orientation of the device. 
- The 3-axis gyroscope estimates the angular rates of the mobility safety system 104 in motion. These estimates may be used to determine the rotation of the mobility safety system 104 and the bicycle 114 to which it is mounted. The gyroscope can also be used to estimate the orientation of the mobility safety system 104 in the absence of other sensors. 
- The 3-axis magnetometer provides estimates of nearby magnetic fields, pointing to magnetic north when not in the presence of any other magnetic disturbances. This magnetometer thus provides data on the orientation of the device relative to the Earth’s magnetic field. 
- An alternative or supplemental accelerometer may be a piezoelectric accelerometer. This type of accelerometer uses piezoelectric materials to measure changes in acceleration. 
- An alternative or supplemental gyroscope is the fiber optic gyroscope. This type of gyroscope uses the interference of light beams in a fiber optic coil to measure angular velocity. 
- In addition to the magnetometer, other sensors can be used to provide magnetic field measurements. 
Global Positioning System (GPS 208)- To provide localization, the mobility safety system 104 uses a global positioning system. The mobility safety system 104 receives data from satellites orbiting the Earth, triangulates its position on Earth based upon this data, and provides this raw localization information to the software systems. 
2.4 Onboard Compute (Compute Modules 218)- Example mobility safety systems 104 use embedded systems to fully support compute requirements, consisting of an embedded computer 210, capable of running a full operating system and application layer, along with an AI accelerator chip 212, to rapidly process data using ML systems. 
Embedded Computer 210- An embedded computer 210, in some examples consisting of either a single-board computer (SBC) or a System-on-Module (SoM) attached to a carrier board, provides computational capabilities. An SBC may be a complete computer built on a single circuit board, with microprocessors, memory, input/output interfaces, and other components required for its operation. A SoM may include at least a microprocessor or microcontroller, Random Access Memory (RAM), flash memory, and input/output interfaces such as General-Purpose Input/Output (GPIO), Universal Asynchronous Receiver-Transmitter (UART), Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), Universal Serial Bus (USB), Ethernet, and Wireless Fidelity (Wi-Fi). A SoM may also include additional components such as sensors, audio and video codecs, power management ICs, and security features. A SoM may also be mounted on a carrier board, which provides additional hardware and interfaces. 
- An operating system, loaded with software to operate the mobility safety system 104, enables functionality such as described below. Interfaces between the embedded computer 210 and sensors 202 retrieve sensor data, and data storage components allow sensor data (and its derivatives) to be stored for future use. Onboard wireless communications, via Wi-Fi, Bluetooth, and LoRa, allow the computer to communicate with devices external to the mobility safety system 104. 
AI Accelerator Chip 212- In addition to the embedded computer 210, an Artificial Intelligence (AI) accelerator chip 212 is used to perform fast processing of data to make inferences using ML models. AI accelerator chips 212 are customized to perform mathematical operations and provide significant benefits in terms of computation per watt of supplied power, thus improving the overall power efficiency of the mobility safety system 104 when running off of battery power. 
- The embedded computer 210 and AI accelerator chip 212 of the mobility safety system 104 operate together to share the processing of data, with the embedded computer 210 handling the general-purpose computing tasks and the AI accelerator chip 212 handling specialized machine learning tasks. 
- The embedded computer 210 includes a Central Processing Unit (CPU), memory, and various input/output interfaces, such as Universal Serial Bus (USB), Ethernet, and General Purpose Input/Output (GPIO) pins. The embedded computer 210 is responsible for running the operating system and software stack for the mobility safety system 104. This software stack includes drivers for sensors 202 and other input/output devices, as well as algorithms for processing sensor data and making decisions based on that data. 
- The AI accelerator chip 212 is responsible for accelerating machine learning algorithms, and can perform certain operations faster and with less power consumption than the CPU in the embedded computer 210. The AI accelerator chip 212 may have multiple cores optimized for matrix multiplication and other mathematical operations used in machine learning, as well as specialized memory and interconnects to facilitate efficient data transfer. 
- In the mobility safety system 104, the embedded computer 210 and AI accelerator chip 212 can work together in a few different ways. In some examples, the embedded computer 210 preprocesses sensor data and prepares this data for input to the AI accelerator chip 212. This may include scaling and normalizing the data, or converting it to a different format that is more amenable to the machine learning algorithms running on the AI accelerator chip 212. 
- Once the data is prepared, it can be passed to the AI accelerator chip 212 for processing. The AI accelerator chip 212 then performs the machine learning computations required for tasks such as object detection and estimation, and returns the results to the embedded computer 210 for further processing or decision-making. 
- In some examples, the embedded computer 210 and AI accelerator chip 212 may operate in a pipelined fashion, where the embedded computer 210 handles some initial processing of the sensor data, and then passes it to the AI accelerator chip 212 for further processing. This approach can help to reduce the overall latency of the system, as the AI accelerator chip 212 can perform certain computations much faster than the CPU in the embedded computer 210. 
- The use of an embedded computer 210 and AI accelerator chip 212 together in the mobility safety system 104 allows for a flexible and efficient architecture that can balance general-purpose computing with specialized machine learning tasks, and may provide a high-performance, low-power solution for processing sensor data and making decisions in real time. 
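- As a non-limiting sketch of the pipelined split described above, the following Python example shows one thread preprocessing frames on the embedded computer while another submits prepared inputs to an accelerator runtime; the queue size and the run_on_accelerator callable are assumptions standing in for a vendor-specific inference API.

```python
# Illustrative sketch only: a two-stage pipeline where the CPU preprocesses
# frames and a second worker submits them to an accelerator for inference.
import queue
import threading

frame_queue = queue.Queue(maxsize=4)   # small buffer between the two stages

def preprocess(frame):
    # placeholder for scaling / normalizing the image for the model
    return frame

def cpu_stage(get_frame):
    while True:
        frame_queue.put(preprocess(get_frame()))

def accelerator_stage(run_on_accelerator, handle_result):
    while True:
        tensor = frame_queue.get()
        handle_result(run_on_accelerator(tensor))   # e.g., detections

def start_pipeline(get_frame, run_on_accelerator, handle_result):
    threading.Thread(target=cpu_stage, args=(get_frame,), daemon=True).start()
    threading.Thread(target=accelerator_stage,
                     args=(run_on_accelerator, handle_result), daemon=True).start()
```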
3. Alert Mechanisms- In some examples, the mobility safety system 104 provides customized alerts to traffic participants. These alerts include both audible and visual methods to seek the attention of nearby road users. 
- FIG.3 is a side-perspective view of the mobility safety system 104, showing the enclosure 220 having mounted and secured therein an audio generation device in the form of a speaker 302 and a light generation device in the form of a Light Emitting Diode (e.g., LED 304). The LED 304 provides both wide field of view coverage as well as a brighter, directed beam in a specific direction, and turns on and off with a desired pattern and frequency. Onboard software modulates the intensity, pattern, and frequency of light emitted from the LED 304 based upon specific conditions, for instance, shining brighter or changing more rapidly to raise alarm when a nearby traffic actor approaches. 
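- The light modulation described above might, for illustration, be expressed as a simple mapping from an estimated threat level to brightness and flash frequency, as in the following Python sketch; the thresholds and returned values are assumptions.

```python
# Illustrative sketch only: map a normalized risk value to LED brightness
# and flash frequency. Threshold values are assumptions for illustration.
def led_pattern_for_risk(risk: float) -> dict:
    """risk is a normalized value in [0, 1]."""
    if risk < 0.3:
        return {"brightness": 0.4, "flash_hz": 1.0}   # steady cruising pattern
    if risk < 0.7:
        return {"brightness": 0.8, "flash_hz": 4.0}   # attention-getting pattern
    return {"brightness": 1.0, "flash_hz": 8.0}       # urgent pattern, full output
```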
- Audible signals generated by the speaker 302 provide an additional ability to alert both the bicyclist and nearby actors in specific scenarios. A first mode of operation provides loud alerts (e.g., >= 80 dB at 1 meter of distance) that can be heard by both the bicyclist and other actors. A second mode of operation provides critical audio alerts (e.g., >= 100 dB at 1 meter of distance), reserved for the most dangerous scenarios and to ensure vehicle operators can hear the alerts from inside of a closed vehicle. Similar to the light-generating device, the audio-generating device may be directed in specific directions based on mount location. For a mobility safety system 104 with sensors 202 looking backward at approaching vehicles, the audio-generating device is similarly pointed backward. 
- In the second mode of operation, the audio-generating device in the mobility safety system 104 is able to alert drivers in closed vehicles by providing audio alerts that are louder and more attention-grabbing. In some examples, the audio-generating device may be a horn or a siren that is capable of generating high-decibel sounds that can be heard over the noise inside a closed vehicle. 
- For example, a horn may be mounted on the mobility safety system 104. The horn may generate a sound in a range of around 112-130 dB, which is loud enough to be heard inside a closed vehicle. The horn could be directed in a specific direction based on the mount location, such as backward to alert approaching vehicles. 
- In some examples, a siren may be used to generate a distinct, attention-grabbing sound. A siren can be louder than a horn, with a typical range of 120-140 dB. The siren may also be directed in a specific direction based on the mount location, such as backward to alert approaching vehicles. In addition to speakers, horns, and sirens, other audio-generating devices that may be used for this purpose include buzzers, beepers, and alarms. These devices can generate loud, attention-grabbing sounds that can alert drivers in closed vehicles. 
4. Software Architecture- A software architecture of the mobility safety system 104, according to some examples, is defined as a compute graph for purposes of processing sensor information and identifying scenarios, built on top of software infrastructure with underlying capabilities and interfaces to hardware. 
- FIG.4 is a compute graph representation of a software architecture of a mobility safety application 402, according to some examples, which executes on the mobility safety system 104. Each architecture component takes as input the output of a subset of upstream components in order to produce data streams for downstream components. FIG.4 illustrates several example algorithmic components, namely a localization and mapping component 404, a perception system 406, a prediction component 408, a risk estimation component 410, and an alert activation component 412. The localization and mapping component 404 and the perception system 406 are shown to receive raw sensor data 414 from the various sensors 202 described with reference to FIG.2. These application layer components are described below, followed by details on a Machine Learning approach and the overall software infrastructure upon which the mobility safety system 104 is built. 
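- For illustration, the following Python sketch shows one possible way to wire the compute graph described above, with each stage consuming the outputs of upstream stages; the component interfaces shown are assumptions, not the disclosed implementation.

```python
# Illustrative sketch only: one processing step through the compute graph,
# with each stage consuming upstream outputs and feeding downstream stages.
def run_pipeline_step(sensors, localization, perception, prediction, risk, alerts):
    raw = sensors.read()                          # raw sensor data
    pose, env_map = localization.update(raw)      # ego pose + map elements
    objects = perception.update(raw, pose, env_map)
    forecasts = prediction.update(raw, pose, objects)
    risk_estimate = risk.update(raw, pose, objects, forecasts)
    alerts.update(risk_estimate)                  # trigger alerts if warranted
    return risk_estimate
```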
4.1. Localization and Mapping- FIG.5 illustrates how the localization and mapping component 404 uses raw sensor data 414 to continuously refine estimates of the ego-motion of the user, as well as to use and refine a map of the traffic environment 102. The outputs of the localization and mapping component 404 include a fully refined estimate of the ego-vehicle position and orientation (and derivatives), and a set of map elements identifying nearby objects in the environment (e.g., the traffic environment 102). 
- The localization and mapping component 404 provides a localization capability to the mobility safety system 104 to estimate its location (e.g., position, orientation) with respect to a fixed or relative coordinate system in the traffic environment 102, including estimation of derivatives of these quantities (e.g., velocity, angular rates, as well as linear and angular accelerations). 
- The localization and mapping component 404 also provides a capability to the mobility safety system 104 to understand its environment (e.g., the traffic environment 102) well enough to develop and refine an estimate of other fixed objects in the world. Localization and mapping by the localization and mapping component 404 can be performed simultaneously, or can be built on top of each other. In some examples, the mobility safety system 104 has onboard maps, downloaded from a remote repository and cached in the storage mechanism on the compute modules 218, which can then be used to improve localization capabilities. 
- Localization, according to some examples, uses output from several components, examples of which include: 
- Sensors 202: The sensors 202 capture and output the raw sensor data 414, which may be stored within the memory of the compute modules 218 or elsewhere. The GPS 208 provides initialization for the localization process by outputting GPS data 502, while the IMU 206 provides IMU data 504, including real-time readings on how the mobility safety system 104 is moving over time, as measured by accelerometers and gyroscopes.
- Pose estimation system 506: Inertial measurements (e.g., linear accelerations, angular rates, and magnetometer readings) are fed to an iterative estimation method (e.g., a Kalman Filter or similar) to provide real-time estimates of the position and orientation of the mobility safety system 104, as well as derivatives of these quantities (a simplified sketch of such a filter is provided below).
- Mobility platform model component 508: In cases where the mobility safety system 104 is rigidly attached to a mobility platform (e.g., bicycle, scooter, or similar), a motion model of the mobility platform is further used as input to the pose estimation system 506. In some examples, as discussed below, the mobility platform is referred to as the ego-vehicle or a mobility device.
- Visual Odometry components 510: Odometry based upon exteroceptive sensors (e.g., cameras 204 or other exteroceptive sensors) can be further used as input to the pose estimation system 506.
- The localization and mapping component 404 of the mobility safety system 104 helps determine street surfaces and the surrounding environment, for example by providing estimates of ground surface geometry, road markings, and curb/sidewalk locations (described in more detail with respect to the perception system 406 with reference to FIG.6). These quantities are stored in an environmental map 512 and further used to improve localization. An existing map can be used to refine the output of the localization and mapping component 404, and localized positions can be used to refine the environmental map 512. 
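- The iterative estimation idea noted above for the pose estimation system 506 might be illustrated, under simplifying assumptions, by the following one-dimensional Kalman-style filter that predicts with IMU acceleration and corrects with GPS position; a real pose estimator would track full 3D position and orientation.

```python
# Illustrative sketch only: a 1D constant-velocity Kalman filter fusing
# IMU acceleration (predict) with GPS position (correct).
import numpy as np

class SimplePoseFilter:
    def __init__(self, gps_var=4.0, accel_var=0.5):
        self.x = np.zeros(2)              # state: [position, velocity]
        self.P = np.eye(2) * 10.0         # state covariance
        self.gps_var = gps_var            # assumed GPS measurement variance
        self.accel_var = accel_var        # assumed accelerometer noise variance

    def predict(self, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt * dt, dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + np.outer(B, B) * self.accel_var

    def update_gps(self, position):
        H = np.array([[1.0, 0.0]])
        y = position - H @ self.x                 # innovation
        S = H @ self.P @ H.T + self.gps_var
        K = (self.P @ H.T) / S                    # Kalman gain
        self.x = self.x + (K * y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```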
4.2. Perception- FIG.6 is a diagrammatic representation of operations and components of the perception system 406, according to some examples. At a high level, the perception system 406 takes raw sensor data 414, as well as localization and map data, to measure and estimate the motion of nearby objects and road users, as well as to estimate the nearby environment (e.g., empty street vs. blockage). 
- More specifically, the perception system 406 takes sensor data as input (along with possible usage of localization and map data) and attempts to estimate quantifiable properties of the world (e.g., the traffic environment 102). Examples of perception tasks include identifying the position, velocity, and geometry of other actors in a traffic environment 102, and estimating attributes of a traffic environment 102, such as identifying obstacles, estimating the ground surface shape, etc. 
- For the mobility safety system 104, operations of the perception system 406 may include: 
- Object detection 602: This includes identification of vehicles in a traffic environment 102, using camera data to estimate the position, orientation, and geometry of objects (e.g., actors such as vehicles, pedestrians, animals, etc., or other objects) within a field of view of the camera 204 in a traffic environment 102.
- Orientation estimation 604: Once an actor is detected, the estimate of the orientation is further refined through a comparison of object detections against the ego-vehicle IMU 206 and GPS 208.
- Object tracking 606: By performing detection and estimation of an actor’s position and orientation across multiple frames of input sensor data, the perception system 406 can track actors in a traffic environment 102 over time, further estimating quantities such as object motion and instantaneous velocity.
- Ground surface estimation 608: By analyzing the streetscape and scene of a traffic environment 102 surrounding the mobility safety system 104, using the raw sensor data 414 and stored environmental map 512, the perception system 406 provides estimates of the geometry of a road 106, identifying curbs, road paint, street signs, and more, in real-time or near real-time.
- Range estimation 610: The mobility safety system 104 analyzes ground surface results and further uses raw sensor data 414 to estimate distances to actors and/or objects in a traffic environment 102, identifying potential obstacles and also noting free space.
- The output of the perception system 406 is object estimate data 612 that includes an estimate of objects in the traffic environment 102 (including, for example, position, orientation, and short-term motion) as well as an estimate of free space in the traffic environment 102. 
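- For illustration, the object estimate data 612 described above might be represented by data structures such as the following Python sketch; the field names and the finite-difference velocity helper are assumptions for illustration only.

```python
# Illustrative sketch only: containers for per-object estimates and free
# space, plus a finite-difference velocity estimate from tracked positions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectEstimate:
    track_id: int
    position_m: Tuple[float, float]      # in the ego frame
    heading_rad: float
    velocity_mps: Tuple[float, float]
    extent_m: Tuple[float, float]        # approximate length, width

@dataclass
class PerceptionOutput:
    objects: List[ObjectEstimate] = field(default_factory=list)
    free_space_polygon: List[Tuple[float, float]] = field(default_factory=list)

def estimate_velocity(prev_pos, curr_pos, dt):
    """Velocity estimate by differencing two tracked positions across frames."""
    return ((curr_pos[0] - prev_pos[0]) / dt, (curr_pos[1] - prev_pos[1]) / dt)
```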
- For perception values, there is generally a known value of truth, e.g., the exact position of an object at a given time or the exact location of the double yellow painted lines down the middle of the street. 
4.3. Prediction- FIG.7 is a diagrammatic representation of operations and components of the prediction component 408, according to some examples. The prediction component 408 produces estimates of future motion for relevant actors in the scene, including actor-actor interactions (e.g., interactions with the ego-vehicle) as well as interactions with map elements, such as following a curving lane, avoiding an obstacle, etc. 
- As shown in FIG.7, the prediction component 408 takes as input raw sensor data 414, localization estimates from the localization and mapping component 404, and perception outputs from the perception system 406 (or any subset thereof) and attempts to forecast the future motion of other actors and objects in a traffic environment 102. A prediction component 408 may process current and past motion states of other actors in various traffic environments 102 in order to predict the future motion that an actor is most likely to take. There exist multiple technical challenges with prediction operations of the prediction component 408, namely that wrong answers can still be correct (e.g., the actor zigged, but it was also a perfectly good possibility that they would zag) and that an actor’s future motion can likely be altered by the future actions of other actors, including the safety system users themselves. 
- The prediction component 408 may use perception data as input, along with localization and mapping data; however, some models may operate to infer prediction results directly from the raw sensor data 414, using multiple observations of a scene in order to directly predict possible future outcomes. 
- Example components of the prediction component 408 include: 
- Actor Motion Prediction component 702: This system uses perception object estimates, sensor data, and map elements to predict how actors will move over time, through the world.
- Actor-Ego Interaction Prediction component 704: This system analyzes how actors interact with each other as well as interact with the ego-vehicle (e.g., slowing down, speeding up, steering left or right to avoid one another) in order to predict likely outcomes.
- Actor-Map Interaction Prediction component 706: This component predicts how actors move relative to fixed infrastructure, such as following lanes, obeying traffic rules, etc.
- The output of the prediction component 408 consists of enumerated future motion possibilities for other actors in the scene, in particular focusing on safety-relevant scenarios. For a vehicle approaching the ego-vehicle from behind, this could include noting that the approaching vehicle may choose to pass a slow-moving user of the mobility safety system 104, may decelerate to continue following the user, or may approach aggressively at speed. All three of these are likely different scenario outcomes. 
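- As a non-limiting illustration of enumerating future motion possibilities, the following Python sketch rolls out a small set of candidate maneuvers (pass, follow, approach aggressively) for a vehicle approaching from behind; the maneuver set, acceleration values, and horizon are assumptions.

```python
# Illustrative sketch only: constant-acceleration rollouts of a few labeled
# candidate maneuvers along the ego's travel axis (1D for simplicity).
def rollout(pos, vel, accel, horizon_s=3.0, dt=0.5):
    traj, p, v, t = [], pos, vel, 0.0
    while t < horizon_s:
        v = v + accel * dt
        p = p + v * dt
        traj.append((t + dt, p))
        t += dt
    return traj

def predict_candidates(rel_pos_m, closing_speed_mps):
    """Return labeled candidate trajectories for an approaching vehicle."""
    return {
        "pass":       rollout(rel_pos_m, closing_speed_mps, accel=+0.5),
        "follow":     rollout(rel_pos_m, closing_speed_mps, accel=-1.5),
        "aggressive": rollout(rel_pos_m, closing_speed_mps, accel=+2.0),
    }
```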
4.4. Risk Estimation- FIG.8 is a diagrammatic representation of operations and components of the risk estimation component 410, according to some examples. The risk estimation component 410 is responsible for estimating the overall risk of an accident that may occur in the future, namely estimating the probability of a potential collision along with the estimated severity of such a collision. 
- The risk estimation component 410 deals with the challenge of assigning probabilities to future outcomes of the world, particularly focusing on scenarios that carry a risk of harm or bodily injury to a user of the mobility safety system 104 or other traffic participants, as well as damage to property or goods. Risk estimation systems take as input raw sensor data 414, localization estimates from the localization and mapping component 404, perception outputs from the perception system 406, and prediction outputs from the prediction component 408 (or any subset thereof) in order to estimate the probability and risk of certain scenarios and events. 
- Example components of the risk estimation component 410 include: 
- Probability of Collision Estimator 802: This component analyzes various scenarios and estimates the probability of a potential collision between actors in the world, including collisions between an actor and the ego-vehicle, e.g., the user of the mobility safety system 104.
- Collision Severity component 804: For any two actors colliding, the relative velocities, object shapes, actor types, and more may be determined, including the potential severity of a collision.
- The output of the risk estimation component 410 includes an overall estimate of risk, particularly focused on the user of the mobility safety system 104. The risk estimation component 410 predicts this risk over a time horizon into the future, allowing the mobility safety system 104 to predict the probabilities of accidents before they occur. 
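- For illustration, the combination of collision probability and collision severity into an overall risk value over a set of predicted maneuvers might be sketched as follows in Python; the worst-case weighting is an assumption, not the disclosed method.

```python
# Illustrative sketch only: combine per-maneuver collision probability and
# severity estimates into a single scalar risk value.
def estimate_risk(candidate_trajectories, collision_probability, collision_severity):
    """
    candidate_trajectories: iterable of (label, trajectory) pairs
    collision_probability(trajectory) -> float in [0, 1]
    collision_severity(trajectory)    -> float in [0, 1]
    """
    risk = 0.0
    for label, traj in candidate_trajectories:
        p = collision_probability(traj)
        s = collision_severity(traj)
        risk = max(risk, p * s)   # worst-case expected harm across maneuvers
    return risk
```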
- The risk estimation component 410 may use perception and prediction data as input; however, some models can infer probabilistic risk results directly from raw sensor data 414, using multiple observations of a scene in order to directly predict the probability of a dangerous road scenario. 
4.5. Alert Activation Component 412- A further component of the compute graph consists of software control of the alert activation component 412 described above. In such cases, various aspects of the system (e.g., the overall risk level) are used to determine whether alerts should be activated. For example, as a car approaches dangerously, the alerts may trigger when the mobility safety system 104 determines that there is a potential hazard to a user of the mobility safety system 104. 
- In some examples, the alert activation component 412 is responsible for determining whether alerts should be triggered based on the overall risk level of a situation within the traffic environment 102. This component takes input from the risk estimation component 410 and other components of the compute graph to determine whether an alert should be activated. The purpose of the alerts is to notify the user of the mobility safety system 104 of potential hazards in the environment, allowing them to take appropriate action to avoid accidents. 
- The alert activation component 412 may use various methods to determine when an alert should be triggered. For example, it may use a threshold-based approach, where a predetermined risk level triggers an alert. Alternatively, it may use a probabilistic approach, where the probability of a potential collision is calculated, and an alert is triggered when the probability exceeds a certain threshold. 
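- The threshold-based triggering described above might be illustrated by the following Python sketch; the two threshold values and alert names are assumptions chosen for illustration.

```python
# Illustrative sketch only: two-level, threshold-based alert selection.
MILD_THRESHOLD = 0.3
SEVERE_THRESHOLD = 0.7

def select_alert(risk: float) -> str:
    """Map a normalized risk value to an alert level."""
    if risk >= SEVERE_THRESHOLD:
        return "critical"    # e.g., loud siren plus rapid flashing
    if risk >= MILD_THRESHOLD:
        return "warning"     # e.g., tone plus brighter light pattern
    return "none"
```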
- The type of alert triggered may depend on the severity of the risk. For example, a mild risk may trigger a first level of a visual or audio alert, while a more severe risk may trigger a second, higher level of alert, with greater visual or audible magnitude. In some examples, where the mobility safety system 104 is coupled to the mobility platform control systems or wearable devices of an operator of the mobility platform, physical alerts may also be generated, such as braking or swerving the mobility platform to avoid a potential collision. 
- The alert activation component 412 may also take into account other factors, such as the user’s preferences, the environmental conditions, and the traffic laws in the area. For example, the user may have specified a preference for a specific type of alert, or the traffic laws may prohibit certain types of alerts in certain situations. 
4.6. Machine Learning Systems- For multiple components, including the localization and mapping component 404, the perception system 406, the prediction component 408, the risk estimation component 410, and the alert activation component 412, the mobility safety system 104 uses machine learning systems. 
- In some examples, a dataset consisting of the desired inputs and outputs of each component is used to train a model to predict outputs based upon new, unseen input data for each of these components. In other words, in some examples, at least some of the multiple components include dedicated trained models, each of which may be subject to ongoing training as described below. 
- In some examples, a unified or partially unified trained model may be used across multiple of the components, as opposed to having dedicated trained models for each of the components. As such, the mobility safety system 104 manages the inputs and outputs of each component in the compute graph, such that, when paired with supervisory labels, or other supervision techniques, the onboard machine learning models are improved to handle new scenarios or to react to previous scenarios in an improved manner. 
- For example, in the case of training of a risk estimation component 410, sensor data, actor positions and velocities, and future actor motion may be used to label a correct risk value for each scenario, distinguishing road events within traffic environments 102 that carried higher risk from those with low risk, for use in training models. 
Supervised Labels- Supervised labels may be used for training machine learning systems for the mobility safety system 104. In a supervised learning approach, components of the mobility safety system 104 are trained on a labeled dataset, where each input is associated with a desired output. These labeled datasets are created through the collection of real-world sensor data in various scenarios and environments. 
- For example, in the case of the localization and mapping component 404, the desired output is an accurate measurement of the ego-vehicle motion over time, which is used for accurate localization and mapping. The labeled dataset for the localization and mapping component 404 includes sensor data and corresponding accurate pose, position, and orientation information for the ego-vehicle. 
- Similarly, for the perception system 406, the labeled dataset includes accurately identified object locations for each set of sensor readings. Training for the prediction component 408 uses a labeled dataset of future object locations over time. For training of the risk estimation component 410, the labeled dataset includes highlighted regions of time with a risk of harm or bodily injury to a user of the mobility safety system 104 or other traffic participants or actors. 
- During the training process, a machine learning system learns to generalize patterns in the labeled dataset, allowing it to predict the desired output for new, unseen input data. The model may be trained iteratively, updating the model’s parameters to minimize the difference between its predictions and the true output values in the labeled dataset. 
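- As a non-limiting illustration of iteratively minimizing the difference between predictions and labels, the following Python sketch fits a simple linear model by gradient descent; actual components would use far richer models and training procedures.

```python
# Illustrative sketch only: iterative fitting of a linear model by gradient
# descent on the mean squared error between predictions and labels.
import numpy as np

def train_linear_model(features, labels, lr=0.01, epochs=200):
    """features: (N, D) array; labels: (N,) array of desired outputs."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        preds = features @ w + b
        error = preds - labels
        w -= lr * (features.T @ error) / len(labels)   # gradient step on weights
        b -= lr * error.mean()                         # gradient step on bias
    return w, b
```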
Temporal Association of Events- In order to handle time-varying elements in the prediction component 408 or risk estimation component 410 of the mobility safety system 104, supervised labels may include information about future events. The supervised labels for the risk estimation component 410 can indicate the measured risk of the mobility safety system 104 at a given time, but predicting future risk requires considering the temporal association between events at a given time and the level of risk experienced at a future time. When used as part of the mobility safety system 104, the prediction component 408 predicts future risk, both in terms of magnitude as well as the time horizon in which future risk may occur. 
- Some examples may handle temporal association of events by using time series analysis techniques. Time series analysis involves analyzing data collected over time, identifying patterns, trends, and changes in the data. In the context of the mobility safety system 104, time series analysis can be used to identify patterns in the sensor data over time, and to predict future risk based on these patterns. 
- In some examples, temporal association of events in the mobility safety system 104 can be addressed using various types of neural networks. Recurrent Neural Networks (RNNs) can be used to process sequential data, such as time series data, by retaining information from previous time steps to inform the current prediction. RNNs can be trained on a sequence of data to predict future events, making them a suitable solution for predicting the motion of actors in the traffic environment. 
- Another neural network architecture that can be utilized is the Convolutional Neural Network (CNN), which can be trained to recognize patterns in image data. For example, a CNN can be trained to recognize pedestrians or vehicles in camera data. CNNs may be used in combination with other neural network architectures, such as RNNs, to process different types of sensor data and generate predictions. 
- In addition to RNNs and CNNs, Spatial Temporal Networks (STMs) can be used for predicting the future motion of actors in the traffic environment 102. STMs may handle both spatial and temporal information by using a 3D convolutional network to model the spatial and temporal relationships in the data. This architecture can be used to generate predictions of object trajectories and velocities over time. 
- Transformers are another neural network architecture that can be used for sequence modeling and prediction. Transformers use a self-attention mechanism to process sequential data, allowing the network to attend to different parts of the sequence and generate predictions. Transformers may be applied to the mobility safety system 104 to process sequential sensor data and generate predictions of future events. 
- To implement the risk estimation component 410, the mobility safety system 104 can use a supervised learning approach where sensor data is annotated with risk levels at each time step to generate labeled data. This labeled data can be used to train a time series model or an RNN to predict future risk levels based on past sensor data. The input data may be preprocessed to extract features that are relevant to the risk estimation task, such as the velocity and orientation of objects in the scene, distances between objects, and their relative motion. 
- Convolutional neural networks (CNNs) may be used to extract features from raw sensor data, such as images or lidar point clouds. CNNs may use filters to learn features such as edges, corners, and shapes, and can be trained to identify objects and their attributes, such as size and position. The output of a CNN can be fed into an RNN to capture the temporal dependencies between frames of data and predict future risk levels. 
- Spatial-temporal models (STMs) can also be used to capture both spatial and temporal dependencies in sensor data. STMs may use 3D convolutions to learn spatial features from volumetric data, such as a sequence of lidar point clouds, and use RNNs to capture temporal dependencies between frames. STMs can be trained end-to-end to predict future risk levels from raw sensor data. 
- Transformers are another type of neural network that can capture temporal dependencies in sequential data. Transformers use attention mechanisms to focus on different parts of the input sequence and can be trained on large amounts of data to predict future risk levels. Transformers have been shown to be effective in natural language processing tasks, but can also be applied to sequential sensor data. 
- By considering the temporal association of events, the mobility safety system 104 can provide accurate and reliable risk estimations, leading to better safety outcomes for users of the system. The trained model can be used in real time to estimate the risk of potential collisions and activate alerts if necessary to avoid accidents. 
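- For illustration only, a recurrent risk predictor of the kind discussed above might be sketched as follows using PyTorch; the network sizes, the GRU choice, and the training loss shown in the comment are assumptions rather than the disclosed design.

```python
# Illustrative sketch only: a small GRU consumes a window of per-frame
# feature vectors and a linear head outputs a predicted future risk level.
import torch
import torch.nn as nn

class RiskRNN(nn.Module):
    def __init__(self, feature_dim=16, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                          # x: (batch, time, feature_dim)
        _, h_n = self.rnn(x)                       # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_n[-1]))   # risk in [0, 1]

# Hypothetical training step against labeled risk values:
# loss = nn.functional.binary_cross_entropy(model(batch_x), batch_y)
```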
Training ML Systems Based Upon Future Risk- For training of the machine learning systems for use with the mobility safety system 104, temporal association may be used to identify times at which risk increases in the future, thus helping the machine learning systems identify scenarios where current events predict future high-risk states. 
Unsupervised and Semi-Supervised Learning- Unsupervised and semi-supervised learning are techniques that, in some examples, can be employed to train the machine learning systems of the mobility safety system 104. In contrast to supervised learning, which relies on labeled datasets, unsupervised learning can be used to identify patterns and structures in unlabeled data. For example, clustering algorithms can be used to group sensor data into clusters based on the data’s similarity. These clusters can then be used as the basis for training machine learning models. 
- Semi-supervised learning is an approach that can be used to train machine learning models when only a relatively small amount of labeled data is available. This method combines both labeled and unlabeled data to improve the accuracy of the machine learning models. In the context of the mobility safety system 104, this involves using some labeled data, such as data generated during supervised training, along with large amounts of unlabeled data collected during normal operation of multiple mobility safety systems 104 and other sensors within traffic environments 102. 
- By using unsupervised and semi-supervised learning, it is possible to expand the amount of data available to train the machine learning systems, resulting in more robust and accurate predictions. This additional data can be used to refine the localization and mapping component 404, the perception system 406, the prediction component 408, and the risk estimation component 410, improving performance and ensuring the safety of users of the mobility safety system 104. 
Unified Model- The mobility safety system 104 is composed of several individual machine learning systems, each with a specific input and output, such as the localization and mapping component 404, the perception system 406, the prediction component 408, and the risk estimation component 410. In some examples, at least some of these individual trained models may be combined to produce unified models with fewer intermediary steps, which can reduce computation and model sizes while maintaining equivalent output quality. 
- To create a unified model, individual sequential models are replaced by a single, larger model that encompasses the inputs and outputs of the sum total sequential system. 
- Using a unified model may require additional training data to cover the expanded input-output space. This additional training data can be obtained using semi-supervised or unsupervised learning techniques described above, which can produce large datasets of example events to learn from. These techniques can also help improve the overall accuracy and robustness of the unified model. 
AI Accelerator Chip 212- When performing machine learning inference on the mobility safety system 104, ML systems may use the AI accelerator chip 212 previously described herein to perform fast calculations as part of the compute graph of the application layer. 
4.7. Software Infrastructure- As a basis for the application layer of the mobility safety system 104, the mobility safety system 104 includes an operating system (e.g., operating system 1202), including device drivers (e.g., drivers 1204) for sensors (e.g., sensors 202) as well as the audio and visual alert mechanisms 214. In addition, interfaces are provided to send sensor data and other derived quantities to the onboard data storage (e.g., memory 1206). 
- The software infrastructure supports additional modalities for which the mobility safety system 104 is useful. For example, by recording video directly to the data storage on the embedded computer of the mobility safety system 104, the mobility safety system 104 provides functionality focused on capturing road events of interest to a bicyclist. 
5. User Interactions- A user can interact with the mobility safety system 104 in a variety of ways. 
5.1. Mounting theMobility Safety System104- Bracketry may be used to locate themobility safety system104 at a point on a bicycle where it can see the roadway in a chosen direction from the bicycle. A quick release feature allows themobility safety system104 to be easily removed from the mounting bracket for battery charging and other purposes. 
5.2. Enabling and Disabling theMobility Safety System104- Themobility safety system104 may be powered on via a single button, pressed by the user to turn themobility safety system104 on and off. When stationary, themobility safety system104 can use itssensors202 to determine its nearby environment, entering a power-saving mode when sensor processing is not required. 
- Furthermore, the user is able to set specific geolocations as privacy locations, for example a home or office. When at these locations, themobility safety system104 disables camera functionality, in order to preserve privacy as desired by a user. In addition, simple obscurants such as a lens cap can be used. 
- When detected to be in motion or in an appropriate environment, such as on a street, the mobility safety system 104 begins monitoring its surroundings for hazards, thus enabling the various features of its software architecture. 
5.3. Alerts and Warnings- When a potential hazard is detected, the mobility safety system 104 provides alerts in various ways. For hazards that are mere inconveniences or rare possibilities, the mobility safety system 104 changes the blinking pattern of its lights and emits a loud warning noise via an onboard speaker. This noise warns the user of the mobility safety system 104, as well as any nearby traffic participants, of the potential hazard. 
- For hazards that require immediate attention, for example a threat of a collision within the next few seconds, the mobility safety system 104 enables a much louder alarm, thus ensuring the alert is received by occupants of nearby vehicles with closed doors and windows. 
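- The two-tier alerting behavior described above may be summarized by the following Python sketch; the numeric thresholds and the lights/speaker interface are illustrative assumptions rather than actual parameters of the mobility safety system 104. 

    # Illustrative thresholds only; real values would be tuned per deployment.
    NUISANCE_THRESHOLD = 0.2   # inconveniences or rare possibilities
    IMMINENT_THRESHOLD = 0.8   # e.g., a predicted collision within a few seconds

    def issue_alerts(risk_score, lights, speaker):
        """Map an estimated risk score in [0, 1] to the tiered alert behavior."""
        if risk_score >= IMMINENT_THRESHOLD:
            speaker.play(sound="alarm", volume="maximum")  # audible inside closed vehicles
            lights.set_pattern("strobe")
        elif risk_score >= NUISANCE_THRESHOLD:
            speaker.play(sound="warning", volume="loud")   # warns rider and nearby actors
            lights.set_pattern("rapid_blink")
        else:
            lights.set_pattern("steady")                   # normal riding state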
5.4. Interaction via External Device- A user can use an external device, such as a smartphone or computer, with or as part of the mobility safety system 104 through either wireless communication (such as Bluetooth or ANT+) or via the USB-C port located on the mobility safety system 104. 
- When used in this manner, the resulting output data from an autonomy system of the mobility safety system 104 may be transmitted to the external device. Example usages include the following (an illustrative sketch follows this list): 
- A device displaying real-time images and raw sensor data 414 for a user to view.
- A device displaying computer-generated visualizations of nearby objects and hazards, for the user to view.
- An external method of providing alarms to the user, visually showing the user the source of a hazard through the generated display.
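- As one hedged sketch of how such output data might be packaged for an external device, the Python fragment below serializes a single autonomy-system frame to JSON; the field names and the choice of JSON over a Bluetooth or USB-C link are assumptions made only for illustration. 

    import json

    def encode_frame_for_display(pose, tracked_objects, risk_score):
        """Serialize one output frame for display on an external device."""
        frame = {
            "pose": {"x": pose[0], "y": pose[1], "heading": pose[2]},
            "objects": [
                {"id": obj["id"], "type": obj["type"], "range_m": obj["range_m"]}
                for obj in tracked_objects
            ],
            "risk": risk_score,
        }
        return json.dumps(frame).encode("utf-8")  # bytes suitable for a wireless or USB transport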
- FIG.9 is a flowchart illustrating a method 900, according to some examples, of operating a mobility safety system 104 to provide alerts to actors within a traffic environment 102. 
- At block 902, the mobility safety system 104 captures real-time data from its sensors 202 (e.g., IMU, camera) relating to conditions within a traffic environment 102. The sensor data is fed into the onboard compute modules 218, which consist of an embedded computer 210 and an AI accelerator chip 212. 
- At block 1000, the mobility safety system 104 uses at least one trained model to generate a risk estimation related to a mobility platform within the traffic environment 102. This may include using machine learning systems for various components of the mobility safety system 104, including the localization and mapping component 404, the perception system 406, the prediction component 408, and the risk estimation component 410. The trained models may be developed based on supervised, semi-supervised or unsupervised learning methods, and take into account the temporal association of events. Further details regarding example operations that may be performed at block 1000 are described below with reference to FIG.10. 
- In block 904, the mobility safety system 104 generates a first alert based on the risk estimation to notify the operator of the mobility platform of any potential hazards or risky situations. This alert may be in the form of an audible or visual signal, such as an alarm sound or flashing lights on the mobility platform. The alert may also include haptic feedback, such as vibrations or force feedback, to provide the operator with a physical warning. 
- The generation of the first alert is performed by the alert mechanisms 214 of the mobility safety system 104, which receive input from the risk estimation component 410. The risk estimation component 410 uses machine learning techniques to estimate the level of risk based on real-time data captured from the sensors 202. Once the level of risk exceeds a certain threshold, for example, the alert mechanism 214 is triggered to generate the first alert directed at the operator of the mobility platform. 
- In block 906, the mobility safety system 104 generates a second alert directed at an operator of a further actor within the traffic environment 102. This alert may be used to notify nearby vehicles, pedestrians, or other traffic actors of the presence and position of the mobility platform, and to warn them of any potential hazards. The alert may also be used to signal to other traffic actors the intended movements of the mobility platform, such as when turning the mobility platform or changing lanes. 
- The generation of the second alert or alerts is performed by the alert mechanisms 214 of the mobility safety system 104. These alerts may be visual and/or audible alerts, as detailed above. 
- The alert mechanisms 214 may also enable mobility safety systems 104 to communicate with other devices in the traffic environment 102, such as nearby vehicles or pedestrians, using onboard wireless communication via Wi-Fi, Bluetooth, or LoRa. The alert mechanisms 214 receive input from the risk estimation component 410, which estimates the level of risk and identifies any potential hazards in the traffic environment. Based on this information, the alert mechanism 214 generates the second alert(s) to warn other traffic actors of the presence and position of the mobility platform and to promote safe interaction between the mobility platform and other actors in the traffic environment 102. 
- At block 908, the mobility safety system 104 selectively records real-time data related to traffic conditions in the traffic environment 102 based on the risk estimation generated in block 1000. The mobility safety system 104 may use one or more data storage components such as hard disk drives, solid-state drives, or flash memory cards to store the recorded data. These data storage components may be connected to the embedded computer 210 via input/output interfaces such as Universal Serial Bus (USB), Ethernet, or General Purpose Input/Output (GPIO) pins. 
- The recorded data may include sensor data collected by the sensors 202, as well as any other data relevant to the traffic environment 102, such as weather conditions or traffic patterns. The data may be recorded in a compressed or uncompressed format, depending on the available storage capacity and processing capabilities of the mobility safety system 104. 
- Selective recording allows the mobility safety system 104 to conserve storage space and processing resources by recording only the data that is most relevant to the risk estimation generated in block 1000. For example, if the risk estimation indicates a high probability of a collision with a pedestrian, the mobility safety system 104 may selectively record only the sensor data related to the pedestrian’s location and movement, rather than recording all sensor data collected by the mobility safety system 104. 
- The recorded data may be used for offline analysis, such as training machine learning models or identifying areas for improvement in the mobility safety system 104. Additionally, the data may be used for forensic analysis in the event of an accident or incident involving the mobility safety system 104. 
- Selective recording of real-time data related to traffic conditions allows the mobility safety system 104 to optimize its use of storage and processing resources while still capturing the most relevant data for improving safety in the traffic environment 102. 
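- A minimal Python sketch of risk-gated selective recording is shown below; the buffer length, threshold value, and writer interface are assumptions for illustration and do not reflect the actual recording logic of the mobility safety system 104. 

    from collections import deque

    PRE_EVENT_FRAMES = 120     # recent frames retained in memory for context
    RECORD_THRESHOLD = 0.5     # assumed risk level above which frames are persisted

    ring_buffer = deque(maxlen=PRE_EVENT_FRAMES)

    def on_new_frame(frame, risk_score, writer):
        """Keep recent frames in memory; persist them only when the risk warrants it."""
        ring_buffer.append(frame)
        if risk_score >= RECORD_THRESHOLD:
            while ring_buffer:                      # flush the context leading up to the event
                writer.write(ring_buffer.popleft())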
- At block 910, the mobility safety system 104 selectively performs computations based on the risk estimation generated in block 1000. These computations may include a wide range of actions to help mitigate the risk and ensure the safety of the operator of the mobility platform, as well as other actors within the traffic environment 102. 
- In some examples, based on the risk estimation produced by the risk estimation component 410, the mobility safety system 104 can selectively perform computations by activating or deactivating specific components. For example, if the risk estimation indicates a high likelihood of collision, the mobility safety system 104 can activate the alert activation component 412 to generate a warning for the operator of the mobility platform or other actors in the traffic environment. 
- Similarly, if the risk estimation indicates that the mobility platform needs to adjust its behavior, such as reducing speed or adjusting course, the mobility safety system 104 can selectively activate the appropriate component or components to achieve the desired behavior. For instance, the prediction component 408 can be activated to estimate the future motion of the mobility device and other traffic participants, and the localization and mapping component 404 can be activated to update the map and location of the mobility platform in real-time. Based on this information, the mobility safety system 104 can then selectively activate the appropriate actuators (not shown in FIG.4) to adjust the behavior of the mobility platform accordingly. 
- Another computation that may be performed is generating further alerts to be directed at the operator of the mobility platform or other actors within the traffic environment 102. For example, if the risk estimation indicates that a pedestrian is crossing the path of the mobility platform, the mobility safety system 104 may generate an additional alert to warn the operator to take evasive action. These alerts may be generated via onboard audio or visual displays, or through external communication channels such as Wi-Fi, Bluetooth, or LoRa. 
- In addition to adjusting the behavior of the mobility platform and generating alerts, the mobility safety system 104 may provide feedback to the operator of the device via a user interface, for example on a mobile device or bike computer. For example, the user interface may display information about the current risk level within the traffic environment 102, as well as suggestions for actions to take to reduce that risk. This feedback may assist in ensuring that the operator of the mobility device is aware of potential hazards and can take appropriate actions to avoid them. 
- These computations may be selectively performed based on the risk estimation generated in block 1000, which takes into account the real-time data captured by the sensors 202 and processed by the various machine learning systems onboard the mobility safety system 104. The result is a highly responsive and adaptive system that can help ensure the safety of the operator of the mobility platform as well as other actors within the traffic environment 102. By selectively activating and deactivating specific components based on the risk estimation, the mobility safety system 104 can optimize its computational resources, reduce latency, and minimize power consumption. This contributes to the efficiency of the mobility safety system 104 without compromising the safety of the mobility device and its operator, or of other actors in the traffic environment. 
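- A simple dispatcher of the following form conveys the idea of risk-based selective computation; the component interface and the particular threshold scheme are assumptions for illustration only. 

    def schedule_computation(risk_score, components):
        """Enable or disable downstream work according to the current risk estimate."""
        # Low risk: run perception at a reduced rate to save power.
        components["perception"].set_rate_hz(5 if risk_score < 0.2 else 30)

        # Elevated risk: bring prediction and alerting fully online.
        components["prediction"].set_enabled(risk_score >= 0.2)
        components["alert_activation"].set_enabled(risk_score >= 0.5)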
- Overall, the flowchart of FIG.9 describes the operation of the mobility safety system 104 in real-time, using a combination of sensor data, machine learning models, and alert mechanisms to keep operators of mobility platforms safe while navigating traffic environments. 
- FIG.10 is a flowchart illustrating the operations that may be performed at block 1000 by the mobility safety system 104 in order to generate a risk estimate, according to some examples. 
- In block 1002, the localization and mapping component 404 performs a localization and mapping operation using the real-time data to generate localization and mapping data. In some examples, the localization and mapping component 404 of the mobility safety system 104 operates to estimate the position and orientation of the mobility platform within the traffic environment 102, and to generate a data picture of the traffic environment 102 that identifies nearby objects. The localization and mapping component 404 continuously refines the estimates of the ego-motion of the user, using raw sensor data 414, such as GPS data 502 from the GPS 208 and IMU data 504 from the IMU 206, which provide real-time readings on how the mobility safety system 104 is moving over time, as measured by accelerometers and gyroscopes. 
- The localization and mapping component 404 may use an iterative estimation method, such as a Kalman Filter, to provide real-time estimates of the position and orientation of the mobility safety system 104, as well as derivatives of these quantities, including velocity, angular rates, and linear and angular accelerations. In cases where the mobility safety system 104 is rigidly attached to a mobility platform (e.g., bicycle, scooter, or similar), a motion model of the mobility platform is further used as input to the pose estimation system 506 through the mobility platform model component 508. 
- In addition to the sensors and the pose estimation system, the localization and mapping component 404 also uses odometry based on exteroceptive sensors (e.g., cameras 204) through the visual odometry components 510 as input to the pose estimation system 506. 
- The environmental map 512 is continuously updated with data about the surrounding environment, such as ground surface geometry, road markings, and curb/sidewalk locations, to improve localization capabilities. An existing map can be used to refine the localization and mapping component 404, while localized positions can be used to refine the environmental map 512. 
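- A minimal constant-velocity Kalman filter, of the general kind the pose estimation system 506 might use to fuse position fixes with a motion model, is sketched below in Python; the two-dimensional state layout and the noise values are assumptions for illustration and are not the actual filter of the mobility safety system 104. 

    import numpy as np

    # State: [x, y, vx, vy]; measurement: an [x, y] position fix (e.g., from GPS 208).
    dt = 0.1
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])         # constant-velocity motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]])         # only position is observed
    Q = np.eye(4) * 0.01                 # assumed process noise
    R = np.eye(2) * 2.0                  # assumed measurement noise

    def kf_step(x, P, z):
        """One predict/update cycle: x is the state, P its covariance, z a position fix."""
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y                    # update
        P = (np.eye(4) - K @ H) @ P
        return x, P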
- At block 1004, the mobility safety system 104 performs a perception operation using the real-time data and the localization and mapping data generated in block 1002 to generate object tracking and range estimation data. The perception system 406 takes raw sensor data 414 from various sensors 202, such as cameras 204, along with localization and map data, to estimate the motion of nearby objects and road users, as well as to estimate the nearby traffic environment 102. 
- The perception system 406 performs several tasks, including object detection 602, orientation estimation 604, object tracking 606, ground surface estimation 608, and range estimation 610. Object detection involves identifying vehicles and other objects in the traffic environment 102, using camera data to estimate the position, orientation, and geometry of objects within a field of view of the camera 204. Once an actor is detected, the estimate of the orientation is further refined through a comparison of object detections against the mobility safety system’s IMU 206 and GPS 208. 
- Object tracking involves performing detection and estimation of an actor’s position and orientation across multiple frames of input sensor data, which allows the perception system 406 to track actors in a traffic environment 102 over time, further estimating quantities such as object motion and instantaneous velocity. 
- Ground surface estimation involves analyzing the streetscape and scene of a traffic environment 102 surrounding the mobility safety system 104, using the raw sensor data 414 and stored environmental map 512, to provide estimates of the geometry of a road 106, identifying curbs, road paint, street signs, and more, in real-time or near real-time. 
- Range estimation involves using the ground surface results and further raw sensor data 414 to estimate distances to actors and/or objects in a traffic environment 102, identifying potential obstacles and also noting free space. 
- The output of the perception system 406 is object estimate data 612 that includes an estimate of objects in the traffic environment 102 (including, for example, position, orientation, and short-term motion) as well as an estimate of free space in the traffic environment 102. By using the data generated by the perception system 406, the mobility safety system 104 can perform more accurate and effective computations in order to provide safer travel for the user. 
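- As one concrete illustration of monocular range estimation against an estimated ground surface, the flat-ground geometry commonly used for this purpose is sketched below; the pinhole model, the flat-road assumption, and the example parameter values are assumptions for illustration and not a description of how the perception system 406 is actually implemented. 

    def ground_plane_range(pixel_row, horizon_row, focal_length_px, camera_height_m):
        """Estimate distance to a point on the road from its image row, assuming flat ground."""
        if pixel_row <= horizon_row:
            return float("inf")          # at or above the horizon: no ground intersection
        # Similar triangles: range = focal length * camera height / rows below the horizon.
        return focal_length_px * camera_height_m / (pixel_row - horizon_row)

    # Example: a detection 160 rows below the horizon, with an assumed 800 px focal
    # length and a 1.0 m mounting height, is roughly 5 m away.
    print(ground_plane_range(pixel_row=560, horizon_row=400,
                             focal_length_px=800.0, camera_height_m=1.0))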
- At block 1006, the prediction component 408 performs a prediction operation using the real-time data, the localization and mapping data, and the object tracking and range estimation data to generate prediction data related to the traffic environment. In some examples, the prediction component 408 of the mobility safety system 104 uses data from the localization and mapping component 404 and the object tracking and range estimation data generated by the perception system 406 to forecast future motion of relevant actors in the traffic environment 102. The prediction component 408 may use a combination of machine learning and probabilistic modeling to generate multiple possible future trajectories for other actors in the scene. 
- The actor motion prediction component 702 of the prediction component 408 uses perception object estimates, sensor data, and map elements to predict how actors will move over time, through the world. This is achieved through machine learning algorithms such as recurrent neural networks (RNNs) or long short-term memory (LSTM) networks. These networks are trained on large amounts of data to learn patterns of motion and behavior of different types of actors in different traffic scenarios. Once trained, the networks can predict future motion based on current and past observations of the actor’s behavior and the environment around them. 
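- A minimal PyTorch sketch of an LSTM-based trajectory predictor of this general kind follows; the layer sizes, prediction horizon, and overall structure are assumptions made for illustration and do not describe the trained networks of the prediction component 408. 

    import torch
    import torch.nn as nn

    class TrajectoryPredictor(nn.Module):
        """Encodes a past (x, y) track and emits a short horizon of future positions."""
        def __init__(self, hidden_size=64, future_steps=10):
            super().__init__()
            self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, future_steps * 2)
            self.future_steps = future_steps

        def forward(self, past_track):                  # past_track: (batch, T, 2)
            _, (h_n, _) = self.encoder(past_track)      # h_n: (1, batch, hidden)
            out = self.head(h_n[-1])                    # (batch, future_steps * 2)
            return out.view(-1, self.future_steps, 2)   # predicted (x, y) per future step

    # Example: predict 10 future positions from 20 observed positions of one actor.
    model = TrajectoryPredictor()
    predicted = model(torch.randn(1, 20, 2))            # shape: (1, 10, 2)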
- The actor-ego interaction prediction component 704 of the prediction component 408 analyzes how actors interact with each other and with the mobility safety system 104, such as slowing down, speeding up, or changing direction to avoid collisions. The actor-ego interaction prediction component 704 also uses machine learning algorithms to predict likely outcomes of these interactions. For example, if the mobility safety system 104 is approaching a pedestrian who is looking at their phone and not paying attention to the road, the actor-ego interaction prediction component 704 may predict that the pedestrian is likely to continue walking in their current path and that the mobility safety system 104 needs to take evasive action. 
- The actor-map interaction prediction component 706 of the prediction component 408 predicts how actors move relative to fixed infrastructure, such as following lanes or obeying traffic rules. The actor-map interaction prediction component 706 uses map data from the localization and mapping component 404 to make predictions about how actors will interact with the traffic environment 102 around them. For example, if the mobility safety system 104 is approaching a roundabout, the actor-map interaction prediction component 706 may predict that a car entering the roundabout from the right is likely to continue moving in a clockwise direction and that the mobility safety system 104 should wait for a safe opportunity to enter the roundabout. 
- The output of the prediction component 408 consists of multiple possible future outcomes and scenarios for actors in the scene, each with a corresponding probability of occurrence. These safety-relevant scenarios are used by the risk estimation component 410 to estimate the likelihood of a collision or other safety incident. The prediction component 408 continuously updates its predictions based on new sensor data and adjusts the probabilities of outcomes as more information becomes available. The prediction data is then used by the risk estimation component 410. 
- At block 1008, the risk estimation component 410 performs the risk estimation operation, to generate the risk estimation value, using the real-time data, the localization and mapping data, the object tracking and range estimation data, and the prediction data. 
- In some examples, the risk estimation operation performed by the risk estimation component 410 at block 1008 takes the output from the previous localization and mapping, perception, and prediction operations, as well as the raw sensor data, to estimate the probability and severity of a potential collision or dangerous scenario in the future. The risk estimation component 410 is responsible for estimating the overall risk of an accident that may occur in the future, including the probability of a potential collision and the estimated severity of such a collision. 
- The probability of collision estimator 802 analyzes various scenarios and estimates the probability of potential collisions between actors in the world. The probability of collision estimator 802 considers the position, velocity, and orientation of nearby objects, as well as the motion of the mobility safety system 104, to estimate the probability of collision. The collision severity component 804 then takes the estimated collision probability and determines the potential severity of the collision based on factors such as relative velocities, object shapes, and actor types. 
- The risk estimation component 410 predicts the overall risk over a time horizon into the future, allowing the mobility safety system 104 to predict the probabilities of accidents before they occur. This may be achieved through probabilistic modeling of potential future outcomes, including the possibility of dangerous scenarios such as a collision with a pedestrian or another vehicle. The risk estimation component 410 outputs an estimate of the overall risk, which is used by the mobility safety system 104 to inform its decision-making and control actions. The risk estimation component 410 may use perception and prediction data as input; however, some models can infer probabilistic risk results directly from raw sensor data 414, using multiple observations of a scene in order to directly predict the probability of a dangerous road scenario. 
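- The combination of enumerated scenarios into an overall risk value can be expressed as an expected-severity calculation; the scenario structure and numbers below are illustrative assumptions only. 

    def estimate_overall_risk(scenarios):
        """Expected severity over enumerated future scenarios within the time horizon."""
        return sum(s["probability"] * s["severity"] for s in scenarios)

    # Example: an overtaking vehicle with three enumerated outcomes.
    scenarios = [
        {"name": "vehicle slows",        "probability": 0.60, "severity": 0.0},
        {"name": "vehicle changes lane", "probability": 0.35, "severity": 0.0},
        {"name": "vehicle holds course", "probability": 0.05, "severity": 0.9},
    ]
    print(estimate_overall_risk(scenarios))   # 0.045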
6. Scenarios- FIG.11 illustrates scenarios of how the mobility safety system 104 may operate, to add a layer of safety for traffic actors, according to some examples. 
6.1. Approaching Vehicle From Behind- In a first scenario 1102, the mobility safety system 104 uses its risk estimation component 410 to differentiate between scenarios, for example when an approaching vehicle 1104 safely navigates around a bicyclist 1106 (left), vs. when a vehicle 1104 approaches in an unsafe manner (right). In the latter scenario, the mobility safety system 104 sends warnings to capture the attention of both the driver of the vehicle 1104 and the bicyclist 1106. 
- More specifically, in the first scenario 1102, consider a fast-moving vehicle 1104 that approaches a bicyclist 1106, carrying a mobility safety system 104, from behind. In this scenario, the autonomy system of the mobility safety system 104 detects the fast-moving vehicle 1104 and tracks it over time as it approaches the bicyclist 1106. The prediction component 408 enumerates future possible scenarios, including: the approaching vehicle 1104 will slow to accommodate the bicyclist 1106; the vehicle 1104 will nudge left or change lanes to safely pass the bicyclist 1106; and finally, the approaching vehicle 1104 does not adjust its course and continues to proceed at speed toward the bicyclist 1106. 
- When continuously estimating the overall risk of collision for the user, the mobility safety system 104 assigns estimated probabilities to each of these scenarios. When the probability of the most dangerous scenarios reaches a critical threshold, warnings and alerts are emitted by the alert mechanisms 214 of the mobility safety system 104. 
6.2. Bicyclist Warning to Pedestrians- In a second scenario 1108, the autonomy system of the mobility safety system 104 automatically detects pedestrians 1110 on a shared use path 1112. By automatically emitting warnings, to alert the pedestrian 1110 to the approaching bicyclist 1114, the mobility safety system 104 provides a fully autonomous replacement to the common bike bell. 
- More specifically, in the second scenario 1108, consider the bicyclist 1114 overtaking a pedestrian 1110, both on a shared-use path 1112. In this scenario, the mobility safety system 104 is mounted with sensors 202 pointing forward, in the direction of travel. In scenarios like this, bicyclists often use bike bells or call out advance warnings such as “on your left” when passing. 
- The integration of the mobility safety system 104 adds autonomous operation to this warning system, using the application software to identify a pedestrian 1110 in advance, determine the risk of a potential collision, and sound chimes and alarms to provide situational awareness to both the pedestrian and the bicyclist, ensuring neither party is surprised by the other’s actions. 
6.3. Bicyclist Pursued by Canine- In a third scenario 1116, the mobility safety system 104 is also capable of detecting non-human actors in a scene, for instance detecting, tracking, and estimating risk due to a canine or dog 1118 that gives chase to a passing bicyclist 1120. By using the same alert mechanisms 214 used for other vehicles, the mobility safety system 104 can emit warnings to discourage the dog 1118 from continuing to chase the bicyclist 1120. 
- More specifically, a common danger to bicyclists 1106 is being chased by dogs 1118, oftentimes where a dog 1118 might believe it is guarding its territory, despite the bicyclist 1106 traveling on shared roads or paths. In this scenario, the algorithms of the mobility safety system 104 are capable of detecting an animal giving chase to a bicyclist 1120. By tracking and monitoring the relative positions and velocities of all actors, the mobility safety system 104 can determine overall risk, and attempt to dissuade the animal by using alert mechanisms, using the same alarm alert mechanism 214 as for vehicles, but also providing warnings at ultrasonic frequencies heard by dogs but not by the human ear. 
6.4. Automated Obscurant Detection- A camera lens (or other viewport of an exteroceptive sensor) can become obscured, typically through dirt, debris, and moisture from the environment. As such, the software algorithms of the mobility safety system 104 can be used to determine when the sensor (e.g., the camera 204) does not have an unobstructed view. In such a case, the mobility safety system 104 can alert the user, thus allowing a manual inspection of the lens and intervention to clean off any obscurants. 
6.5. Lane Deviation Warning- In some examples, the mobility safety system 104 can provide a lane deviation warning to the user. The perception system 406 is responsible for analyzing raw sensor data 414 and determining the position of the mobility safety system 104 with respect to lane markings. In the event that the mobility safety system 104 deviates from its lane, the perception system 406 will trigger a warning message that is sent to the user through the alert mechanisms 214. The warning message may be a visual or audible alert, depending on the user’s preferences. 
- The localization and mapping component 404 provides critical input for the lane deviation warning system. It provides an accurate estimate of the user’s position and orientation with respect to the road, allowing the perception system 406 to accurately determine the position of the mobility safety system 104 relative to the lane markings. 
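- A simple lateral-offset check of the kind that could back such a warning is sketched below; the lane-width and margin values are assumptions for illustration. 

    def lane_deviation_warning(lateral_offset_m, lane_width_m=3.5, margin_m=0.3):
        """Return True when the estimated position drifts too close to a lane boundary."""
        distance_to_boundary = lane_width_m / 2.0 - abs(lateral_offset_m)
        return distance_to_boundary < margin_m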
6.6. Intersection Collision Avoidance- In some examples, the mobility safety system 104 can provide collision avoidance at an intersection. The perception system 406 is responsible for detecting other vehicles approaching the intersection and estimating their trajectory. The prediction component 408 can then generate multiple possible future scenarios, including those where the other vehicles may not be able to stop in time to avoid a collision. 
- If the mobility safety system 104 detects a potential collision, it will trigger the alert mechanisms 214 to warn the user of the danger. The alert message may be a visual or audible alert, depending on the user’s preferences. 
- The localization and mapping component 404 provides critical input for the intersection collision avoidance system. It provides an accurate estimate of the user’s position and orientation with respect to the intersection, allowing the perception system 406 to accurately detect other vehicles and estimate their trajectory. 
6.7. Emergency Stop- In some examples, the mobility safety system 104 can provide an emergency stop capability in the event of an imminent collision. The prediction component 408 continuously monitors raw sensor data 414 and estimates the probability of a collision. If the probability of a collision reaches a critical threshold, the mobility safety system 104 will trigger an emergency stop mechanism. 
- The emergency stop mechanism can be implemented in several ways, depending on the specific mobility platform. For example, a bicycle may have a brake system that is automatically engaged when the emergency stop mechanism is triggered. A scooter or motorbike may have a similar brake system, or may be designed to slow down or stop more gradually to avoid loss of control. 
- The localization and mapping component 404 provides critical input for the emergency stop system. It provides an accurate estimate of the user’s position and orientation, allowing the prediction component 408 to accurately estimate the probability of a collision. 
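- One common way to formulate such an imminence check is time-to-collision; the sketch below, including its threshold, is an illustrative assumption rather than the actual trigger logic of the mobility safety system 104. 

    def should_emergency_stop(range_m, closing_speed_mps, ttc_threshold_s=1.5):
        """Trigger the emergency stop when the time-to-collision falls below a threshold."""
        if closing_speed_mps <= 0:
            return False                   # not closing on the object: no imminent collision
        time_to_collision = range_m / closing_speed_mps
        return time_to_collision < ttc_threshold_s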
- FIG.12 is a block diagram1200 illustrating asoftware architecture1208, which can be installed on any one or more of the devices (e.g., the safety system) described herein. Thesoftware architecture1208 is supported by hardware such as amachine1210 that includesprocessors1212,memory1206, and I/O components1214. In this example, thesoftware architecture1208 can be conceptualized as a stack of layers, where each layer provides a particular functionality. Thesoftware architecture1208 includes layers such as anoperating system1202, libraries1216,frameworks1218, and applications1220.Operationally, theapplications1220 invokeAPI calls1222 through the software stack and receivemessages1224 in response to the API calls1222. 
- Theoperating system1202 manages hardware resources and provides common services. Theoperating system1202 includes, for example, akernel1226,services1228, anddrivers1204. Thekernel1226 acts as an abstraction layer between the hardware and the other software layers. For example, thekernel1226 provides memory management, Processor management (e.g., scheduling), component management, networking, and security settings, among other functionalities. Theservices1228 can provide other common services for the other software layers. Thedrivers1204 are responsible for controlling or interfacing with the underlying hardware. For instance, thedrivers1204 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, and power management drivers. 
- The libraries1216 provide a low-level common infrastructure used by the applications1220.The libraries1216 can include system libraries1230 (e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1216 can includeAPI libraries1232 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in a graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., Web Kit to provide web browsing functionality), and the like. The libraries1216 can also include a wide variety ofother libraries1234 to provide many other APIs to theapplications1220. 
- Theframeworks1218 provide a high-level common infrastructure used by theapplications1220. For example, theframeworks1218 provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. Theframeworks1218 can provide a broad spectrum of other APIs that can be used by theapplications1220, some of which may be specific to a particular operating system or platform. 
- In some examples, theapplications1220 may include ahome application1236, acontacts application1238, abrowser application1240, abook reader application1242, alocation application1244, amedia application1246, amessaging application1248, agame application1250, and a broad assortment of other applications such as a third-party application1252.Theapplications1220 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of theapplications1220, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language).In a specific example, the third-party application1252 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1252 can invoke the API calls1222 provided by theoperating system1202 to facilitate functionality described herein. 
- FIG.13 is a diagrammatic representation of the machine1300 (e.g., the safety system described here) within which instructions1302 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing themachine1300 to perform any one or more of the methodologies discussed herein may be executed. For example, theinstructions1302 may cause themachine1300 to execute any one or more of the methods described herein. Theinstructions1302 transform the general,non-programmed machine1300 into aparticular machine1300 programmed to carry out the described and illustrated functions in the manner described. Themachine1300 may operate as a standalone device or be coupled (e.g., networked) to other machines. In a networked deployment, themachine1300 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. Themachine1300 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing theinstructions1302, sequentially or otherwise, that specify actions to be taken by themachine1300. Further, while asingle machine1300 is illustrated, the term “machine” may include a collection of machines that individually or jointly execute theinstructions1302 to perform any one or more of the methodologies discussed herein. 
- The machine 1300 may include processors 1304, memory 1306, and I/O components 1308, which may be configured to communicate via a bus 1310. In some examples, the processors 1304 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another Processor, or any suitable combination thereof) may include, for example, a Processor 1312 and a Processor 1314 that execute the instructions 1302. The term “Processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Although FIG.13 shows multiple processors 1304, the machine 1300 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. 
- The memory 1306 includes a main memory 1316, a static memory 1318, and a storage unit 1320, each accessible to the processors 1304 via the bus 1310. The main memory 1316, the static memory 1318, and the storage unit 1320 store the instructions 1302 embodying any one or more of the methodologies or functions described herein. The instructions 1302 may also reside, wholly or partially, within the main memory 1316, within the static memory 1318, within machine-readable medium 1322 within the storage unit 1320, within the processors 1304 (e.g., within the processor’s cache memory), or any suitable combination thereof, during execution thereof by the machine 1300. 
- The I/O components1308 may include various components to receive input, provide output, produce output, transmit information, exchange information, or capture measurements. The specific I/O components1308 included in a particular machine depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. The I/O components1308 may include many other components not shown inFIG.13. In various examples, the I/O components1308 may includeoutput components1324 andinput components1326. Theoutput components1324 may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), or other signal generators. Theinput components1326 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. 
- In further examples, the I/O components 1308 may include biometric components 1328, motion components 1330, environmental components 1332, or position components 1334, among a wide array of other components. For example, the biometric components 1328 include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), or identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification). The motion components 1330 include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components 1332 include, for example, one or more cameras, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 1334 include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. 
- Communication may be implemented using a wide variety of technologies. The I/O components1308 further includecommunication components1336 operable to couple themachine1300 to anetwork1338 ordevices1340 via respective coupling or connections. For example, thecommunication components1336 may include a network interface Component or another suitable device to interface with thenetwork1338. In further examples, thecommunication components1336 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. Thedevices1340 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). 
- Moreover, thecommunication components1336 may detect identifiers or include components operable to detect identifiers. For example, thecommunication components1336 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Data glyph, Maxi Code, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via thecommunication components1336, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, or location via detecting an NFC beacon signal that may indicate a particular location. 
- The various memories (e.g.,main memory1316,static memory1318, and/or memory of the processors1304) and/orstorage unit1320 may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions1302), when executed byprocessors1304, cause various operations to implement the disclosed examples. 
- Theinstructions1302 may be transmitted or received over thenetwork1338, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components1336) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, theinstructions1302 may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to thedevices1340. 
MACHINE-LEARNING PIPELINE 1500- FIG.14 is a flowchart depicting a machine-learning pipeline 1500, according to some examples, which may be used to train one or more machine learning models used in or with mobility safety systems 104. The machine-learning pipeline 1500 may be used to generate a trained model, for example the trained machine-learning program 1502 of FIG.15, described herein to perform operations associated with searches and query responses, which can be deployed with a mobility safety system 104. 
Overview- Broadly, machine learning may involve using computer algorithms to automatically learn patterns and relationships in data, potentially without the need for explicit programming. Machine learning algorithms can be divided into three main categories: supervised learning, unsupervised learning, and reinforcement learning. 
- Supervised learning involves training a model using labeled data to predict an output for new, unseen inputs. Examples of supervised learning algorithms include linear regression, decision trees, and neural networks.
- Unsupervised learning involves training a model on unlabeled data to find hidden patterns and relationships in the data. Examples of unsupervised learning algorithms include clustering, principal component analysis, and generative models like autoencoders.
- Reinforcement learning involves training a model to make decisions in a dynamic environment by receiving feedback in the form of rewards or penalties. Examples of reinforcement learning algorithms include Q-learning and policy gradient methods.
- Examples of specific machine learning algorithms that may be deployed, according to some examples, include logistic regression, which is a type of supervised learning algorithm used for binary classification tasks. Logistic regression models the probability of a binary response variable based on one or more predictor variables. Another example type of machine learning algorithm is Naïve Bayes, which is another supervised learning algorithm used for classification tasks. Naïve Bayes is based on Bayes’ theorem and assumes that the predictor variables are independent of each other. Random Forest is another type of supervised learning algorithm used for classification, regression, and other tasks. Random Forest builds a collection of decision trees and combines their outputs to make predictions. Further examples include neural networks which consist of interconnected layers of nodes (or neurons) that process information and make predictions based on the input data. Matrix factorization is another type of machine learning algorithm used for recommender systems and other tasks. Matrix factorization decomposes a matrix into two or more matrices to uncover hidden patterns or relationships in the data. Support Vector Machines (SVM) are a type of supervised learning algorithm used for classification, regression, and other tasks. SVM finds a hyperplane that separates the different classes in the data. Other types of machine learning algorithms include decision trees, k-nearest neighbors, clustering algorithms, and deep learning algorithms such as convolutional neural networks (CNN), recurrent neural networks (RNN), and transformer models. The choice of algorithm depends on the nature of the data, the complexity of the problem, and the performance requirements of the application. 
- The performance of machine learning models is typically evaluated on a separate test set of data that was not used during training to ensure that the model can generalize to new, unseen data. 
- Although several specific examples of machine learning algorithms are discussed herein, the principles discussed herein can be applied to other machine learning algorithms as well. Deep learning algorithms such as convolutional neural networks, recurrent neural networks, and transformers, as well as more traditional machine learning algorithms like decision trees, random forests, and gradient boosting may be used in various machine learning applications. 
- Two example types of problems in machine learning are classification problems and regression problems. Classification problems, also referred to as categorization problems, aim at classifying items into one of several category values (for example, is this object an apple or an orange?). Regression algorithms aim at quantifying some items (for example, by providing a value that is a real number). 
Phases- Generating a trained machine-learning program 1502 may include multiple types of phases that form part of the machine-learning pipeline 1500, including for example the following phases illustrated in FIG.14 (an illustrative sketch follows this list): 
- Data collection and preprocessing 1402: This may include acquiring and cleaning data to ensure that it is suitable for use in the machine learning model. This may also include removing duplicates, handling missing values, and converting data into a suitable format.
- Feature engineering 1404: This may include selecting and transforming the training data 1504 to create features that are useful for predicting the target variable. Feature engineering may include (1) receiving features 1506 (e.g., as structured or labeled data in supervised learning) and/or (2) identifying features 1506 (e.g., unstructured or unlabeled data for unsupervised learning) in training data 1504.
- Model selection and training 1406: This may include selecting an appropriate machine learning algorithm and training it on the preprocessed data. This may further involve splitting the data into training and testing sets, using cross-validation to evaluate the model, and tuning hyperparameters to improve performance.
- Model evaluation 1408: This may include evaluating the performance of a trained model (e.g., the trained machine-learning program 1502) on a separate testing dataset. This can help determine if the model is overfitting or underfitting and if it is suitable for deployment.
- Prediction 1410: This involves using a trained model (e.g., trained machine-learning program 1502) by the prediction component 408 to generate predictions on new, unseen data.
- Validation, refinement or retraining 1412: This may include updating a model based on feedback generated from the prediction phase, such as new data or user feedback.
- Deployment 1414: This may include integrating the trained model (e.g., the trained machine-learning program 1502) into a larger system or application, such as a mobility safety system 104. This can involve setting up APIs, building a user interface, and ensuring that the model is scalable and can handle large volumes of data.
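- A compressed sketch of these phases, using scikit-learn purely for illustration, is shown below; the synthetic dataset, the choice of logistic regression, and the metric are assumptions and do not reflect the models actually deployed on the mobility safety system 104. 

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Data collection/preprocessing and feature engineering (here: synthetic features).
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Model selection and training.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Model evaluation on held-out data.
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # Prediction on new, unseen data (deployment would wrap this behind the application layer).
    print("prediction:", model.predict(X_test[:1]))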
- FIG.15 illustrates further details of two example phases, namely a training phase 1508 (part of the model selection and training 1406) and a prediction phase 1510 (part of prediction 1410). Prior to the training phase 1508, feature engineering 1404 is used to identify features 1506. This may include identifying informative, discriminating, and independent features for the effective operation of the trained machine-learning program 1502 in pattern recognition, classification, and regression. In some examples, the training data 1504 includes labeled data, which is known data for pre-identified features 1506 and one or more outcomes. Each of the features 1506 may be a variable or attribute, such as an individual measurable property of a process, article, system, or phenomenon represented by a data set (e.g., the training data 1504). Features 1506 may also be of different types, such as numeric features, strings, and graphs, and may include one or more of content 1512, concepts 1514, attributes 1516, historical data 1518 and/or user data 1520, merely for example. 
- In the training phase 1508, the machine-learning pipeline 1500 uses the training data 1504 to find correlations among the features 1506 that affect a predicted outcome or prediction/inference data 1522. 
- With the training data 1504 and the identified features 1506, the trained machine-learning program 1502 is trained during the training phase 1508 via machine-learning program training 1524. The machine-learning program training 1524 appraises values of the features 1506 as they correlate to the training data 1504. The result of the training is the trained machine-learning program 1502 (e.g., a trained or learned model). 
- Further, the training phase 1508 may involve machine learning, in which the training data 1504 is structured (e.g., labeled during preprocessing operations), and the trained machine-learning program 1502 implements a relatively simple neural network 1526 capable of performing, for example, classification and clustering operations. In other examples, the training phase 1508 may involve deep learning, in which the training data 1504 is unstructured, and the trained machine-learning program 1502 implements a deep neural network 1526 that is able to perform both feature extraction and classification/clustering operations. 
- A neural network 1526 may, in some examples, be generated during the training phase 1508, and implemented within the trained machine-learning program 1502. The neural network 1526 includes a hierarchical (e.g., layered) organization of neurons, with each layer consisting of multiple neurons or nodes. Neurons in the input layer receive the input data, while neurons in the output layer produce the final output of the network. Between the input and output layers, there may be one or more hidden layers, each consisting of multiple neurons. 
- Each neuron in the neural network 1526 operationally computes a small function, such as an activation function, which takes as input the weighted sum of the outputs of the neurons in the previous layer, as well as a bias term. The output of this function is then passed as input to the neurons in the next layer. If the output of the activation function exceeds a certain threshold, an output is communicated from that neuron (e.g., transmitting neuron) to a connected neuron (e.g., receiving neuron) in successive layers. The connections between neurons have associated weights, which define the influence of the input from a transmitting neuron to a receiving neuron. During the training phase, these weights are adjusted by the learning algorithm to optimize the performance of the network. Different types of neural networks may use different activation functions and learning algorithms, which can affect their performance on different tasks. Overall, the layered organization of neurons and the use of activation functions and weights enable neural networks to model complex relationships between inputs and outputs, and to generalize to new inputs that were not seen during training. 
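- The per-neuron computation described above (a weighted sum of inputs plus a bias, passed through an activation function) reduces to a few lines; the use of NumPy and of the ReLU activation here are illustrative choices. 

    import numpy as np

    def layer_forward(inputs, weights, biases):
        """One fully connected layer: weighted sums plus biases, then a ReLU activation."""
        pre_activation = weights @ inputs + biases
        return np.maximum(pre_activation, 0.0)   # output is passed on only where positive

    # Example: three inputs feeding a layer of two neurons.
    x = np.array([0.5, -1.0, 2.0])
    W = np.array([[0.2, 0.4, -0.1],
                  [0.7, -0.3, 0.5]])
    b = np.array([0.1, -0.2])
    print(layer_forward(x, W, b))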
- In some examples, theneural network1526 may also be one of a number of different types of neural networks, such as a single-layer feed-forward network, a Multilayer Perceptron (MLP), an Artificial Neural Network (ANN), a Recurrent Neural Network (RNN), a Long Short-Term Memory Network (LSTM), a Bidirectional Neural Network, a symmetrically connected neural network, a Deep Belief Network (DBN), a Convolutional Neural Network (CNN), a Generative Adversarial Network (GAN), an Autoencoder Neural Network (AE), a Restricted Boltzmann Machine (RBM), a Hopfield Network, a Self-Organizing Map (SOM), a Radial Basis Function Network (RBFN), a Spiking Neural Network (SNN), a Liquid State Machine (LSM), an Echo State Network (ESN), a Neural Turing Machine (NTM), or a Transformer Network, merely for example. 
- In addition to the training phase 1508, a validation phase may be performed, in which the model is evaluated on a separate dataset known as the validation dataset. The validation dataset is used to tune the hyperparameters of a model, such as the learning rate and the regularization parameter. The hyperparameters are adjusted to improve the performance of the model on the validation dataset. 
- Once a model is fully trained and validated, in a testing phase, the model may be tested on a new dataset that the model has not seen before. The testing dataset is used to evaluate the performance of the model and to ensure that the model has not overfit the training data. 
- In the prediction phase 1510, the trained machine-learning program 1502 uses the features 1506 for analyzing query data 1528 to generate inferences, outcomes or predictions, as examples of prediction/inference data 1522 (e.g., object estimate data 612, range estimations 610, pose estimations, or a risk estimation). For example, during the prediction phase 1510, the trained machine-learning program 1502 is used to generate an output. Query data 1528 is provided as an input to the trained machine-learning program 1502, and the trained machine-learning program 1502 generates the prediction/inference data 1522 as output, responsive to receipt of the query data 1528. 
- In some examples, the trained machine-learning program 1502 may be a generative AI model. Generative AI is a term that may refer to any type of artificial intelligence that can create new content from training data 1504. For example, generative AI can produce text, images, video, audio, code or synthetic data that are similar to the original data but not identical. 
- Some of the techniques that may be used in generative AI are: 
- Convolutional Neural Networks (CNNs): CNNs are commonly used for image recognition and computer vision tasks. They are designed to extract features from images by using filters or kernels that scan the input image and highlight important patterns. CNNs may be used in applications such as object detection, facial recognition, and autonomous driving.
- Recurrent Neural Networks (RNNs): RNNs are designed for processing sequential data, such as speech, text, and time series data. They have feedback loops that allow them to capture temporal dependencies and remember past inputs. RNNs may be used in applications such as speech recognition, machine translation, and sentiment analysis.
- Generative adversarial networks (GANs): These are models that consist of two neural networks: a generator and a discriminator. The generator tries to create realistic content that can fool the discriminator, while the discriminator tries to distinguish between real and fake content. The two networks compete with each other and improve over time. GANs may be used in applications such as image synthesis, video prediction, and style transfer.
- Variational autoencoders (VAEs): These are models that encode input data into a latent space (a compressed representation) and then decode it back into output data. The latent space can be manipulated to generate new variations of the output data, as sketched in the non-limiting example following this list. VAEs may be used in applications such as image generation, anomaly detection, and representation learning.
- Transformer models: These are models that use attention mechanisms to learn the relationships between different parts of input data (such as words or pixels) and generate output data based on these relationships. Transformer models can handle sequential data such as text or speech as well as non-sequential data such as images or code.
- In generative AI examples, the prediction/inference data 1522 that is output may include predictions, translations, summaries, or media content. 
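- By way of non-limiting illustration of the encode/decode pattern noted in the list above, the following sketch trains a plain (non-variational) autoencoder and then decodes a sampled latent vector to produce a new variation; all dimensions, data, and training settings are hypothetical assumptions and not part of this disclosure.

```python
# Illustrative sketch only: a plain (non-variational) autoencoder showing the
# encode-to-latent-space / decode-back pattern. Dimensions and data are hypothetical.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 32, latent_dim: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

data = torch.rand(256, 32)            # hypothetical training samples
for _ in range(100):                  # learn to reconstruct the input from the latent code
    optimizer.zero_grad()
    loss = loss_fn(model(data), data)
    loss.backward()
    optimizer.step()

# New variations can be produced by sampling or perturbing points in the latent
# space and decoding them back into the data space.
with torch.no_grad():
    latent = torch.randn(1, 4)
    generated = model.decoder(latent)
print(generated.shape)                # torch.Size([1, 32])
```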
EXAMPLES- Example 1 is a method to operate a mobility safety system of a mobility platform, the method comprising: capturing real-time data relating to conditions within a traffic environment; using at least one trained model, generating a risk estimation related to a mobility platform within the traffic environment; and based on the risk estimation, generating a first alert directed at at least an operator of the mobility platform. 
- In Example 2, the subject matter of Example 1 includes, based on the risk estimation, generating a second alert directed at an operator of a further actor within the traffic environment. 
- In Example 3, the subject matter of Example 2 includes, wherein the second alert directed at the operator of the further actor within the traffic environment is an audible alert having a loudness exceeding 100 decibels. 
- In Example 4, the subject matter of Examples 1-3 includes, based on the risk estimation, selectively recording the real-time data related to the conditions within the traffic environment. 
- In Example 5, the subject matter of Examples 1-4 includes, based on the risk estimation, selectively performing computations with the mobility safety system. 
- In Example 6, the subject matter of Examples 1-5 includes, wherein capturing the real-time data relating to the conditions within the traffic environment comprises capturing the real-time data using at least one of a camera to capture image data, a radar sensor to capture radar data, a LiDAR sensor to capture lidar data, a proximity sensor to capture proximity data, an inertial measurement unit (IMU) to capture IMU data, and a global positioning system (GPS) to capture GPS data. 
- In Example 7, the subject matter of Examples 1-6 includes, wherein the generating of the risk estimation comprises: performing a localization and mapping operation using the real-time data to generate localization and mapping data; performing a perception operation using the real-time data and the localization and mapping data to generate object tracking and range estimation data; performing a prediction operation using the real-time data, the localization and mapping data, and the object tracking and range estimation data to generate prediction data related to the traffic environment; and performing a risk estimation operation, to generate the risk estimation, using the real-time data, the localization and mapping data, the object tracking and range estimation data, and the prediction data. 
- In Example 8, the subject matter of Examples 1-7 includes, wherein the localization and mapping operation comprises: accessing the real-time data relating to the conditions within the traffic environment; performing an iterative position and orientation estimation for the mobility safety system using at least one of GPS data, IMU data, visual odometry data derived from image data, and a motion model; and outputting ego-vehicle position and orientation data related to the mobility safety system. 
- In Example 9, the subject matter of Example 8 includes, wherein the perception operation comprises: accessing the real-time data relating to the conditions within the traffic environment; accessing the ego-vehicle position and orientation data; performing an object detection operation to identify objects within the traffic environment; performing an orientation estimation operation, responsive to identification of an object within the traffic environment, to estimate an orientation of the detected object; performing a ground surface estimation operation, using the real-time data and environmental map data, to estimate geometry of a ground surface within the traffic environment; performing a range estimation operation, using the estimated geometry of the ground surface and the real-time data, to estimate distances to the identified objects within the traffic environment; and outputting object estimate data based on the estimated orientation, the estimated geometry, and the estimated distances. 
- In Example 10, the subject matter of Example 9 includes, wherein the prediction operation comprises: accessing the real-time data relating to the conditions within the traffic environment; accessing the ego-vehicle position and orientation data; accessing the object estimate data; performing an object motion prediction to generate object motion prediction data; performing an interaction prediction to generate interaction prediction data; and outputting the prediction data including the object motion prediction data and the interaction prediction data. 
- In Example 11, the subject matter of Example 10 includes, wherein the risk estimation operation comprises: accessing the real-time data relating to conditions within the traffic environment; accessing the ego-vehicle position and orientation data; accessing the object estimate data; accessing the prediction data; performing a collision estimation operation to generate collision probability data reflecting a probability of a collision between the objects within the traffic environment; performing a collision severity operation to generate collision severity data reflecting a probable severity of the collision between the objects within the traffic environment; performing a predictive collision risk operation to generate predictive collision risk data; and outputting the predictive collision risk data, the predictive collision risk data comprising the risk estimation. 
- In Example 12, the subject matter of Examples 7-11 includes, wherein each of the localization and mapping operation, the perception operation, the prediction operation, and the risk estimation operation is performed using a respective trained model. 
- In Example 13, the subject matter of Examples 7-12 includes, wherein two or more of the localization and mapping operation, the perception operation, the prediction operation, and the risk estimation operation are performed using a unified trained model. 
- In Example 14, the subject matter of Examples 1-13 includes, wherein the at least one trained model is trained using at least one of supervised, unsupervised and semi-supervised learning. 
- Example 15 is a computing apparatus comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, configure the apparatus to: capture real-time data relating to conditions within a traffic environment; using at least one trained model, generate a risk estimation related to a mobility platform within the traffic environment; and based on the risk estimation, generate a first alert directed at at least an operator of the mobility platform. 
- Example 16 is a non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that, when executed by at least one computer, cause the at least one computer to: capture real-time data relating to conditions within a traffic environment; using at least one trained model, generate a risk estimation related to a mobility platform within the traffic environment; and based on the risk estimation, generate a first alert directed at at least an operator of the mobility platform. 
- Example 17 is a mobility safety system comprising: a real-time data capturing component to capture data relating to conditions within a traffic environment using at least one of a camera, a radar sensor, a LiDAR sensor, a proximity sensor, an inertial measurement unit (IMU), and a global positioning system (GPS); at least one trained model to generate a risk estimation related to a mobility platform within the traffic environment, the at least one trained model being trained using at least one of supervised, unsupervised and semi-supervised learning; a risk estimation component configured to generate the risk estimation using the real-time data, a localization and mapping component, a perception component, and a prediction component; an alert activation component configured to generate a first alert directed at an operator of the mobility platform based on the risk estimation, and a second alert directed at an operator of a further actor within the traffic environment; a real-time data recording component configured to selectively record the real-time data related to the conditions within the traffic environment based on the risk estimation; and a computation component configured to selectively perform computations with the mobility safety system based on the risk estimation. 
- In Example 18, the subject matter of Example 17 includes, wherein the localization and mapping component comprises: an iterative position and orientation estimation component for the mobility safety system using at least one of GPS data, IMU data, visual odometry data derived from image data, and a motion model; and an ego-vehicle position and orientation data output component related to the mobility safety system. 
- In Example 19, the subject matter of Example 18 includes, wherein the perception component comprises: an object detection component to identify objects within the traffic environment; an orientation estimation component, responsive to identification of an object within the traffic environment, to estimate an orientation of the detected object; a ground surface estimation component, using the real-time data and environmental map data, to estimate geometry of a ground surface within the traffic environment; a range estimation component, using the estimated geometry of the ground surface and the real-time data, to estimate distances to the identified objects within the traffic environment; and an object estimate data output component based on the estimated orientation, the estimated geometry, and the estimated distances. 
- In Example 20, the subject matter of Example 19 includes, wherein the prediction component comprises: an object motion prediction component to generate object motion prediction data; an interaction prediction component to generate interaction prediction data; and a prediction data output component including the object motion prediction data and the interaction prediction data. 
- In Example 21, the subject matter of Example 20 includes, wherein the risk estimation component comprises: a collision estimation component to generate collision probability data reflecting a probability of a collision between the objects within the traffic environment; a collision severity component to generate collision severity data reflecting a probable severity of the collision between the objects within the traffic environment; a predictive collision risk component to generate predictive collision risk data; and a predictive collision risk data output component comprising the risk estimation. 
- In Example 22, the subject matter of Examples 17-21 includes, wherein each of the localization and mapping component, the perception component, the prediction component, and the risk estimation component uses a respective trained model. 
- In Example 23, the subject matter of Examples 17-22 includes, wherein two or more of the localization and mapping component, the perception component, the prediction component, and the risk estimation component use a unified trained model. 
- Example 24 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-23. 
- Example 25 is an apparatus comprising means to implement any of Examples 1-23. 
- Example 26 is a system to implement any of Examples 1-23. 
- Example 27 is a method to implement any of Examples 1-23. 
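- As a non-limiting illustration only, the following sketch shows one way the operations recited in Examples 7 through 11 might be composed into a processing pipeline; every function body, data shape, value, and threshold below is a hypothetical placeholder rather than an implementation of the claimed subject matter.

```python
# Hypothetical pipeline sketch for Examples 7-11; all values and function bodies
# are placeholders and not part of the disclosure.
from dataclasses import dataclass
from typing import List

@dataclass
class ObjectEstimate:
    object_id: int
    range_m: float          # estimated distance to the object
    heading_deg: float      # estimated orientation of the object

def localize(real_time_data: dict) -> dict:
    """Localization and mapping: fuse GPS/IMU/visual odometry into an ego pose."""
    return {"x": 0.0, "y": 0.0, "heading_deg": 90.0}

def perceive(real_time_data: dict, ego_pose: dict) -> List[ObjectEstimate]:
    """Perception: detect objects, estimate orientation, ground surface, and range."""
    return [ObjectEstimate(object_id=1, range_m=18.0, heading_deg=270.0)]

def predict(ego_pose: dict, objects: List[ObjectEstimate]) -> dict:
    """Prediction: object motion prediction and interaction prediction."""
    return {1: {"time_to_closest_approach_s": 2.5, "closing_speed_mps": 7.0}}

def estimate_risk(ego_pose: dict, objects: List[ObjectEstimate], prediction: dict) -> float:
    """Risk estimation: combine collision probability and probable severity."""
    motion = prediction[1]
    collision_probability = min(1.0, 1.0 / max(motion["time_to_closest_approach_s"], 0.1))
    collision_severity = min(1.0, motion["closing_speed_mps"] / 15.0)
    return collision_probability * collision_severity

real_time_data = {"camera": None, "radar": None, "imu": None, "gps": None}
ego_pose = localize(real_time_data)
objects = perceive(real_time_data, ego_pose)
prediction = predict(ego_pose, objects)
risk = estimate_risk(ego_pose, objects, prediction)
if risk > 0.3:  # placeholder alert threshold
    print("alert operator; risk estimation =", round(risk, 2))
```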
GLOSSARY- “Carrier Signal” refers to any intangible medium capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. 
- “Communication Network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network, and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth-generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. 
- “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. A decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. 
Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of methods described herein may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In some examples, the processors or processor-implemented components may be distributed across a number of geographic locations. 
- “Computer-Readable Medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. 
- “Machine-Storage Medium” refers to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions, routines and/or data. The term includes solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” and “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” 
- “Micromobility platform” is an example of a mobility platform, and may include any small, lightweight, and lower-speed vehicle that is designed for short trips within urban areas. These vehicles are typically powered by human effort or electric motors with a power output below a certain threshold. Examples of micromobility platforms include: 
- 1. Bicycles (both traditional and electric)
- 2. Scooters (both kick and electric)
- 3. Skateboards (both traditional and electric)
- 4. Roller skates and inline skates
- 5. Segways and other self-balancing devices
- 6. Electric unicycles
- 7. Folding bikes and scooters
- 8. Tricycles (both traditional and electric)
- 9. Kickbikes and kick scooters
- 10. Wheelchairs and mobility scooters
- These vehicles have become increasingly popular in recent years as a means of urban transportation due to their affordability, convenience, and environmental friendliness. Many cities have implemented regulations or established dedicated infrastructure to support the safe use of micromobility platforms on roads, bike lanes, and sidewalks. 
- “Module” refers to logic having boundaries defined by function or subroutine calls, branch points, Application Program Interfaces (APIs), or other technologies that provide for the partitioning or modularization of particular processing or control functions. Modules are typically combined via their interfaces with other modules to carry out a machine process. A module may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium) or hardware modules. A “hardware module” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In some embodiments, a hardware module may be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. For example, a hardware module may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware module may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware modules become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the phrase “hardware module” (or “hardware-implemented module”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware modules) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. 
Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods and routines described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented modules may be distributed across a number of geographic locations. 
- “Processor” refers to any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands”, “op codes”, “machine code”, etc.) and which produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC) or any combination thereof. A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. 
- “Signal Medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” may include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.