Detailed Description
For the purpose of illustrating the present disclosure, exemplary embodiments will be described in detail below with reference to the drawings. It should be apparent that the described embodiments are only some, but not all, embodiments of the present disclosure, and it is to be understood that the present disclosure is not limited by the exemplary embodiments.
It should be noted that the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Summary of the Application
In recent years, intelligent driving has become an important trend in the development of the automobile industry. Intelligent driving can be further divided into assisted driving and automatic driving. Assisted driving is regarded as the primary stage of intelligent driving; it provides the driver with functions such as adaptive cruise and automatic parking, thereby reducing driving fatigue and improving driving safety. Automatic driving is considered the advanced stage of intelligent driving, in which the vehicle autonomously senses the surrounding environment, plans a travel route, and copes with various traffic conditions without intervention by a driver.
In order to improve the safety of intelligent driving, lane change identification technology has developed rapidly. However, current lane change identification technology still faces challenges. For example, vehicle sensor data are frequently subject to noise and jitter interference, which produces a large amount of abnormal data and lowers the accuracy of lane change intention recognition results based on such data. Moreover, the related art mostly relies on static thresholds to determine the lane change intention; in an actual driving environment, vehicle behavior can change abruptly, and such static threshold methods struggle to adapt to changeable driving environments and vehicle behaviors. Therefore, accurately identifying the lane change intention of a vehicle remains a problem in urgent need of a solution.
Exemplary System
Fig. 1 is a schematic diagram of the composition of a lane change intention recognition system according to an exemplary embodiment of the present disclosure, and as shown in fig. 1, the lane change intention recognition system includes an acquisition device 101, a lane change intention recognition device 102, and a decision device 103.
The acquisition device 101 is used for acquiring travel data of a target object at a plurality of moments, and may include one or more types of sensors. For example, the acquisition device 101 may include an image sensor (such as a vehicle-mounted camera), a radar, a lidar, and the like for detecting the travel data of the target object. The vehicle-mounted camera collects images of the target object at multiple moments and can be arranged on the front windshield, the rear windshield, the rearview mirror, or the periphery of the vehicle. The radar, which acquires the position, speed, and traveling direction of the target object, may be provided at the front bumper, the rear bumper, or a side surface of the vehicle. The lidar captures the position, shape, and motion of the target object and may generally be positioned near the roof, the front bumper, the rear bumper, a side of the vehicle, or the interior mirror. Of course, the acquired image data, radar data, lidar data, and other multichannel data may be fused to obtain the travel data of the target object. In addition, the number of sensors of each type may be one or more; taking the vehicle-mounted camera as an example, a monocular front-view camera, a binocular front-view camera, or a combination of a front-view camera and surround-view cameras may be used.
The lane change intention recognition device 102 is configured to obtain the running data of the target object at multiple moments acquired by the acquisition device 101; adjust the yaw rate in the running data at each moment to obtain an adjusted yaw rate at each moment; determine a yaw rate error at each moment based on the yaw rate at each moment; determine a lane change intention recognition result of the target object based on the running data, the adjusted yaw rate, and the yaw rate error at each moment; and send the lane change intention recognition result to the decision device 103.
The decision device 103 is configured to receive the lane change intention recognition result of the target object sent by the lane change intention recognition device 102, and output a driving strategy according to the lane change intention recognition result of the target object.
In the lane change intention recognition system provided by the embodiment of the disclosure, the acquisition device acquires the driving data of the target object at a plurality of moments; the adjusted yaw rate and the yaw rate error are determined; the lane change intention recognition result of the target object is predicted based on the driving data, the adjusted yaw rate, and the yaw rate error at each moment; and the recognition result is sent to the decision device so that the decision device outputs a driving strategy. Because the system determines the recognition result from dynamically changing running data at multiple moments, it matches different driving environments and vehicle behaviors better, has higher adaptability, and makes the output lane change intention recognition result more accurate.
Exemplary Method
Fig. 2 is a flowchart illustrating a lane change intention recognition method according to an exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device, as shown in fig. 2, and the lane change intention recognition method includes the following steps:
In step 201, travel data of the target object at a plurality of times is determined.
The target object may be a dynamic object that moves relative to the host vehicle, such as another vehicle, another means of travel (e.g., a bicycle or an electric vehicle), a pedestrian, and the like. The embodiments of the present disclosure do not limit the type of the target object; the following embodiments take another vehicle as an example.
The travel data may include running parameters such as a travel angle difference, a yaw rate (ωᵢ), a vehicle speed (vᵢ), an acceleration (aᵢ), and the like.
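For illustration, one frame of such travel data could be represented as follows; this is a minimal sketch with hypothetical field names, not a structure defined by the disclosure.

```python
# A minimal sketch of one frame of travel data; field names are illustrative.
from dataclasses import dataclass

@dataclass
class TravelFrame:
    t: float             # timestamp of the frame (s)
    angle_diff: float    # travel angle difference (rad)
    yaw_rate: float      # yaw rate ω_i (rad/s)
    speed: float         # vehicle speed v_i (m/s)
    acceleration: float  # acceleration a_i (m/s²)
```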
In some examples, determining the travel data of the target object at multiple times may include acquiring raw data of the target object at multiple times by using an acquisition device, and performing sensing processing on the raw data to obtain the travel data of the target object.
The acquisition device may include an image sensor (such as a vehicle-mounted camera), a radar, a lidar, and other sensors for detecting the driving data of the target object. The raw data may include image data, radar data, lidar data, and the like, from which the position of the target object, the relative speed between the host vehicle and the target object, attitude data, motion data, and the like may be determined.
After the raw data is obtained, it can be subjected to perception processing to obtain the running data of the target object. Specifically, the raw data acquired by the vehicle-mounted camera, the radar, or the lidar is transmitted to a processing unit of the vehicle, and the processing unit integrates and calibrates the data using a sensing algorithm to output the position of the target object, the relative speed and attitude data between the host vehicle and the target object, and the motion data. The travel angle difference and yaw rate of the target object can then be calculated from the position and attitude data, and the vehicle speed and acceleration of the target object can be calculated from the speed and acceleration of the host vehicle, the position of the target object, and the relative speed between the host vehicle and the target object.
Illustratively, the onboard camera obtains image data by continuously capturing images of the surroundings of the host vehicle (e.g., capturing road markings, roadside structures, contours and features of other vehicles, etc.). Radar acquires radar data by transmitting electromagnetic waves and receiving reflected waves. When the electromagnetic wave encounters an object, the electromagnetic wave is reflected back, and the radar calculates the distance, speed and direction of the target object according to the time and frequency change of the reflected wave. The relative speed between the host vehicle and the target object vehicle is determined by continuously measuring the frequency variation of the reflected wave. Lidar uses a laser beam to scan the surrounding environment, which can provide very accurate distance measurements and object profile information. By rapidly rotating and emitting a laser beam, lidar data of the surrounding environment (e.g., a three-dimensional point cloud) may be generated. By analyzing the laser radar data, the position and the gesture of the target object and the relative position and the speed of the vehicle and the target object can be obtained.
For example, after the acquisition data (including image data, radar data, and lidar data) is acquired using the acquisition device described above, it may be transmitted to a processing unit of the vehicle. The processing unit may first analyze the image data using an image processing algorithm to determine the position of the host vehicle in the road, and then integrate and calibrate the image data, radar data, and lidar data using a fusion algorithm, thereby determining the position of the target object, the relative speed between the host vehicle and the target object, the attitude data, the motion data, and the like.
After the running data of the target object at a plurality of moments is obtained, the running data can be preprocessed to improve the data quality of the running data. In some examples, preprocessing the travel data may include filtering and denoising the travel data. The filtering process may include a median filtering process and an average filtering process, for example.
Median filtering can effectively eliminate impulse noise in the running data, preventing individual outliers from distorting the overall data trend. Mean filtering further smooths the running data and reduces the influence of random noise, making the fluctuation of the running data more stable and regular. By combining median filtering and mean filtering, the interference of noise on the driving data can be reduced to the greatest extent, providing an accurate data basis for the subsequent determination of the lane change intention recognition result.
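As a hedged sketch of this preprocessing step, the median-then-mean filtering described above could be implemented as follows; the kernel sizes are illustrative assumptions, not values given by the disclosure.

```python
# A sketch of the preprocessing step: median filtering to remove impulse
# noise, followed by mean filtering to smooth residual random noise.
# Kernel sizes are illustrative assumptions.
import numpy as np
from scipy.signal import medfilt

def preprocess_signal(values: np.ndarray, median_k: int = 5, mean_k: int = 3) -> np.ndarray:
    denoised = medfilt(values, kernel_size=median_k)   # suppress isolated outliers
    kernel = np.ones(mean_k) / mean_k
    return np.convolve(denoised, kernel, mode="same")  # smooth the remaining fluctuation

# Usage: filter each channel (yaw rate, speed, acceleration) independently.
yaw_rates = np.array([0.01, 0.02, 0.35, 0.02, 0.01, 0.02, 0.03])  # 0.35 is impulse noise
print(preprocess_signal(yaw_rates))
```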
Step 202, adjusting the yaw rate in the running data of each moment based on the running data of a plurality of moments, and obtaining the adjusted yaw rate of each moment.
Yaw rate is the angular velocity of the vehicle rotating about a vertical axis and may reflect the rate of change of direction of the vehicle during travel. When the vehicle is in normal driving, the yaw rate of the vehicle is usually near zero or fluctuates in a small range. When the vehicle attempts to change lanes, the driver may operate the steering wheel to begin turning the vehicle, which may result in a significant change in yaw rate. Therefore, after the driving data of the target object is acquired, the yaw rate in the driving data can be further processed to accurately obtain the lane change intention of the target object based on the yaw rate.
Illustratively, the further processing of the yaw rate in the driving data may include adjusting the yaw rate corresponding to each of the plurality of time instants to obtain an adjusted yaw rate for each time instant, and then determining the lane change intention of the target object using the adjusted yaw rate.
Step 203, determining yaw rate errors at each moment based on the yaw rates at each moment.
Because the acquisition device inevitably receives noise and jitter interference when acquiring the running data of the target object, in order to further remove abnormal data generated due to the noise and the jitter, the yaw rate error of each moment can be determined based on the yaw rate of each moment, so that the abnormal data can be filtered and reduced, and the signal to noise ratio of the data can be improved.
Step 204, determining a lane change intention recognition result of the target object based on the running data of each time, the adjustment yaw rate of each time and the yaw rate error of each time.
After determining the yaw rate adjustment at each time according to step 202, and determining the yaw rate error at each time according to step 203, the lane change intention recognition result of the target object may be obtained according to the yaw rate, the yaw rate adjustment, the yaw rate error, and other information in the travel data at each time.
The embodiment of the disclosure provides a lane change intention recognition method, which comprises the steps of firstly acquiring running data (such as angle difference, yaw rate, vehicle speed, acceleration and the like) of a target object at a plurality of moments in real time, then obtaining an adjusted yaw rate and yaw rate error based on the yaw rate in the running data, and finally obtaining a lane change intention recognition result of the target object according to the running data, the adjusted yaw rate and the yaw rate error at each moment. The yaw rate is further processed, so that the method can be better adapted to different driving environments and vehicle changes, the identification accuracy is improved, and the method has higher flexibility.
As shown in fig. 3, the step 202 "adjusting the yaw rate in the running data at each time based on the running data at a plurality of times to obtain the adjusted yaw rate at each time" may include the following steps 2021 to 2022:
step 2021, determining a dynamic adaptive factor based on the yaw rate, speed, and acceleration in the travel data at the plurality of times.
In some examples, the dynamic adaptive factor F(t) satisfies an expression that weights the yaw rate, the acceleration, and the vehicle speed, where α, β, and γ are the weight coefficients, ωᵢ is the yaw rate corresponding to the i-th frame, aᵢ is the acceleration corresponding to the i-th frame, and vᵢ is the vehicle speed corresponding to the i-th frame.
By way of example, the travel data at the plurality of moments may include 25 frames of running data. The dynamic adaptive factor corresponding to each frame can be obtained from the yaw rate, acceleration, and vehicle speed using the dynamic adaptive factor expression. For the 25 frames of running data, the dynamic adaptive factor corresponding to each frame is the same.
From the expression of the dynamic adaptive factor F(t), it can be seen that the factor is determined from the real-time running data (i.e., yaw rate, acceleration, and vehicle speed) of the target object. Adjusting the yaw rate with the dynamic adaptive factor is equivalent to adjusting it according to the driving state of the target object, so the adjusted yaw rate better matches the current driving environment and driving behavior of the target object, which helps to obtain a more accurate lane change intention recognition result from the adjusted yaw rate later.
Step 2022, adjusting the yaw rate at each moment based on the dynamic adaptive factor, to obtain an adjusted yaw rate at each moment.
After the dynamic adaptive factor at each moment is obtained, the adjusted yaw rate at each moment can be determined using the dynamic adaptive factor and the yaw rate at that moment.
Illustratively, the adjusted yaw rate at each time may be obtained by multiplying the dynamic adaptive factor at that time by the yaw rate at that time.
The adjusted yaw rate ω_adjusted(t) at each time satisfies the following expression:

ω_adjusted(t) = F(t) × ωᵢ
It can be appreciated that determining the adjusted yaw rate from the dynamic adaptive factor and the yaw rate is merely exemplary; a machine learning algorithm may also be used to determine an adjustment coefficient with which to adjust the yaw rate, as long as practical requirements are met, and the present disclosure is not limited in this regard.
Since the driving environment and vehicle behavior change dynamically, embodiments of the present disclosure introduce a dynamic adaptive factor that adjusts the yaw rate as the target object's travel data changes, yielding an adjusted yaw rate that better matches the real driving data; the lane change intention recognition result determined from it is accordingly more accurate.
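The exact expression for F(t) is not reproduced above, so the sketch below assumes one plausible form: a weighted combination over the batch's yaw rates, accelerations, and speeds, which is consistent with every frame in a 25-frame batch sharing the same factor. The weight values are placeholders.

```python
# A hedged sketch of steps 2021-2022. The weighted-combination form of F(t)
# and the alpha/beta/gamma values are assumptions, not taken from the disclosure.
import numpy as np

def dynamic_adaptive_factor(omega, a, v, alpha=0.5, beta=0.3, gamma=0.2):
    # One factor for the whole batch, so every frame shares the same F(t).
    return (alpha * np.mean(np.abs(omega))
            + beta * np.mean(np.abs(a))
            + gamma * np.mean(v))

def adjusted_yaw_rates(omega, a, v):
    F = dynamic_adaptive_factor(omega, a, v)
    return F * omega  # ω_adjusted(t) = F(t) × ω_i, applied frame by frame
```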
As shown in fig. 4, the step 203 "determining a yaw rate error at each time based on the yaw rate at each time" may include the following steps 2031 to 2032 on the basis of the embodiment shown in fig. 2:
in step 2031, fitting processing is performed on yaw rates at a plurality of times, and a predicted yaw rate at each time is determined.
In some examples, the fitting process is performed on the yaw rates at multiple times to filter and reduce data noise, thereby effectively identifying and rejecting jitter data and improving the data signal-to-noise ratio.
Step 2032, determining yaw rate errors for each time based on the yaw rates for each time and the predicted yaw rates for each time.
The yaw rates at the multiple moments are fitted to obtain a predicted yaw rate at each moment, and the yaw rate at each moment is then compared with the corresponding predicted yaw rate to obtain the yaw rate error at that moment.
According to the embodiment of the disclosure, the yaw rate is subjected to fitting processing, so that jitter data is eliminated, the data signal-to-noise ratio is improved, the lane change intention of the target object is predicted more accurately later, and misjudgment and missed judgment are reduced.
As shown in fig. 5, on the basis of the embodiment shown in fig. 4, the foregoing step 2031 "fitting the yaw rates at a plurality of times, and determining the predicted yaw rate at each time" may include the following steps 20311 to 20312:
In step 20311, fitting parameters of a yaw rate fitting function are determined based on the yaw rates at each time instant.
In some examples, the yaw rate fitting function is a function of yaw rate over time. The yaw rate fitting function is used to determine a predicted yaw rate for each moment.
Step 20312, determining a predicted yaw rate for each moment based on the yaw rate fitting function.
After the yaw rate at each moment is obtained, the predicted yaw rate at each moment may be determined in a number of ways. For example, sliding window filtering and polynomial fitting may be used. Of course, other ways of determining the predicted yaw rate at each moment may also be used, and the present disclosure is not limited in this regard.
In some examples, the determination of the predicted yaw rate for each time instant using sliding window filtering and polynomial fitting may be performed by first setting a yaw rate fitting function, then determining fitting parameters for the yaw rate fitting function based on the sliding window, and finally determining the predicted yaw rate for each time instant based on the yaw rate fitting function and the fitting parameters in the yaw rate fitting function.
The yaw rate fitting function may be set as a polynomial function, depending on the current driving environment and vehicle behavior, although other types of functions are also possible, which the present disclosure is not limited to.
Illustratively, the yaw rate fitting function ω_fit(t) satisfies the following expression:

ω_fit(t) = c₀ + c₁t + c₂t² + … + cᵢtⁱ + … + cₙtⁿ

where c₀, c₁, c₂, …, cᵢ, …, cₙ are fitting parameters.
After the yaw rate fitting function is set, fitting parameters of the yaw rate fitting function can be determined based on the running data at a plurality of moments.
For example, determining fitting parameters of the yaw rate fitting function based on the travel data at the plurality of time instants may include setting sliding parameters of the sliding window, dividing the travel data at the plurality of time instants into at least one sliding window according to the sliding parameters of the sliding window, and determining fitting parameters of the yaw rate fitting function based on the travel data in each sliding window.
The sliding parameters of the sliding window may include the size of the sliding window and the step size of the sliding window. For example, the size of each sliding window is M, i.e., each sliding window includes M data points, the data points within one sliding window being (t₁, ω₁), (t₂, ω₂), …, (t_M, ω_M); e.g., M = 5. The step size of the sliding window is N, e.g., N = 1.

For example, in connection with step 2021, the travel data at the plurality of moments may comprise 25 frames of running data. With a sliding window size M = 5 and a step size N = 1, the 25 frames of running data can be divided into 21 sliding windows, each including 5 frames of running data.

The data points in the 1st sliding window are (t₁, ω₁), (t₂, ω₂), …, (t₅, ω₅); the data points in the 2nd sliding window are (t₂, ω₂), (t₃, ω₃), …, (t₆, ω₆); the data points in the 20th sliding window are (t₂₀, ω₂₀), (t₂₁, ω₂₁), …, (t₂₄, ω₂₄); and the data points in the 21st sliding window are (t₂₁, ω₂₁), (t₂₂, ω₂₂), …, (t₂₅, ω₂₅).

Since the step size N = 1 is smaller than the window size, the same data point can appear in several sliding windows. For example, (t₂, ω₂) appears in both the first and second sliding windows, (t₃, ω₃) appears in the first, second, and third sliding windows, and so on.
The process of determining the fitting parameters of the yaw rate fitting function based on the travel data within each sliding window may include performing a polynomial fit to the travel data within each sliding window to obtain the fitting parameters of the yaw rate fitting function. It will be appreciated that since the data points within each sliding window are different, the fitting parameters of the determined yaw rate fitting function are different on a per sliding window basis.
Illustratively, after the data points within each sliding window are determined, they may be substituted into the yaw rate fitting function, and the least squares method may be used to determine the fitting parameters (cᵢ). Substituting the fitting parameters back into the yaw rate fitting function then yields the predicted yaw rate at each moment.

After the data points in the 21 sliding windows are obtained, a set of fitting parameters for the yaw rate fitting function ω_fit(t) can be determined from the data points in one sliding window using the least squares method; for the 21 sliding windows, 21 sets of fitting parameters are determined. The data points within one sliding window share the same fitting parameters; for example, the data points (t₁, ω₁), (t₂, ω₂), …, (t₅, ω₅) in the 1st sliding window correspond to one consistent set of fitting parameters.

Since the same data point can exist in different sliding windows, one data point may correspond to multiple sets of fitting parameters. For example, (t₂, ω₂) is included in both the 1st and 2nd sliding windows, so one set of fitting parameters for (t₂, ω₂) is determined from the 1st sliding window and another set from the 2nd sliding window.

In this case, the sets of fitting parameters corresponding to such a data point may be averaged to obtain the processed fitting parameters for that data point. That is, the set of fitting parameters for (t₂, ω₂) from the 1st sliding window and the set from the 2nd sliding window are averaged to obtain the processed fitting parameters for (t₂, ω₂). The predicted yaw rate for the data point is then obtained from its processed fitting parameters and the yaw rate fitting function. The processed fitting parameters and predicted yaw rates of the other data points are derived in the same manner and are not described in detail here.
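A sketch of this sliding-window fit, under the stated parameters M = 5 and N = 1, might look as follows; the polynomial degree is an illustrative assumption.

```python
# A sketch of steps 20311-20312: window size M = 5, step N = 1, least-squares
# polynomial fitting per window, and averaging over windows that share a point.
import numpy as np

def predicted_yaw_rates(t: np.ndarray, omega: np.ndarray, M: int = 5, degree: int = 2) -> np.ndarray:
    pred_sum = np.zeros_like(omega, dtype=float)
    counts = np.zeros_like(omega, dtype=float)
    for start in range(len(omega) - M + 1):            # step size N = 1
        sl = slice(start, start + M)
        coeffs = np.polyfit(t[sl], omega[sl], degree)  # fitting parameters for this window
        pred_sum[sl] += np.polyval(coeffs, t[sl])      # per-window prediction
        counts[sl] += 1
    return pred_sum / counts                           # mean over overlapping windows
```

The sketch averages per-window predictions rather than the fitting parameters themselves; because evaluating a polynomial is linear in its coefficients, this is equivalent to first averaging a shared point's fitting parameters and then evaluating, as described above.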
The disclosed embodiments filter and reduce data noise by polynomial fitting of the data points within each sliding window, after which the fitting error can be calculated using the predicted yaw rate. Compared with approaches that apply simple filtering or moving averages to the data, this approach is more efficient and captures the trends and patterns of the data more accurately.
As shown in fig. 6, on the basis of the embodiment shown in fig. 5, the foregoing step 2032 "determining yaw rate errors at each time based on the yaw rates at each time and the predicted yaw rates at each time" may include the following steps 20321 to 20322:
step 20321, determining a difference between the yaw rate at each time and the predicted yaw rate corresponding to each time.
After determining the yaw rate at each time and the predicted yaw rate corresponding to each time, the yaw rate at each time and the predicted yaw rate corresponding to each time may be differenced to obtain a difference between the yaw rate at each time and the predicted yaw rate corresponding to each time.
Step 20322, determining yaw rate errors at each time based on the difference corresponding to each time.
After determining the difference value corresponding to each moment, the yaw rate error at each moment can be determined based on the difference value.
For example, the yaw rate error at each moment may be determined by squaring, for each data point in a sliding window, the difference between its yaw rate and its predicted yaw rate, and summing these squares to obtain the yaw rate error for that sliding window. The yaw rate errors corresponding to all data points within one sliding window are the same.
The yaw rate error E_fit(t) satisfies the following expression:

E_fit(t) = Σ_{i=1}^{M} (ωᵢ − ω_fit(tᵢ))²

where M is the size of the sliding window, ωᵢ is the yaw rate of the i-th frame, and ω_fit(tᵢ) is the predicted yaw rate of the i-th frame.
As can be seen from the above expression, the yaw rate error is calculated within a sliding window. As noted in step 20312, the same data point can exist in different sliding windows, so one data point may correspond to multiple yaw rate errors. In this case, the multiple yaw rate errors for a data point may be averaged to obtain the processed yaw rate error for that data point. For example, the yaw rate error for (t₂, ω₂) from the 1st sliding window and the one from the 2nd sliding window are averaged to obtain the processed yaw rate error for (t₂, ω₂). The processed yaw rate errors of the other data points are derived in the same manner and are not described in detail here.
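A sketch of this error computation, reusing the predicted yaw rates from the fitting sketch above:

```python
# A sketch of the per-window yaw rate error: squared differences summed within
# each window, with points shared by several windows receiving the mean error.
import numpy as np

def yaw_rate_errors(omega: np.ndarray, omega_fit: np.ndarray, M: int = 5) -> np.ndarray:
    err_sum = np.zeros_like(omega, dtype=float)
    counts = np.zeros_like(omega, dtype=float)
    for start in range(len(omega) - M + 1):
        sl = slice(start, start + M)
        E_fit = np.sum((omega[sl] - omega_fit[sl]) ** 2)  # E_fit(t) for this window
        err_sum[sl] += E_fit                              # shared by every point in the window
        counts[sl] += 1
    return err_sum / counts                               # mean over overlapping windows
```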
It will be appreciated that after the yaw rate at each time and the predicted yaw rate corresponding to each time are obtained, the yaw rate error at each time may also be determined according to the yaw rate at each time and the mean square error of the predicted yaw rate corresponding to each time, which is not limited in this disclosure.
According to the embodiment of the disclosure, after the yaw rate and the predicted yaw rate of each moment are obtained, the yaw rate error of each moment can be further obtained, abnormal data in the driving data can be eliminated based on the yaw rate error, and the accuracy of identification is improved.
As shown in fig. 7, the step 204 "determining the lane change intention recognition result of the target object based on the running data at each time, the adjusted yaw rate at each time, and the yaw rate error at each time" may include the following steps 2041 to 2043:
Step 2041, determining a first lane change state based on the travel angle difference in the travel data at each time, the adjustment yaw rate at each time, and the yaw rate error at each time.
In the present disclosure, whether the target object changes lanes may be determined from two aspects: the first aspect is judged from the current dynamic driving trajectory of the target object, and the second aspect from the historical dynamic driving trajectory of the target object. Only when both aspects indicate a lane change trend is the target object considered to have a lane change intention.
Wherein the first aspect may be determined by a first lane change status and the second aspect may be determined by a second lane change status.
Since the current dynamic running trajectory of the target object is related to the yaw rate and the travel angle difference at each moment, after the adjusted yaw rate and the yaw rate error at each moment are obtained, the first lane change state can be determined from them together with the travel angle difference of the target object. The travel angle difference is the angle of the traveling direction of the target object relative to a coordinate axis direction of the host vehicle coordinate system; it can be obtained as the difference between the angle of the target object's traveling direction relative to a coordinate axis of the world coordinate system and the angle of the host vehicle's traveling direction relative to the same axis. Illustratively, the coordinate axis of the host vehicle coordinate system is its X axis, and the coordinate axis of the world coordinate system is also its X axis.
The vehicle coordinate system uses the vehicle center point of the vehicle as an origin (or the center point of the rear axle of the vehicle as the origin), the X axis of the vehicle coordinate system is parallel to the running direction of the vehicle, the Y axis is perpendicular to the running direction of the vehicle, and the Z axis is perpendicular to the ground. The positive X-axis direction is the front of the vehicle, the positive Y-axis direction is the right side of the vehicle, and the positive Z-axis direction is the upper side of the vehicle.
The world coordinate system is a global coordinate system fixed to the ground for describing the position of the vehicle in the environment. Wherein, the X axis of the world coordinate system is parallel to the advancing direction of the road, the Y axis is perpendicular to the advancing direction of the road, and the Z axis is perpendicular to the ground.
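As a small sketch of this computation, both headings can be measured against the world X axis and differenced; the wrap-around handling is an implementation assumption.

```python
# A sketch of the travel angle difference: the target object's heading minus
# the host vehicle's heading, both relative to the world X axis (radians).
import math

def travel_angle_difference(target_heading_world: float, ego_heading_world: float) -> float:
    diff = target_heading_world - ego_heading_world
    # Wrap into (-pi, pi] so small physical deviations stay small numbers
    # (an implementation assumption, not specified by the disclosure).
    return math.atan2(math.sin(diff), math.cos(diff))
```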
Step 2042 determines a second lane change state based on the travel data at each time and the adjusted yaw rate at each time.
Since the historical dynamic running trajectory of the target object is related to the yaw rate and the running data at each moment, after the adjusted yaw rate at each moment is obtained, the second lane change state can be determined from the adjusted yaw rate and the running data at each moment.
Step 2043, determining a lane change intention recognition result of the target object based on the first lane change state and the second lane change state.
When the first lane change state indicates a lane change trend and the second lane change state also indicates a lane change trend, the lane change intention recognition result is that the target object has a lane change trend, and the driving strategy of the host vehicle can then be output according to this result.
In the embodiment provided by the disclosure, whether the target object has a lane change intention is judged from two aspects (namely, the current and the historical dynamic behavior trajectories of the target object); compared with the static threshold approach in the related art, this is more accurate and better matches actual driving behavior.
As shown in fig. 8, the step 2041 "of determining the first lane change state based on the travel angle difference in the travel data at each time, the yaw rate adjustment at each time, and the yaw rate error at each time" based on the embodiment shown in fig. 7 may include the following steps 20411 to 20413:
step 20411, determining a dynamic lane change angle threshold and a crossing angle threshold for each time based on the adjusted yaw rate for each time and the yaw rate error for each time.
After the adjusted yaw rate and the yaw rate error at each moment are obtained, the dynamic lane change angle threshold and the crossing angle threshold at each moment can be determined based on them.
The lane change angle refers to the angle difference between the actual driving direction of the target object and the original lane direction in the lane change process. The crossing angle refers to an angular difference between a vehicle traveling direction and a target lane direction in a process that the vehicle crosses from an original lane to the target lane.
The lane change angle directly relates to the collision risk of the own vehicle and the target object, and the possibility of collision between the target object and the own vehicle is increased due to the fact that the lane change angle of the target object is too large, so that lane change of the own vehicle is prevented. The proper ride-through angle helps to maintain stability of the vehicle during lane changes. However, when the crossing angle is too large, the center of gravity of the vehicle may be severely shifted, affecting the handling and stability of the target object.
Illustratively, the dynamic lane change angle threshold at each moment satisfies an expression in terms of the adjusted yaw rate ω_adjusted(t) and the yaw rate error E_fit(t), and the crossing angle threshold likewise satisfies a corresponding expression, where k₁, k₂, k₃, and k₄ are preset parameters.
Since the target object is continuously driving, the collected driving data changes from moment to moment; the dynamic lane change angle threshold and the crossing angle threshold therefore also change dynamically over time.
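The threshold expressions themselves are not reproduced above; the sketch below assumes the simplest form consistent with the four preset parameters k₁–k₄, namely a linear combination of the adjusted yaw rate and the yaw rate error, with placeholder values.

```python
# A hedged sketch of step 20411. The linear-combination form and the k values
# are assumptions; only the inputs (ω_adjusted(t), E_fit(t)) and the four
# preset parameters k1-k4 come from the description above.
def dynamic_thresholds(omega_adjusted_t: float, e_fit_t: float,
                       k1: float = 0.5, k2: float = 0.2,
                       k3: float = 1.0, k4: float = 0.4) -> tuple[float, float]:
    lane_change_threshold = k1 * omega_adjusted_t + k2 * e_fit_t  # assumed form
    crossing_threshold = k3 * omega_adjusted_t + k4 * e_fit_t     # assumed form
    return lane_change_threshold, crossing_threshold
```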
Step 20412, determining the number of valid time points among the plurality of time points based on the travel angle difference at each time point, the dynamic lane change angle threshold at each time point, and the crossing angle threshold.
After the running data of a plurality of moments are collected, the effective moment can be determined from the moments based on the running angle difference of each moment, the dynamic lane change angle threshold value and the crossing angle threshold value of each moment, so that the first lane change state can be further obtained according to the data of the effective moment.
In general, when the travel angle difference is small, it is indicated that the travel direction of the vehicle is not greatly changed, and lane change is unlikely. When the travel angle difference is large, it is indicated that the vehicle is turning to a large extent, and there is a large possibility of lane change. Therefore, the lane change intention of the target object can be accurately obtained by utilizing the driving angle difference, the dynamic lane change angle threshold and the crossing angle threshold.
Step 20413, in response to the number of valid moments being greater than a number threshold, determining that the first lane change state indicates a lane change trend.
In some examples, once the number of valid moments (n_valid) is determined, n_valid may be compared with a number threshold n_threshold to determine the first lane change state.

Specifically, when the number of valid moments is greater than the number threshold, i.e., n_valid > n_threshold, the first lane change state is considered to indicate a lane change trend. When the number of valid moments is less than or equal to the number threshold, i.e., n_valid ≤ n_threshold, the first lane change state is considered to indicate no lane change trend.
The present disclosure does not limit the order in which the first and second lane change states are determined: the first may be determined before the second, the second before the first, or both simultaneously. As known from step 2043, the lane change intention recognition result of the target object indicates a lane change trend only if both the first and the second lane change states indicate a lane change trend. Therefore, when either the first or the second lane change state indicates no lane change trend, the other state does not need to be checked, and the lane change intention recognition result can be directly determined to indicate no lane change trend. When the state determined first indicates a lane change trend, the other state still needs to be checked in order to finally output the lane change intention recognition result of the target object.
In combination with the foregoing, the lane change angle and the crossing angle have an important influence on whether a lane change can be performed safely. The embodiment of the application therefore sets the dynamic lane change angle threshold and the crossing angle threshold based on the adjusted yaw rate and the yaw rate error, determines the valid moments using these thresholds, and finally obtains the lane change intention of the target object from the number of valid moments.
As shown in fig. 9, based on the embodiment shown in fig. 8, the step 20412 "determining the number of valid time points in the plurality of time points based on the travel angle difference at each time point, the dynamic lane change angle threshold value and the crossing angle threshold value" may include the following steps 204121 to 204123:
Step 204121, determining a first magnitude relation between the running angle difference and the dynamic lane change angle threshold at each moment and a second magnitude relation between the running angle difference and the crossing angle threshold at each moment.
After the dynamic lane change angle threshold and the crossing angle threshold of each moment are obtained, the running angle difference of each moment can be compared with the dynamic lane change angle threshold to obtain a first magnitude relation between the running angle difference of each moment and the dynamic lane change angle threshold, and the running angle difference of each moment is compared with the crossing angle threshold to obtain a second magnitude relation between the running angle difference of each moment and the crossing angle threshold. The effective time may then be determined based on the first size relationship and the second size relationship.
In some examples, the first magnitude relationship between the travel angle difference at each time instant and the dynamic lane change angle threshold includes the travel angle difference at each time instant being greater than the dynamic lane change angle threshold and the travel angle difference at each time instant being less than or equal to the dynamic lane change angle threshold.
The second magnitude relation between the travel angle difference and the crossing angle threshold value at each time includes that the travel angle difference at each time is greater than or equal to the crossing angle threshold value and the travel angle difference at each time is less than the crossing angle threshold value.
Step 204122, determining a moment to be a valid moment in response to the first magnitude relation for that moment being that the travel angle difference is greater than the dynamic lane change angle threshold and the second magnitude relation being that the travel angle difference is less than the crossing angle threshold.
In some examples, a valid moment is a moment at which a preset condition is met: the travel angle difference is greater than the dynamic lane change angle threshold and less than the crossing angle threshold. That is, denoting the travel angle difference by Δθ, the travel angle difference at a valid moment satisfies the following expression:

dynamic lane change angle threshold < Δθ < crossing angle threshold
Combining step 204121, the first magnitude relation between the travel angle difference and the dynamic lane change angle threshold and the second magnitude relation between the travel angle difference and the crossing angle threshold are determined for each moment; the valid moments among the multiple moments are then determined by combining the preset condition with the first and second magnitude relations.
Step 204123, determining a number of active moments in the plurality of moments.
In combination with step 204121 and step 204122, the valid time may be determined from a plurality of times, and then the valid times may be counted to determine the number of valid times.
In combination with step 20411, the lane change angle and the crossing angle reflect, to a certain extent, whether a lane change can be completed safely. The embodiment of the application therefore uses the dynamic lane change angle threshold and the crossing angle threshold to determine the valid moments exhibiting a lane change trend, and then accurately outputs the first lane change state according to the number of valid moments.
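Putting steps 20411 to 20413 together, a sketch of the first-state check could be as follows; n_threshold is a placeholder name for the number threshold.

```python
# A sketch of the first lane change state: count the moments whose travel
# angle difference lies strictly between the two dynamic thresholds, and
# report a lane change trend when enough moments are valid.
import numpy as np

def first_lane_change_state(angle_diffs: np.ndarray,
                            lane_change_thresholds: np.ndarray,
                            crossing_thresholds: np.ndarray,
                            n_threshold: int = 10) -> bool:
    valid = (angle_diffs > lane_change_thresholds) & (angle_diffs < crossing_thresholds)
    n_valid = int(np.sum(valid))   # number of valid moments
    return n_valid > n_threshold   # placeholder threshold, not from the disclosure
```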
As shown in fig. 10, the step 2042 "of determining the second lane change state based on the running data at each time and the adjusted yaw rate at each time" may include the following steps 20421 to 20422:
Step 20421, determining an average yaw rate corresponding to the preset time window based on the adjusted yaw rates at each time.
In some examples, after determining the adjusted yaw rate for each of the plurality of moments, a preset time window may be set, through yaw rate data within which the lane change intent of the target object is further determined.
Specifically, the lane change intention of the target object may be determined by the average yaw rate in the preset time window, and of course, other yaw rate data in the preset time window may also be used for determination, which is not limited in the present disclosure.
Illustratively, the average yaw rate ω_mean(t) satisfies the following expression:

ω_mean(t) = (1/T) × Σ_{i=1}^{T} ω_adjusted(tᵢ)

For example, if the length of the preset time window is T with T = 15, i.e., one preset time window contains 15 frames of running data, then the average yaw rate is derived from the adjusted yaw rate of each of those 15 frames.
In general, if the average yaw rate is low and relatively stable, it means that the target object remains in a relatively straight running state for a long period of time, and the lane will not be changed with a high probability. Because the change in direction of the vehicle is small when traveling straight, the yaw rate is maintained at a low level. When the average yaw rate starts to increase and exceeds a certain threshold, it is indicated that the target object is performing a steering operation, and there is a possibility of lane change. If the average yaw rate continues to be high, this indicates that the direction of travel of the target object has changed more significantly and continuously. Thus, a more accurate second lane change state may be output using the average yaw rate.
Step 20422, determining a second lane change state based on the average yaw rate and a difference in travel angle in the last time travel data within the preset time window.
After the average yaw rate is determined, the second lane change state may be determined based on the average yaw rate and the travel angle difference in the running data at the last moment within the preset time window. The second lane change state is either that the target object has a lane change trend or that it does not.
In the embodiment of the application, the second lane change state is judged by adopting the average yaw rate in the preset time window and the running angle difference at the last moment in the preset time window. Since the average yaw rate is derived from the yaw rate over a period of time, this way of evaluation is more consistent with the real-time driving state of the target object. This approach is more consistent with the actual driving behavior and is also more accurate than the static threshold evaluation method.
As shown in fig. 11, the step 20422 "determining the second lane change state based on the average yaw rate and the travel angle difference in the travel data at the last time within the preset time window" may include the following steps 204221 to 204222, based on the embodiment shown in fig. 10:
Step 204221, determining a third magnitude relation between the average yaw rate and the yaw rate threshold, and a fourth magnitude relation between the running angle difference in the running data at the last moment in the preset time window and the angle difference threshold.
In some examples, a yaw rate threshold and an angle difference threshold may be preset.

The average yaw rate is compared with the yaw rate threshold to obtain the third magnitude relation, and the travel angle difference in the running data at the last moment within the preset time window is compared with the angle difference threshold to obtain the fourth magnitude relation. The second lane change state may then be determined based on the third and fourth magnitude relations.
In some examples, the third magnitude relation between the average yaw rate and the yaw rate threshold is either that the average yaw rate is greater than the yaw rate threshold or that it is less than or equal to the yaw rate threshold.

Likewise, the fourth magnitude relation between the travel angle difference at the last moment and the angle difference threshold is either that the travel angle difference is greater than the angle difference threshold or that it is less than or equal to the angle difference threshold.
Step 204222, determining that the second lane change state indicates a lane change trend in response to the third magnitude relation being that the average yaw rate is greater than the yaw rate threshold and the fourth magnitude relation being that the travel angle difference in the running data at the last moment within the preset time window is greater than the angle difference threshold.
In combination with step 204221, the third magnitude relation between the average yaw rate and the yaw rate threshold and the fourth magnitude relation between the travel angle difference at the last moment within the preset time window and the angle difference threshold are determined. When the third magnitude relation is that the average yaw rate is greater than the yaw rate threshold and the fourth magnitude relation is that the travel angle difference at the last moment is greater than the angle difference threshold, the second lane change state is determined to indicate a lane change trend.
In combination with step 20421, the average yaw rate can reflect the lane change intention of the target object to a certain extent. After obtaining the average yaw rate, the embodiment of the application therefore compares it with the yaw rate threshold and compares the travel angle difference with the angle difference threshold, thereby obtaining an accurate second lane change state.
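A sketch of the second-state check, with T = 15 as stated above and placeholder threshold values:

```python
# A sketch of steps 20421-20422: average the adjusted yaw rate over the last
# T = 15 frames and compare it, together with the last travel angle difference,
# against preset thresholds. The threshold values are placeholders.
import numpy as np

def second_lane_change_state(omega_adjusted: np.ndarray,
                             angle_diffs: np.ndarray,
                             omega_threshold: float = 0.05,
                             angle_threshold: float = 0.10,
                             T: int = 15) -> bool:
    omega_mean = float(np.mean(omega_adjusted[-T:]))  # ω_mean(t) over the window
    last_angle_diff = float(angle_diffs[-1])          # last moment in the window
    return omega_mean > omega_threshold and last_angle_diff > angle_threshold
```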
Exemplary Apparatus
Based on the foregoing embodiments, the embodiments of the present disclosure provide a lane change intention recognition apparatus. Each module included in the apparatus, and each unit included in each module, may be implemented by a processor in a computer device, or of course by a specific logic circuit. In the implementation process, the processor may be a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
Fig. 12 is a schematic diagram of the composition structure of a lane change intention recognition apparatus according to an exemplary embodiment of the present disclosure, and as shown in fig. 12, a lane change intention recognition apparatus 1200 includes:
A first determining module 1201 is configured to determine travel data of the target object at a plurality of moments.
A second determining module 1202, configured to adjust yaw rates in the running data at each time based on the running data at a plurality of times, and obtain adjusted yaw rates at each time.
A third determining module 1203 is configured to determine yaw rate errors at each moment based on the yaw rates at each moment.
A fourth determining module 1204, configured to determine a lane change intention recognition result of the target object based on the travel data at each time, the adjustment yaw rate at each time, and the yaw rate error at each time.
In some embodiments, the second determining module 1202 is further configured to determine a dynamic adaptive factor based on yaw rate, speed, and acceleration in the driving data at a plurality of times, and adjust yaw rates at each time based on the dynamic adaptive factor to obtain adjusted yaw rates at each time.
In some embodiments, the third determining module 1203 is further configured to perform a fitting process on the yaw rates at the plurality of times, determine a predicted yaw rate at each time, and determine a yaw rate error at each time based on the yaw rate at each time and the predicted yaw rate at each time.
In some embodiments, the third determining module 1203 is further configured to determine fitting parameters of a yaw rate fitting function based on the yaw rate at each time instant, and determine a predicted yaw rate at each time instant based on the yaw rate fitting function.
In some embodiments, the third determination module 1203 is further configured to determine a difference between the yaw rate at each time and the predicted yaw rate corresponding to each time, and determine a yaw rate error at each time based on the difference corresponding to each time.
In some embodiments, the fourth determining module 1204 is further configured to determine a first lane change state based on the travel angle difference in the travel data at each time, the adjusted yaw rate at each time, and the yaw rate error at each time, determine a second lane change state based on the travel data at each time and the adjusted yaw rate at each time, and determine a lane change intention recognition result of the target object based on the first lane change state and the second lane change state.
In some embodiments, the fourth determining module 1204 is further configured to determine a dynamic lane-change angle threshold and a crossing angle threshold for each time based on the adjusted yaw rate for each time and the yaw rate error for each time, determine a number of valid times among the plurality of times based on the travel angle difference for each time, the dynamic lane-change angle threshold and the crossing angle threshold for each time, and determine that the first lane-change state has a lane-change trend in response to the number of valid times being greater than the number threshold.
In some embodiments, the fourth determining module 1204 is further configured to determine a first magnitude relation between the travel angle difference and the dynamic lane-change angle threshold at each time, and a second magnitude relation between the travel angle difference and the crossing angle threshold at each time, determine that a time is an effective time in response to the first magnitude relation corresponding to each time being a travel angle difference greater than the dynamic lane-change angle threshold and the second magnitude relation being a travel angle difference less than the crossing angle threshold, and determine a number of effective times in the plurality of times.
In some embodiments, the fourth determining module 1204 is further configured to determine an average yaw rate corresponding to the preset time window based on the adjusted yaw rates at each time, and determine the second lane change state based on the average yaw rate and a travel angle difference in the travel data at a last time within the preset time window.
In some embodiments, the fourth determining module 1204 is further configured to determine a third magnitude relation between the average yaw rate and the yaw rate threshold, and a fourth magnitude relation between the angle difference of travel and the angle difference threshold in the travel data at the last time within the preset time window, and determine that the second lane change state has a lane change trend in response to the third magnitude relation being that the average yaw rate is greater than the yaw rate threshold and the fourth magnitude relation being that the angle difference of travel in the travel data at the last time within the preset time window is greater than the angle difference threshold.
It should be noted that the above descriptions of the apparatus embodiments are similar to the descriptions of the method embodiments and have similar advantageous effects. For technical details not disclosed in the apparatus embodiments of the present disclosure, and for the corresponding advantageous technical effects, reference is made to the descriptions of the method embodiments of the present disclosure, which are not repeated here.
Exemplary Electronic Device
Fig. 13 is a block diagram illustrating components of an electronic device according to an exemplary embodiment of the present disclosure, and as shown in fig. 13, an electronic device 1300 includes at least one processor 1301 and a memory 1302.
Processor 1301 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in electronic device 1300 to perform desired functions.
Memory 1302 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Nonvolatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 1301 to implement the lane change intention recognition methods of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device 1300 may also include an input device 1303 and an output device 1304, which are interconnected via a bus system and/or other form of connection mechanism (not shown).
The input device 1303 may include, for example, a keyboard, a mouse, and the like.
The output device 1304 may output various information to the outside, which may include, for example, a display, a speaker, a printer, and a communication network and a remote output apparatus connected thereto, and the like.
Of course, for simplicity, only some of the components of the electronic device 1300 that are relevant to the present disclosure are shown in fig. 13; components such as buses and input/output interfaces are omitted. In addition, the electronic device 1300 may include any other suitable components depending on the particular application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also provide a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the lane change intention recognition method of the embodiments of the present disclosure described in the "Exemplary Method" section above.
The computer program product may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's computing device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps in the lane change intention recognition method of the embodiments of the present disclosure described in the "Exemplary Method" section above.
A computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, but the advantages, benefits, effects, etc. mentioned in this disclosure are merely examples and are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
Various modifications and alterations to this disclosure may be made by those skilled in the art without departing from the spirit and scope of the application. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.