CN120363931B - Method and device for presenting assisted driving decision-making process - Google Patents

Method and device for presenting assisted driving decision-making process

Info

Publication number
CN120363931B
CN120363931B (Application CN202510855324.1A)
Authority
CN
China
Prior art keywords
vehicle
decision
information
related information
auxiliary driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510855324.1A
Other languages
Chinese (zh)
Other versions
CN120363931A (en)
Inventor
张砚超
魏鹏飞
李欣静
段晨宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd and Geely Automobile Research Institute Ningbo Co Ltd
Priority to CN202510855324.1A
Publication of CN120363931A
Application granted
Publication of CN120363931B
Legal status: Active
Anticipated expiration

Abstract

The invention provides a method and a device for presenting a driving-assistance decision process. The method comprises: when a vehicle has enabled its driving-assistance function, acquiring decision-related information corresponding to the vehicle's driving-assistance module, the decision-related information being information that influences the decision process of the driving-assistance module; and outputting the decision-related information in the cabin of the vehicle, so that occupants in the cabin can perceive the decision process.

Description

Presentation method and device for auxiliary driving decision process
Technical Field
The present invention relates to the field of driving assistance of vehicles, and in particular, to a method and an apparatus for presenting a driving assistance decision making process.
Background
Driving-assistance functions are now widely deployed across many vehicle types. As driving-assistance technology develops, its decision logic has become more opaque and less interpretable: users often have difficulty learning why a given driving-assistance decision was made and how the specific decision process unfolded. This reduces users' trust in, and willingness to use, driving-assistance functions, and calls for improvement.
In the related art, a warning is typically issued to the driver only when a boundary condition of a system function is triggered (e.g., a sensor failure or a functional failure). This approach lets the user learn only the cause or result of the fault; the specific decision process of the driving assistance remains unknown.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for presenting a driving-assistance decision process, which remedy the deficiencies in the related art by outputting the decision-related information of a driving-assistance module.
Specifically, the invention is realized by the following technical scheme:
According to a first aspect of the present invention, there is provided a method for presenting a driving-assistance decision process, comprising:
Under the condition that a vehicle starts an auxiliary driving function, acquiring decision-related information corresponding to an auxiliary driving module of the vehicle, wherein the decision-related information is used for influencing a decision process of the auxiliary driving module;
Outputting the decision-related information in a cabin of the vehicle to make an occupant in the cabin perceive the decision process.
According to a second aspect of the present invention, there is provided a presentation apparatus for a driving-assistance decision process, comprising:
The information acquisition unit is used for acquiring decision-related information corresponding to an auxiliary driving module of the vehicle under the condition that the auxiliary driving function of the vehicle is started, wherein the decision-related information is used for influencing a decision process of the auxiliary driving module;
and the information output unit is used for outputting the decision-related information in the cabin of the vehicle so as to enable passengers in the cabin to perceive the decision process.
According to a third aspect of the present invention, there is provided a vehicle comprising:
a processor, a memory for storing processor-executable instructions, and a driving assistance module for providing driving assistance functionality;
wherein the processor implements the method of the preceding first aspect by executing the executable instructions.
According to a fourth aspect of the present invention there is provided a computer readable storage medium having stored thereon computer instructions which when executed by a processor perform the steps of the method of the first aspect described above.
The technical scheme provided by the embodiment of the invention can comprise the following beneficial effects:
As can be seen from the above embodiments, in the case that the vehicle has started the auxiliary driving function, the present solution obtains the decision-related information affecting the decision process of the auxiliary driving module of the vehicle, and outputs the information in the cabin so that the occupant in the cabin perceives the decision process of the auxiliary driving module.
It will be appreciated that the decision-related information influences the decision process of the driving-assistance module, i.e., it affects how the module performs at least one key step (e.g., perception, prediction, or regulation) of the decision link. Outputting this information to occupants therefore not only improves the interpretability of the module's decision process, but also lets occupants learn the presented decision process accurately and comprehensively. This improves the decision transparency of the driving-assistance module, reduces the occupants' difficulty of understanding, and in turn helps increase their trust in, and willingness to use, the driving-assistance function.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the following description will make a brief introduction to the drawings used in the description of the embodiments or the prior art. It is evident that the drawings in the following description are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a schematic diagram of an architecture of a driving assistance system according to an embodiment of the present invention;
FIG. 2 is a flow chart of a presentation method of a driving assistance decision making process according to an embodiment of the present invention;
FIG. 3 is a flow chart of another method of presenting a driving assistance decision making process, shown in an embodiment of the invention;
FIG. 4 is a schematic diagram showing a display effect of decision-related information according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the cameras corresponding to exterior images according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an evaluation mode of the current safety state according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a path scoring process according to an embodiment of the present invention;
FIG. 8 is a schematic block diagram of a vehicle according to an embodiment of the present invention;
Fig. 9 is a block diagram of a presentation apparatus for a driving-assistance decision process, according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the invention. The term "if" as used herein may be interpreted as "when," "upon," or "in response to determining," depending on the context.
At present, driving-assistance functions are widely applied across vehicle types. As driving-assistance technology develops, its decision logic becomes more opaque and less interpretable: a user often has difficulty learning the reason for a given driving-assistance decision and its specific decision process, i.e., the complete decision link comprising key steps such as perception, prediction, and regulation (i.e., planning and control) cannot be known. This reduces the user's trust in, and willingness to use, the driving-assistance function, hinders the popularization of such functions, and calls for improvement.
In the related art, a reminder is typically given to the driver only when a boundary condition of a system function is triggered (e.g., a sensor failure or a functional failure). However, this only lets the user learn the cause or result of the fault; the specific decision process of the driving assistance, i.e., the complete decision link, still cannot be known, so the above technical problem is not effectively solved.
Therefore, the present application provides a new presentation scheme for the driving-assistance decision process: the decision-related information of the driving-assistance decision process is acquired and actively presented to occupants, so that a user can accurately and comprehensively learn the specific decision process the driving-assistance module follows when realizing the driving-assistance function. This improves the interpretability and transparency of the driving assistance and, in turn, occupants' trust in and willingness to use it. The present application is described in detail below with reference to the drawings and embodiments.
Fig. 1 is a schematic diagram of a hardware architecture of a driving-assistance system according to an embodiment of the present application. As shown in fig. 1, from a hardware perspective the system may include only a vehicle 11, or may include both the vehicle 11 and a server 13. If the system includes only the vehicle 11, the driving-assistance module (a neural network, a vision-language model (VLM), etc., as described below) may be deployed in the vehicle 11, such as in a domain controller of the vehicle 11. If the system includes both the vehicle 11 and the server 13, the driving-assistance module may instead be deployed in the server 13, which is not described further herein.
In addition to the domain controller, the vehicle 11 may be equipped with various types of sensors for collecting data about the environment outside the vehicle; the number of sensors of any type may be one or more, and the embodiments of the present application do not limit the number of sensors or their installation positions. Illustratively, the sensors may include cameras (such as a front-view camera 111, side-view cameras 112, and a rear-view camera 113), a lidar 114, millimeter-wave radars, and ultrasonic radars, and may further include a radiometer (for detecting illumination intensity), a rain gauge (for detecting current rainfall), and the like. Of course, the vehicle may also be equipped with at least one in-cabin sensor (e.g., an in-cabin camera, microphone, biosensor, or smell sensor) at a suitable location to implement corresponding in-cabin functions, which the embodiments of the present application do not limit.
In addition, the vehicle may establish a network connection with a remote server 13 through a wireless communication module to perform data interaction with the server 13. For example, if the driving support module is disposed in the server 13, the vehicle 11 may transmit the environmental data collected by the sensor to the server 13 and receive the returned decision-related information, the vehicle control instruction, and the like. The server 13 may be a physical server including an independent host, or may be a virtual server, a cloud server, or the like, which is carried by a host cluster. In addition, the number, type, specific interaction manner with the vehicle, and the like of the servers 13 are not limited in the embodiment of the present application. As for the network 10 for interaction between the vehicle 11 and the server 13, communication using a wireless network of a corresponding type may be specifically selected based on a communication manner supported by the corresponding device, which is not limited by the present application.
In addition, in terms of body type, the vehicle according to the present application (e.g., the vehicle 11) may be a pickup truck, a sedan, an SUV (Sport Utility Vehicle), an RV, a truck, etc.; in terms of power form it may be a fuel vehicle or a new-energy vehicle (e.g., a hybrid vehicle, a battery-electric vehicle, a hydrogen-powered vehicle, a methanol-powered vehicle, etc.). The present application does not limit the specific form of the vehicle. The occupant 12 in the cabin may be the driver in the driver's seat or at least one passenger in another position; the present application does not limit the number of occupants or their seating positions. Of course, the driving-assistance function of the present application is used to assist the driver in driving the vehicle, which is not described further.
The driving-assistance module of the embodiments of the present application may make driving-assistance decisions from related data (such as the environment data collected by the above sensors), and its complete decision link may comprise key steps such as perception, prediction, and regulation. For example, in the perception stage it may identify at least one driving-sensitive factor such as lane lines, traffic lights, and pedestrians; in the prediction stage it may predict high-risk factors, such as pedestrian-intrusion risk, vehicle-collision risk, and slippery-road risk, based on the driving-sensitive factors and their motion parameters; and in the regulation stage it may plan a path and control the vehicle to travel along the planned path (specifically, controlling the vehicle to steer, accelerate, decelerate, etc.). The present solution outputs the decision-related information corresponding to the driving-assistance module (i.e., one or more kinds of information affecting at least one key step) to the vehicle's occupants, so that the occupants can learn the specific decision process of the module through this information.
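For illustration only (the patent itself discloses no code), the following minimal Python sketch shows how decision-related information might be collected at each stage of such a decision link; all names, fields, and the time-to-collision heuristic are assumptions made for this example:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """Illustrative container for the decision-related information of one cycle."""
    perceived_factors: list = field(default_factory=list)  # e.g. lane lines, pedestrians
    predicted_risks: list = field(default_factory=list)    # e.g. collision risks
    planned_paths: list = field(default_factory=list)      # candidate paths with scores
    control_commands: list = field(default_factory=list)   # e.g. decelerate, steer

def run_decision_link(sensor_frame: dict) -> DecisionTrace:
    trace = DecisionTrace()
    # Perception: identify driving-sensitive factors from the sensor data.
    trace.perceived_factors = sensor_frame.get("objects", [])
    # Prediction: flag factors whose time-to-collision is short (placeholder rule).
    trace.predicted_risks = [o for o in trace.perceived_factors
                             if o.get("ttc_s", float("inf")) < 3.0]
    # Regulation: plan a path and emit a control command (stubbed here).
    trace.planned_paths = [{"path_id": 0, "score": 98}]
    trace.control_commands = ["decelerate"] if trace.predicted_risks else ["keep_lane"]
    return trace

trace = run_decision_link({"objects": [{"type": "pedestrian", "ttc_s": 2.1}]})
print(trace.predicted_risks, trace.control_commands)
```

Every field of such a trace is a candidate for output in the cabin as decision-related information.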
FIG. 2 is a flow chart of a method of presenting a driving assistance decision making process for a vehicle, according to an embodiment of the invention, comprising the following steps 202-204.
Step 202, under the condition that the vehicle has started the auxiliary driving function, obtaining decision-making related information corresponding to an auxiliary driving module of the vehicle, wherein the decision-making related information is used for influencing a decision making process of the auxiliary driving module.
In an embodiment, the decision-related information may be acquired in real time and output continuously for the entire period after the driving-assistance function is enabled, so that the decision process is presented throughout. Alternatively, to prevent the output of decision-related information from disturbing occupants, a switch for disabling this output function may be provided to occupants (i.e., the users of the present solution), so that a user can turn the function off through a screen operation, a gesture, or a voice command. Obviously, in the latter case, the present solution outputs decision-related information to occupants only while the output function is enabled, which is not described further.
In an embodiment, the driving-assistance module of the present application may be built on any form of neural-network framework and trained in a supervised or unsupervised manner. By way of example, the driving-assistance module may employ a vision-language model (VLM), a multimodal artificial-intelligence model that combines computer vision (CV) and natural language processing (NLP) capabilities to understand and process image (or video) and text information simultaneously and to establish associations between the two, thereby accomplishing more complex tasks. A driving-assistance module realized by a VLM can analyze and infer across multiple dimensions based on multimodal data (such as the corresponding types of environment data collected by the various sensors), and finally reach a comprehensive decision accurately and efficiently.
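As a schematic illustration of how multimodal inputs might be assembled for such a VLM (the patent specifies no model interface; the request schema and prompt wording below are assumptions):

```python
def build_vlm_request(camera_frames: dict[str, bytes], ego_state: dict) -> dict:
    """Assemble a hypothetical multimodal request for a vision-language model."""
    return {
        "images": list(camera_frames.values()),  # e.g. front/left/right JPEG frames
        "prompt": (
            "You are a driving-assistance module. Given the camera views and an "
            f"ego speed of {ego_state['speed_kmh']} km/h, list high-risk objects, "
            "assess the scene safety level, and propose candidate paths."
        ),
    }

request = build_vlm_request({"front": b"...", "left": b"..."}, {"speed_kmh": 62})
print(request["prompt"])
```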
In the related art, after a vehicle enables a driving-assistance function, information about the vehicle's surroundings is usually output in real time; for example, a real-time three-dimensional image of the space around the vehicle is displayed on the central control screen, with driving-sensitive factors such as nearby vehicles, pedestrians, and traffic lights rendered in real time as 3D models. This improves the occupants' (especially the driver's) perception of the surroundings and helps them drive safely.
In this regard, it should be noted that the decision-related information output in this embodiment is not the above-described surrounding-environment information, but higher-dimensional, richer information that has a definite influence on the decision process of the driving-assistance module. For example, fig. 4 shows the display content of a vehicle screen. The environment display area 401 identified by a dashed box displays 3D models of the objects in the vehicle's surroundings, which does not differ substantially from the display manner of the related art. More critically, the information display area 402 identified by a dashed box displays decision-related information such as the real-time image captured by a camera, state description information of the current safety state, and multiple alternative paths rendered in real time; it is this information that presents the decision process of the driving-assistance module to the occupant.
Step 204, outputting the decision-related information in a cabin of the vehicle to make an occupant in the cabin perceive the decision process.
After the decision-related information corresponding to the driving-assistance module is acquired, the driving-assistance system can output it to occupants in the cabin, so that they can accurately learn the decision process presented by the information. The decision-related information can be output to any occupant: for example, only to the driver, to assist safe driving; only to rear passengers, so that they can perceive the vehicle's motion trend and rhythm in time and avoid motion sickness; or to the front passenger, who can check the information and remind the driver when the driver is not in a position to view it. These cases are not described further.
In an embodiment, the decision-related information may be output in the cabin in several ways to suit different scenarios. For example, a display device mounted in the cabin may be invoked to display the decision-related information in the form of text, images, or video. The display device may be a screen, such as the instrument screen in front of the driver, the central control screen in the middle of the center console, or a rear entertainment screen facing rear passengers mounted behind a front seat; or it may be a HUD (Head-Up Display) device, such as a C-HUD (Combiner HUD), a W-HUD (Windshield HUD), or an AR-HUD (Augmented Reality HUD). The following embodiments are described mainly by taking display as an example.
For another example, an audio device mounted in the cabin may be invoked to play speech corresponding to the decision-related information, e.g., having a speaker read out the decision-related information in text form, so that occupants, in particular the driver, can still learn it when viewing a screen or HUD is inconvenient.
For another example, when a network connection is established between the vehicle and an occupant's mobile terminal, the mobile terminal may be invoked to output the decision-related information. The mobile terminal may be any form of electronic device, such as a mobile phone, a tablet, a notebook computer, a personal digital assistant (PDA), a wearable device (such as smart glasses or a smart watch), a VR (Virtual Reality) device, or an AR (Augmented Reality) device. Over the wired or wireless connection between the mobile terminal and the vehicle, the vehicle can send the decision-related information to the device for output, such as displaying it as text, pictures, or video, or playing the corresponding speech, which is not described further.
In an embodiment, the decision-related information corresponding to the driving-assistance module may include at least one of: environment data, lane data, path data, etc. to be input to the driving-assistance module; the driving-sensitive factors relevant to the vehicle's driving identified (i.e., predicted) by the module; the driving path it formulates (i.e., plans); and the vehicle-control instructions it outputs for that path. It can be understood that richer decision-related information helps occupants learn the decision process more accurately and comprehensively, but too many kinds or too large an amount of information may disturb them. Which kinds of information to output, and in what form, can therefore be decided comprehensively from factors such as the vehicle's functional positioning, the identities and number of occupants, and the in-cabin speech volume; the embodiments of the present application do not limit this.
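As a toy illustration of such a policy (purely hypothetical; the patent leaves the selection logic open), one could gate the output channels and level of detail on cabin context:

```python
def select_outputs(occupants: int, cabin_noise_db: float, driver_only: bool) -> dict:
    """Illustrative output policy: which channels and how much detail to use."""
    channels = ["screen"]
    if cabin_noise_db < 60:       # speech only helps if it can actually be heard
        channels.append("voice")
    if driver_only:
        channels.append("hud")    # keep the driver's eyes close to the road
    detail = "full" if occupants == 1 else "summary"  # avoid flooding a full cabin
    return {"channels": channels, "detail": detail}

print(select_outputs(occupants=1, cabin_noise_db=48.0, driver_only=True))
```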
In an embodiment, acquiring the decision-related information may comprise determining a high-risk object identified by the driving-assistance module and acquiring at least an exterior image, captured by an exterior camera of the vehicle, that contains the high-risk object. Determining the high-risk object means obtaining its description information, such as the object's position, size, shape, color, type, and degree of risk. For the exterior image, only the image containing the object may be acquired according to the high-risk object's position information (for example, if vehicle B in the left lane is a high-risk object for vehicle A, only the exterior image captured by the left camera, which contains vehicle B, may be acquired and displayed); of course, the exterior images captured by all cameras may also be acquired and displayed uniformly, which is not described further.
Correspondingly, when the decision-related information is output, the exterior image can be displayed with the high-risk object marked at its corresponding position. It can be understood that the exterior image is captured in real time, so displaying it is in effect playing, in real time, the exterior video shot by the camera. In this way, occupants can consult the exterior image to learn the real-time environment outside the vehicle, enhancing their perception of the external environment. By marking high-risk objects, occupants can perceive the risk factors in the environment, and their positions, more accurately and quickly, which helps them react early (such as turning the steering wheel or braking) and avoid accidents.
As shown in fig. 3, after the occupant enables the driving-assistance function, the identification information of the driving-assistance AI+ can be seen (for the display effect, see the identifier 410 in fig. 4). Thereafter, on the one hand, at least one corresponding target camera can be looked up (i.e., the "determined target camera number") from the driving-assistance decision data of the module according to the high-risk object's position information, and the exterior image or video shot by that target camera can then be acquired and displayed. On the other hand, the position of the high-risk object can be mapped, according to its position information, to the corresponding position in the exterior image or video, where it is then marked.
The high-risk object can be marked in various ways. For example, given that the high-risk object is already contained in the exterior image or video, its display parameters in the image can be adjusted to highlight it, or a hazard identifier can be displayed at its position. The degree of highlighting can be positively correlated with the object's degree of risk, so as to convey how dangerous and urgent the object is.
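For instance, a minimal sketch of such a risk-to-highlight mapping might look as follows (the value ranges and color ramp are invented for illustration):

```python
def risk_to_overlay(risk: float) -> dict:
    """Map a normalized risk degree in [0, 1] to overlay parameters so that
    the highlighting strength grows with the risk (illustrative values)."""
    risk = max(0.0, min(1.0, risk))
    return {
        "alpha": 0.3 + 0.6 * risk,                     # more opaque when riskier
        "radius_px": int(20 + 40 * risk),              # larger heat blob when riskier
        "color_rgb": (255, int(200 * (1 - risk)), 0),  # yellow -> red as risk rises
    }

print(risk_to_overlay(0.9))  # a close, fast-approaching vehicle gets a dark red blob
```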
Fig. 5 shows the system behaviors corresponding to several typical driving-assistance intents and the cameras corresponding to each. For example, when the system behavior is "a lateral instruction is about to be executed," the corresponding cameras are the front wide-angle camera and the two side rear-view cameras; when the system behavior is "a longitudinal instruction is about to be executed," the corresponding camera is the front wide-angle camera. The remaining cases are not described in detail.
As shown in the flow chart below fig. 5, after the multiple video streams shot by the multiple cameras are obtained, the perception result for the high-risk object can be acquired; on that basis, processing such as ID tracking, target recognition, and coordinate-system conversion is performed, the precise position of the high-risk object in the exterior image is finally determined, and the object is accurately marked at the corresponding position while the video stream is rendered.
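The coordinate-system conversion step can be pictured as a standard pinhole projection from the vehicle/world frame into pixel coordinates; the sketch below is generic (the patent does not disclose its coordinate conventions, and the matrices here are assumptions):

```python
import numpy as np

def project_to_image(p_world: np.ndarray, T_cam_world: np.ndarray, K: np.ndarray):
    """Project a 3-D point in the world frame into pixel coordinates (u, v)."""
    p_h = np.append(p_world, 1.0)        # homogeneous world point
    p_cam = (T_cam_world @ p_h)[:3]      # transform into the camera frame
    if p_cam[2] <= 0:
        return None                      # behind the camera: not visible
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]              # normalize by depth to get pixels

# Identity extrinsics and a simple intrinsic matrix, purely for demonstration.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
print(project_to_image(np.array([2.0, 0.5, 10.0]), np.eye(4), K))  # -> [800. 400.]
```

The returned (u, v) is where a marker such as 406c or 406d would then be drawn in the rendered video stream.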
In the above embodiment, suppose a vehicle C (an SUV) is in the right lane of the host vehicle A (a truck), and several oncoming vehicles are in the left lane, the nearest of which is vehicle D. Vehicles C and D are then high-risk objects with respect to vehicle A. At this time, the exterior video 405 of vehicle A may be displayed in the information display area 402 on the right of vehicle A's central control screen (the videos in the three small windows are shot by the front-view camera, the left camera, and the right camera, respectively). In the exterior videos shot by the front-view and right cameras, vehicle C is marked with a hazard heat identifier 406c; in the exterior video shot by the left camera, vehicle D is marked with a hazard heat identifier 406d. Moreover, since vehicle C is closer to vehicle A and its risk is relatively higher, identifier 406c is darker and covers a larger area than identifier 406d.
In addition, avoidance indication information for a high-risk object can be displayed on the exterior image to instruct occupants (such as the driver of the host vehicle) to avoid the object, improving risk-response efficiency. As shown in fig. 4, for vehicle C the screen may display text such as "the vehicle on the right is too close; please move left within the current lane," or play the corresponding speech; for vehicle D it may display text such as "the oncoming vehicle is too fast and is crossing the double yellow line; please flash the lights to warn it and move right within the current lane," or play the corresponding speech. Of course, since the driving-assistance function is enabled, the driving-assistance module may also directly output an avoidance instruction for the high-risk object and directly control the vehicle's avoidance, further shortening the response path and delay and improving risk-response efficiency. The avoidance indication information may also be an indicator on the current path, such as the offset path displayed in the environment display area 401 of fig. 4 or a steering arrow displayed in the information display area, which is not described further.
In an embodiment, acquiring the decision-related information may comprise acquiring the external environment data to be input to the driving-assistance module (data describing the external environment in which the vehicle is currently located) and evaluating the current safety state of the external environment based on that data. Alternatively, the evaluation result of the current safety state, output by the driving-assistance module after inference on the external environment data, can be acquired directly. Correspondingly, when the decision-related information is displayed, state description information of the current safety state may be output, so that occupants know from it whether the external environment is safe (or dangerous) and to what degree.
The external environment data may describe the vehicle's current external environment from multiple dimensions. For example, it may include natural-environment data (such as the period of day, current weather, and current illumination), lane data (such as lane width, lane type, and traffic direction), and/or path data (such as traffic-flow congestion and the distribution of obstacles on the road). On this basis, when the current safety state is evaluated from the external environment data, a weighted calculation may be performed over these dimensions, and a safety level (e.g., excellent, good, general, poor) and/or a safety score (a concrete value, e.g., 0 to 100, where a larger score indicates a safer external environment) characterizing the current safety state may be determined from the result, as shown in fig. 3.
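A minimal sketch of such a weighted aggregation (the dimension names, weights, and level thresholds below are illustrative assumptions, not disclosed values):

```python
def safety_state(env: dict[str, float], weights: dict[str, float] | None = None):
    """Aggregate per-dimension sub-scores (0..100) into a safety score and level."""
    weights = weights or {"natural": 0.3, "lane": 0.3, "path": 0.4}
    score = round(sum(env[dim] * w for dim, w in weights.items()))
    for threshold, level in ((85, "excellent"), (70, "good"), (50, "general")):
        if score >= threshold:
            return score, level
    return score, "poor"

print(safety_state({"natural": 90, "lane": 80, "path": 75}))  # -> (81, 'good')
```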
As shown in fig. 6(a), the weighting operation may be performed over external environment data such as natural-environment data, lane data, and path data, and the result mapped to one of three safety levels: excellent, good, and general. As shown in fig. 6(b), the safety level of the current safety state (i.e., "system state: good" in the figure) may be determined from the lane data and the path data, or from the natural-environment data, and so on, which is not described further. It can be seen that the safety level is positively correlated with the weather conditions, the lane conditions, the congestion level of the path, and so on.
In addition, when the state description information of the current safety state is a state description text and the driving-assistance module is a VLM, the state description text output by the module may be received directly; that is, the VLM's text-generation capability is used to summarize the current safety state concisely and output the corresponding text. Alternatively, the state description text may be generated from a preset description-text template based on the evaluation result of the current safety state; the preset template then serves as a fallback strategy ensuring that text generation never comes up empty.
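For example, a sketch of this VLM-first, template-fallback logic (the template wording is hypothetical):

```python
STATE_TEMPLATE = ("The current driving environment safety level is {level}: "
                  "{road}, and {traffic}.")

def describe_state(vlm_text: str | None, level: str, road: str, traffic: str) -> str:
    """Prefer the VLM-generated summary; fall back to a fixed template so the
    output text is never empty."""
    if vlm_text:
        return vlm_text
    return STATE_TEMPLATE.format(level=level, road=road, traffic=traffic)

print(describe_state(None, "good", "the road ahead is wide", "traffic is light"))
```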
Similarly to the safety level of the current safety state described above, when the decision-related information is acquired, the external environment data to be input to the driving-assistance module (describing the vehicle's current external environment) may be acquired and the vehicle's current driving scenario identified from it. Alternatively, the recognition result for the current driving scenario, output by the driving-assistance module after inference on the external environment data, can be acquired directly. Correspondingly, when the decision-related information is displayed, scene description information of the current driving scenario may be output. As shown in fig. 3, the scene description text may be generated from a preset scene corpus (i.e., a preset description-text template).
When the scene description information of the current driving scenario is a scene description text, the text output by the driving-assistance module may be received when the module is a VLM; alternatively, the scene description text may be generated from a preset description-text template based on the recognition result of the current driving scenario, which is not described further. For example, the scene description information may describe static (overhead) obstacles, other vehicles in the traffic flow, VRUs (Vulnerable Road Users: pedestrians, cyclists, motorcyclists, and the like, who lack the protection of a metal shell and are therefore especially vulnerable in traffic accidents), and so on.
As shown in fig. 4, the scene description information of the current driving scenario displayed in the information display area 402 is "backlit scene, please drive with caution," and the displayed state description text of the current safety state is "the current driving-environment safety level is good: the road ahead is wide, the vehicle on the right is slow, and there are few surrounding vehicles."
In an embodiment, acquiring the decision-related information may comprise acquiring multiple alternative paths output by the driving-assistance module together with a path score for each (in this case each alternative path planned by the module is scored comprehensively by the module itself), or acquiring the multiple alternative paths output by the module and then calculating a path score for each (in this case the scoring is done by the driving-assistance system). On this basis, when the decision-related information is output, the alternative paths and their path scores may be displayed in order of path score from high to low. Since the driving-assistance function is enabled, the current speed may be non-zero and the driving-sensitive factors around the vehicle may change, so the alternative paths tend to differ from moment to moment. The driving-assistance system may therefore render each alternative path in real time as the module plans it and display the path scores in order, so that occupants can accurately judge the relative merits of the alternative paths from the scores.
When the path score of each alternative path is calculated, it may be computed comprehensively from at least two of the following dimensions: driving safety (which may be characterized by the collision probability), regulatory compliance (whether a red light is run, whether a solid line is crossed, etc.), comfort (which may be characterized by the lateral, longitudinal, and/or vertical acceleration), driving efficiency (which may be characterized by the expected time to reach the destination), and degree of anthropomorphism (i.e., whether the driving action conforms to human driving habits). For example, for any alternative path, a safety score, a compliance score, a comfort score, an efficiency score, and an anthropomorphism score may be calculated, and the path score derived from them (e.g., as a weighted sum).
As shown in fig. 7, safety verification may first be performed on each alternative path to screen out the safe alternatives, i.e., those with no safety risk or with risk below a threshold; each safe alternative is then scored comprehensively over the five dimensions above to obtain its path score, and the safe alternatives are sorted by score. Finally, all paths are rendered and displayed in real time according to the sorted order, e.g., an optimal path scoring 98 points, an alternative safe path 1 scoring 90 points, and so on.
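A compact sketch of this filter-score-sort flow (weights, field names, and the risk threshold are assumptions; with the sub-scores below, the best path happens to total 98, matching the example above):

```python
WEIGHTS = {"safety": 0.4, "legal": 0.2, "comfort": 0.15,
           "efficiency": 0.15, "anthropomorphic": 0.1}  # illustrative weights

def rank_paths(candidates: list[dict], risk_threshold: float = 0.2) -> list[dict]:
    """Filter unsafe candidates, score the rest over five dimensions, sort by score."""
    safe = [p for p in candidates if p["risk"] <= risk_threshold]
    for p in safe:
        p["score"] = round(sum(p[dim] * w for dim, w in WEIGHTS.items()))
    return sorted(safe, key=lambda p: p["score"], reverse=True)

paths = [
    {"id": "optimal", "risk": 0.05, "safety": 99, "legal": 100,
     "comfort": 95, "efficiency": 97, "anthropomorphic": 96},
    {"id": "too_risky", "risk": 0.50, "safety": 60, "legal": 100,
     "comfort": 90, "efficiency": 99, "anthropomorphic": 80},  # filtered out
]
print([(p["id"], p["score"]) for p in rank_paths(paths)])  # [('optimal', 98)]
```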
As shown in fig. 4, multiple alternative paths 408 and their path scores are displayed in order in the lower part of the information display area 402. In addition, the per-dimension scores of the highest-scoring alternative may be displayed; for example, the safety, comfort, and efficiency scores of the optimal path (path score 98) are displayed as a radar chart, which is not described further.
In an embodiment, given that vehicle sensors may have errors or even false detections, the driving-assistance module may make decisions that do not fit the current driving environment; for example, a radar blind zone might cause the highest-scoring alternative path not to be the actual optimum, such that driving along it could even cause a traffic accident. Given also that an occupant's (especially the driver's) view or viewing angle may be better than the sensors', there may be high-risk objects that some occupant observes but the module does not perceive. Accordingly, with the driving-assistance function enabled, an occupant (e.g., the driver) may be permitted to intervene in the decision process, such as selecting, by their own judgment, a path from the alternatives. For example, in response to the driver of the vehicle selecting any one of the multiple alternative paths, the vehicle may be controlled to travel along that path.
In an embodiment, acquiring the decision-related information may comprise acquiring a vehicle-control instruction output by the driving-assistance module, such as a lateral instruction (a lane-change, avoidance, detour, or left/right-turn instruction, etc.), a longitudinal deceleration instruction (a car-following, flashing-yellow deceleration, red-light braking, or cut-in deceleration instruction, etc.), or a longitudinal acceleration instruction (a follow-to-start or green-light start instruction, etc.), as shown in fig. 3. Correspondingly, when the decision-related information is output, instruction description information of the vehicle-control instruction can be output: for example, when the module issues a red-light braking instruction, speech such as "braking for red light" is broadcast; when it issues a green-light start instruction, the edge of the central control screen flashes green three times and/or speech such as "starting on green" is broadcast, which is not described further. In this way, occupants are fully informed of the vehicle-control instructions issued by the driving-assistance module and thus learn its decision results.
In addition, before a vehicle-control instruction output by the driving-assistance module is executed, or before its instruction description information is output, the instruction can be checked (e.g., by a keyword check) to ensure that it conforms to the current driving environment, so that the executed instruction is valid and the driving-assistance system outputs the corresponding description information correctly.
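A toy sketch of such a check-then-announce step (the patent names a keyword check but gives no rules; the allow-list and phrases here are invented):

```python
ALLOWED_KEYWORDS = {"brake", "accelerate", "lane_change", "avoid", "follow", "start"}

ANNOUNCEMENTS = {
    "red_light_brake": "Braking for red light.",
    "green_light_start": "Starting on green light.",
}

def announce_if_valid(instruction: str) -> str | None:
    """Check the control instruction before outputting its description."""
    if not any(key in instruction for key in ALLOWED_KEYWORDS):
        return None  # unknown instruction: neither execute nor announce it
    return ANNOUNCEMENTS.get(instruction, f"Executing: {instruction}")

print(announce_if_valid("red_light_brake"))   # -> Braking for red light.
print(announce_if_valid("self_destruct"))     # -> None (rejected by the check)
```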
As can be seen from the above embodiments, in the case that the vehicle has started the auxiliary driving function, the present solution obtains the decision-related information affecting the decision process of the auxiliary driving module of the vehicle, and outputs the information in the cabin so that the occupant in the cabin perceives the decision process of the auxiliary driving module.
It will be appreciated that the decision-related information influences the decision process of the driving-assistance module, i.e., it affects how the module performs at least one key step (e.g., perception, prediction, or regulation) of the decision link. Outputting this information to occupants therefore not only improves the interpretability of the module's decision process, but also lets occupants learn the presented decision process accurately and comprehensively. This improves the decision transparency of the driving-assistance module, reduces the occupants' difficulty of understanding, and in turn helps increase their trust in, and willingness to use, the driving-assistance function.
Fig. 8 is a schematic structural diagram of a vehicle according to an embodiment of the present invention. Referring to fig. 8, at the hardware level the vehicle includes a processor 801, a network interface 802, a memory 803, a non-volatile storage 804, and an internal bus 805, and may of course include hardware required by other services. One or more embodiments of the invention may be implemented in software, for example by the processor 801 reading the corresponding computer program from the non-volatile storage 804 into the memory 803 and running it. Of course, besides a software implementation, one or more embodiments of the present invention do not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the following process flows is not limited to logic units and may also be hardware or a logic device.
Fig. 9 shows a block diagram of a presentation apparatus for a driving-assistance decision process according to an embodiment of the invention. Referring to fig. 9, the apparatus can be applied to the vehicle shown in fig. 8 to implement the technical solution of the present invention. The apparatus comprises:
An information obtaining unit 901, configured to obtain, when a vehicle has started an auxiliary driving function, decision-related information corresponding to an auxiliary driving module of the vehicle, where the decision-related information is used to influence a decision process of the auxiliary driving module;
An information output unit 902 for outputting the decision-related information in a cabin of the vehicle so that an occupant in the cabin perceives the decision process.
Optionally, the information output unit 902 is specifically configured to perform at least one of the following:
invoking display equipment assembled in the cabin to display the decision-related information;
Calling audio equipment assembled in the cabin to play the voice corresponding to the decision-related information;
and calling the mobile terminal to output the decision-related information under the condition that the vehicle and the mobile terminal of the passenger are connected by a network.
Optionally,
The information obtaining unit 901 is specifically configured to determine a high-risk object identified by the driving assistance module, and at least obtain an external image of the vehicle, which is acquired by an external camera of the vehicle and contains the high-risk object;
the information output unit 902 is specifically configured to display the external image of the vehicle, and label the high-risk object at a corresponding position on the external image of the vehicle.
Optionally, the information output unit 902 is specifically configured to:
and adjusting the display parameters of the high-risk objects to highlight the high-risk objects, or displaying dangerous marks at the positions of the high-risk objects to highlight the high-risk objects, wherein the highlighting degree of the high-risk objects is positively correlated with the dangerous degree of the high-risk objects.
Optionally, the apparatus further comprises an avoidance indicating unit 903, configured to:
displaying avoidance indication information for the high-risk object on the exterior image.
Optionally,
The information acquisition unit 901 is specifically configured to:
external environment data to be input to the driving assistance module, which is used to describe an external environment in which the vehicle is currently located, and to evaluate a current safety state of the external environment and/or to identify a current driving scenario of the vehicle based on the external environment data, and/or,
acquiring the evaluation result for the current safety state and/or the recognition result for the current driving scenario output by the driving-assistance module;
The information output unit 902 is specifically configured to output state description information of the current safety state and/or scene description information of the current driving scene.
Optionally, the external environment data is used to describe, from a plurality of dimensions, an external environment in which the vehicle is currently located, and the information obtaining unit 901 is specifically configured to:
and carrying out weighted calculation from the plurality of dimensions based on the external environment data, and determining a security level and/or a security score for representing the current security state according to a calculation result.
Optionally, the state description information of the current safety state is a state description text, and the information acquisition unit 901 is specifically configured to:
acquire the state description text, including receiving the state description text of the current safety state output by the driving-assistance module when the module is a vision-language model (VLM), or generating the state description text of the current safety state from a preset description-text template based on the evaluation result of the current safety state; and/or,
when the scene description information of the current driving scenario is a scene description text, acquire the scene description text, including receiving the scene description text of the current driving scenario output by the driving-assistance module when the module is a vision-language model (VLM), or generating the scene description text of the current driving scenario from a preset description-text template based on the recognition result of the current driving scenario.
Optionally,
The information obtaining unit 901 is specifically configured to obtain a plurality of alternative paths output by the driving assistance module and a path score of each alternative path, or obtain a plurality of alternative paths output by the driving assistance module, and calculate a path score of each alternative path;
The information output unit 902 is specifically configured to sequentially display the multiple alternative paths and the path score of each alternative path according to the order of the path scores from high to low.
Optionally, the information acquisition unit 901 is specifically configured to:
the path score of each alternative path is comprehensively calculated from at least two dimensions of driving safety, compliance with regulations, comfort, driving efficiency and anthropomorphic degree.
Optionally, the apparatus further comprises a vehicle control unit 904, configured to:
And controlling the vehicle to run according to any one of the alternative paths in response to the driver of the vehicle selecting the any one of the alternative paths.
Optionally,
The information obtaining unit 901 is specifically configured to obtain a vehicle control instruction output by the driving assistance module;
The information output unit 902 is specifically configured to output instruction description information of the vehicle control instruction.
Accordingly, the present invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements a method of presenting a driving assistance decision making process as described in any one of the embodiments above.
Accordingly, the present specification also provides a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the presentation method of a driving assistance decision making process as described in any one of the embodiments above.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM), in a computer-readable medium. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.

Claims (13)

A method for presenting a driving-assistance decision process, comprising:
under the condition that a vehicle has enabled a driving-assistance function, acquiring decision-related information corresponding to a driving-assistance module of the vehicle, wherein the decision-related information is used to influence a decision process of the driving-assistance module; wherein acquiring the decision-related information corresponding to the driving-assistance module of the vehicle comprises acquiring external environment data to be input to the driving-assistance module, the external environment data describing the external environment in which the vehicle is currently located, and evaluating a current safety state of the external environment and/or identifying a current driving scenario of the vehicle based on the external environment data;
wherein, when the state description information of the current safety state is a state description text, the state description text is acquired by receiving the state description text of the current safety state output by the driving-assistance module when the driving-assistance module is a vision-language model (VLM), or by generating the state description text of the current safety state from a preset description-text template based on the evaluation result of the current safety state; and/or, when the scene description information of the current driving scenario is a scene description text, the scene description text is acquired by receiving the scene description text of the current driving scenario output by the driving-assistance module when the driving-assistance module is a vision-language model (VLM), or by generating the scene description text of the current driving scenario from a preset description-text template based on the recognition result of the current driving scenario.
CN202510855324.1A | Priority date: 2025-06-25 | Filing date: 2025-06-25 | Method and device for presenting assisted driving decision-making process | Active | CN120363931B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202510855324.1A (CN120363931B, en) | 2025-06-25 | 2025-06-25 | Method and device for presenting assisted driving decision-making process

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202510855324.1A (CN120363931B, en) | 2025-06-25 | 2025-06-25 | Method and device for presenting assisted driving decision-making process

Publications (2)

Publication Number | Publication Date
CN120363931A (en) | 2025-07-25
CN120363931B (en) | 2025-09-02

Family

ID=96439240

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202510855324.1A (CN120363931B, Active) | Method and device for presenting assisted driving decision-making process | 2025-06-25 | 2025-06-25

Country Status (1)

Country | Link
CN (1) | CN120363931B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115743150A (en) * | 2022-11-11 | 2023-03-07 | 香港理工大学深圳研究院 | Interpretable automatic driving decision system and method
CN118457622A (en) * | 2024-04-30 | 2024-08-09 | 山东工业职业学院 | Auxiliary driving method and system based on artificial intelligence
CN119078867A (en) * | 2024-08-30 | 2024-12-06 | 上海朗尚传感技术有限公司 | A safety assessment-based assisted driving method, device, equipment and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN115525715A (en) * | 2022-10-25 | 2022-12-27 | 中国第一汽车股份有限公司 | Visual analysis method, system and device for automatic driving data stream
DE102023119010A1 (en) * | 2023-07-19 | 2025-01-23 | Dr. Ing. H.C. F. Porsche Aktiengesellschaft | Method and system for generating scenarios for testing and validating automated driving functions
CN119919842A (en) * | 2023-10-30 | 2025-05-02 | 浙江极氪智能科技有限公司 | Vehicle assisted driving method, device and electronic equipment
CN118467801A (en) * | 2024-05-07 | 2024-08-09 | 中国科学院自动化研究所 | Visual interpretation method and system for human-computer collaborative decision making
CN120141523A (en) * | 2025-03-05 | 2025-06-13 | 蘑菇车联信息科技有限公司 | Method, device, electronic device and computer program product for displaying front guide line of vehicle


Also Published As

Publication number | Publication date
CN120363931A (en) | 2025-07-25

Similar Documents

Publication | Title
JP7399075B2 (en) | Information processing device, information processing method and program
JP7263233B2 (en) | Method, system and program for detecting vehicle collision
JP7003660B2 (en) | Information processing equipment, information processing methods and programs
US11845464B2 (en) | Driver behavior risk assessment and pedestrian awareness
CN113276769A (en) | Vehicle blind area anti-collision early warning system and method
CN111137284A (en) | Early warning method and early warning device based on driving distraction state
US12097892B2 (en) | System and method for providing an RNN-based human trust model
CN112534487A (en) | Information processing apparatus, moving object, information processing method, and program
WO2021241189A1 (en) | Information processing device, information processing method, and program
CN109720348A (en) | Car-mounted device, information processing system and information processing method
CN115877343A (en) | Man-vehicle matching method and device based on radar target tracking and electronic equipment
TW202443509A (en) | Alert modality selection for alerting a driver
CN112606831A (en) | Anti-collision warning information external interaction method and system for passenger car
CN119037415B (en) | Multimodal large model driving risk judgment method, system, medium and program product
JP2022041244A (en) | In-vehicle display devices, methods and programs
CN114435374A (en) | Measuring the driver's safe driving factor
JP2020083314A (en) | Traveling safety control system using ambient noise and control method thereof
US20230045706A1 (en) | System for displaying attention to nearby vehicles and method for providing an alarm using the same
CN111081045A (en) | Attitude trajectory prediction method and electronic equipment
US11403948B2 (en) | Warning device of vehicle and warning method thereof
JP7597541B2 (en) | Driving assistance control device
WO2023132055A1 (en) | Evaluation device, evaluation method, and program
US12311876B2 (en) | Projected security zone
JP7198742B2 (en) | Automated driving vehicle, image display method and program
JP2020009371A (en) | Driving evaluation device and driving evaluation method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
