CN113597617A - Display method, display device, display equipment and vehicle - Google Patents

Display method, display device, display equipment and vehicle

Info

Publication number
CN113597617A
CN113597617A (application CN202180001862.4A)
Authority
CN
China
Prior art keywords
display
target object
vehicle
user
infrared image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180001862.4A
Other languages
Chinese (zh)
Inventor
朱伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yinwang Intelligent Technology Co ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN113597617A
Legal status: Pending


Abstract

Translated from Chinese



The present application relates to the technical field of assisted driving, and provides a display method, apparatus, device, and vehicle. Infrared fill light is applied to a target area outside the vehicle, and infrared imaging is used to obtain information of an infrared image of the target area. From the obtained infrared image, the position of a target object in the image is determined, and annotation information for the target object is displayed in a display area according to that position. The present application can detect target objects outside the vehicle by infrared imaging while driving and display their annotation information, so that even when lighting outside the vehicle is poor, the user can still determine the target objects and their positions from the displayed annotations, improving driving safety.


Description

Display method, display device, display equipment and vehicle
Technical Field
The present disclosure relates to the field of driving assistance technologies, and in particular, to a display method and apparatus, a device, and a vehicle.
Background
When driving at night in an environment with poor light, pedestrians, vehicles, or other obstacles on the road may not be clearly visible, which increases the likelihood of traffic accidents. In addition, when meeting an oncoming vehicle at night, the glare of its headlights can dazzle the driver, so that the driver momentarily cannot see target objects such as the oncoming vehicle or pedestrians on the road, again increasing the likelihood of accidents.
With the rapid development of the head-up display (HUD) and the augmented reality head-up display (AR-HUD) in recent years, some vehicle manufacturers provide schemes that label and highlight pedestrians or obstacles in front of the vehicle's front windshield. However, such schemes are still limited by the light conditions of night driving: they do not solve the problem that pedestrians or obstacles cannot be clearly seen when light is poor or when vehicles meet, so the labeling and reminding effect is easily degraded.
Disclosure of Invention
In view of this, the present application provides a display method, a display device, a display apparatus, and a vehicle, which can detect a target object outside the vehicle in an infrared imaging manner during driving, and display labeling information of the target object, so as to improve driving safety.
It should be understood that, in the solutions provided in the present application, the display method may be performed by a display device or by a component of the display device, where the display device may be an AR-HUD, a HUD, or another device with a display function, and the component may be a processing chip, a processing circuit, a processor, or the like.
A first aspect of the present application provides a display method, including: performing infrared fill light on a target area outside a vehicle; acquiring information of an infrared image of the target area, where the target area contains a target object; determining the position of the target object in the infrared image; and displaying annotation information of the target object in a display area according to the position of the target object in the infrared image.
According to the method, infrared fill light is applied to the target area outside the vehicle, and the infrared image of that area is acquired by infrared imaging to identify target objects appearing in front of the vehicle; the display position of the annotation information is then determined from the position of the target object in the acquired infrared image, so that the generated annotation can be shown at that position. In this way, target objects in front of the vehicle can be detected in time while driving, particularly under poor illumination at night, and the generated annotation is displayed at the corresponding position, so that the user can determine the target objects outside the vehicle and their positions from the displayed annotations, improving driving safety.
In one possible implementation manner of the first aspect, determining the position of the target object in the infrared image includes: the information of the infrared image is provided to an image recognition model, and the position of the target object in the infrared image is determined by the image recognition model.
Thus, from the acquired infrared image of the target area, a trained image recognition model can be used to recognize the target object in the image and determine its position. The image recognition model can be obtained by training a neural network or through deep learning, and different models can be used for different types of target objects, improving the recognition success rate.
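The application does not prescribe a particular model or interface. As a minimal illustrative sketch (all names hypothetical), recognition on one infrared frame could be wrapped as a function that filters a detector's raw output by confidence:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    label: str                      # e.g. "pedestrian", "vehicle", "animal"
    confidence: float               # detector score in [0, 1]
    box: Tuple[int, int, int, int]  # (x, y, w, h) in infrared-image pixels

def detect_targets(ir_image,
                   model: Callable[..., List[Detection]],
                   score_threshold: float = 0.5) -> List[Detection]:
    """Run a trained recognition model on one infrared frame and keep only
    confident detections; `model` stands in for any trained detector."""
    return [d for d in model(ir_image) if d.confidence >= score_threshold]

# Stand-in model: reports one confident pedestrian and one weak hit.
fake_model = lambda img: [Detection("pedestrian", 0.92, (310, 140, 40, 90)),
                          Detection("vehicle", 0.30, (10, 10, 50, 30))]
hits = detect_targets(None, fake_model)   # keeps only the pedestrian
```

Swapping `model` per target-object type mirrors the idea above of using different recognition models for different recognition requirements.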
In a possible implementation manner of the first aspect, the display position of the annotation information is related to a spatial position of the target object and a position of human eyes of a user.
Thus, to improve the user's viewing experience, the display position of the annotation information can be determined from the position of the user's eyes and the spatial position of the target object. When the annotation is displayed at the determined position, the annotation the user sees is fused with the position of the target object, so that even when lighting outside the vehicle is poor, the user can still determine the target object and its position from the displayed annotation.
In a possible implementation manner of the first aspect, the display size of the annotation information is related to a display position of the annotation information, a position of human eyes of a user, and a size of the target object.
Thus, the annotation information can take various forms, and its display size can be determined from the position of the user's eyes, the display position of the annotation, and the spatial position of the target object. For example, the annotation may take the form of a prompt box whose display size is calculated from the user's eye position, the spatial position of the target object, and the display position of the box, so that the box the user sees matches, say, a pedestrian. As the vehicle drives on, the target object gets closer and the prompt box grows accordingly, so that the user can intuitively perceive the target object's position.
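The scaling described above follows directly from similar triangles: for the box to cover the target as seen by the user, its on-display height must keep the same visual angle as the target itself. A sketch under that assumption (names and distances hypothetical):

```python
def display_height(object_height_m: float,
                   eye_to_object_m: float,
                   eye_to_display_m: float) -> float:
    """Similar triangles: the on-display height that makes the prompt box
    subtend the same visual angle as the real target object."""
    return object_height_m * eye_to_display_m / eye_to_object_m

# A 1.7 m pedestrian seen from 20 m, display surface 1 m from the eye:
h_far = display_height(1.7, 20.0, 1.0)   # 0.085 m on the windshield
# The same pedestrian at 10 m: the box doubles as the target approaches.
h_near = display_height(1.7, 10.0, 1.0)  # 0.17 m
```

The box growing as the distance shrinks is exactly the behavior the paragraph above describes.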
In a possible implementation of the first aspect, the display area is a display area of an augmented reality head-up display, and the display position of the annotation information being related to the spatial position of the target object and the position of the user's eyes includes: the display position of the annotation information is determined by a first line of sight of the user, where the first line of sight runs from the position of the user's eyes to the spatial position of the target object.
In a possible implementation of the first aspect, the display area of the augmented reality head-up display is located on the front windshield of the vehicle, and the display position of the annotation information being determined by the first line of sight of the user includes: the display position of the annotation information is determined by the intersection of the user's first line of sight with the front windshield of the vehicle.
Thus, the display area in the method may be that of an augmented reality head-up display, which projects the annotation information according to its display position. Specifically, the AR-HUD may use the vehicle's front windshield as the display area, determine the user's first line of sight from the eye position and the spatial position of the target object, and take the intersection of that line of sight with the windshield as the display position, so that the annotation the user sees lies on the same line of sight as the target object, improving the display effect.
In a possible implementation of the first aspect, determining the intersection of the first line of sight with the front windshield of the vehicle includes: determining, based on the spatial position of the target object, a first included angle between the line connecting the target object and the image acquisition device and a horizontal direction, where the horizontal direction is parallel to the front windshield; determining, based on the distance between the user's eye position and the image acquisition device and on the first included angle, a second included angle between the line connecting the user's eyes and the target object and the horizontal direction; and determining the display position of the annotation information on the front windshield based on the second included angle and the distance between the user's eyes and the windshield.
Thus, the first included angle, between the target-to-camera line and the horizontal direction, can be computed from the position of the target object in the infrared image and the horizontal field of view of the image acquisition device; the second included angle, between the eye-to-target line and the horizontal direction, can be computed from the lateral distance between the user's eyes and the camera together with the first angle; and the display position on the windshield can then be computed from the second angle and the distance between the user's eyes and the windshield. The computation relies only on simple trigonometric functions, so the amount of calculation is small and an accurate display position can be obtained quickly.
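The two-angle construction above can be written out in a few lines of trigonometry. This is a sketch under simplifying assumptions (pinhole camera, eye and camera at the same height, windshield treated as a vertical plane); all function names and the example numbers are hypothetical:

```python
import math

def first_angle(px: float, image_width: int, hfov_deg: float) -> float:
    """Angle (rad) of the camera-to-target ray off the camera axis, from the
    target's horizontal pixel position and the camera's horizontal FOV."""
    half = image_width / 2
    return math.atan((px - half) / half * math.tan(math.radians(hfov_deg) / 2))

def second_angle(depth_m: float, alpha: float, eye_cam_lateral_m: float) -> float:
    """Angle (rad) of the eye-to-target line, shifting the camera ray by the
    lateral offset between the user's eye and the camera."""
    target_lateral = depth_m * math.tan(alpha)   # target's sideways offset
    return math.atan((target_lateral - eye_cam_lateral_m) / depth_m)

def windshield_offset(beta: float, eye_windshield_m: float) -> float:
    """Horizontal display position on the windshield along the eye-to-target line."""
    return eye_windshield_m * math.tan(beta)

# Target at pixel 480 in a 640-px frame, 90-degree HFOV, 20 m ahead;
# the eye sits 0.3 m to one side of the camera, 1.0 m from the windshield.
a = first_angle(480, 640, 90.0)
b = second_angle(20.0, a, 0.3)
x = windshield_offset(b, 1.0)        # horizontal offset on the windshield, metres
```

Only `atan` and `tan` are needed, which matches the claim that the calculation amount is small.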
In a possible implementation manner of the first aspect, the spatial position of the target object is related to the position of the target object in the infrared image, and internal and external parameters of an image acquisition device acquiring the infrared image.
In this way, the spatial position of the target object may specifically be its position in the vehicle coordinate system. The distance between the target object and the image acquisition device can be calculated from the target's position in the infrared image together with parameters such as the camera's mounting height and its angle to the visible ground, and the target's position in the vehicle coordinate system can then be determined from that distance and the camera's own position in the vehicle coordinate system.
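One common way to realize this, assuming a flat road, is to intersect the ray through the target's foot pixel with the ground plane, then translate into the vehicle frame. A minimal sketch (all names and numbers hypothetical, not taken from the patent):

```python
import math

def ground_distance(camera_height_m: float, ray_dip_rad: float) -> float:
    """Flat-ground model: forward distance from the camera to where the ray
    through the target's foot pixel meets the road; ray_dip is that ray's
    downward angle below horizontal."""
    return camera_height_m / math.tan(ray_dip_rad)

def to_vehicle_frame(cam_xy, forward_m, lateral_m):
    """Translate a camera-frame ground point into the vehicle coordinate
    system, given the camera's mounting position in that system."""
    cx, cy = cam_xy
    return (cx + forward_m, cy + lateral_m)

cam_height = 1.4                                    # mounting height, metres
d = ground_distance(cam_height, math.radians(4.0))  # ray dips 4 degrees
pos = to_vehicle_frame((1.0, 0.0), d, 0.5)          # target in vehicle coords
```

The ray's dip angle itself comes from the target's vertical pixel position and the camera's intrinsic and extrinsic parameters, as the paragraph above states.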
In one possible implementation of the first aspect, the target object includes one or more of other vehicles, pedestrians, animals.
Thus, the target object in the method may be a moving object such as another vehicle, a pedestrian, or an animal, or a static object such as a road sign or a tree; the types of target object can be selected according to the user's needs, supporting customization.
In a possible implementation of the first aspect, before determining the position of the target object in the infrared image, the method further includes: performing one or more of cropping, denoising, enhancing, smoothing, and sharpening on the infrared image.
Because the range of the infrared fill light is limited and may not cover the full capture range of the image acquisition device, the acquired infrared image can be cropped, denoised, enhanced, smoothed, or sharpened before recognition, making it easier to recognize the target object in the image effectively and quickly.
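The patent lists these preprocessing steps without prescribing algorithms. A pure-Python sketch of the first two, cropping to the fill-lit region and a 3x3 mean smooth as a basic noise-reduction step (image represented as a list of pixel rows; all names hypothetical):

```python
def crop(image, top, left, height, width):
    """Keep only the region actually covered by the infrared fill light."""
    return [row[left:left + width] for row in image[top:top + height]]

def smooth(image):
    """3x3 mean filter over interior pixels, a minimal noise-reduction step."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) // 9
    return out

frame = [[0] * 6 for _ in range(6)]   # 6x6 infrared frame, one hot pixel
frame[2][2] = 90
roi = crop(frame, 1, 1, 4, 4)         # cut down to the fill-lit region
den = smooth(roi)                     # the spike is spread and attenuated
```

In practice an image library would be used for these operations; the sketch only shows the order of the pipeline: crop first so the later steps run on fewer pixels.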
A second aspect of the present application provides a display device, including: a fill-light module, configured to perform infrared fill light on a target area outside a vehicle; an acquisition module, configured to acquire information of an infrared image of the target area, where the target area contains a target object; a processing module, configured to determine the position of the target object in the infrared image; and a sending module, configured to display annotation information of the target object in a display area according to the position of the target object in the infrared image.
In a possible implementation manner of the second aspect, the display position of the annotation information is related to a spatial position of the target object, a position of human eyes of a user, and a size of the target object.
In a possible implementation manner of the second aspect, the display size of the annotation information is related to the display position of the annotation information, the position of human eyes of the user, and the size of the target object.
In a possible implementation of the second aspect, the display area is a display area of an augmented reality head-up display, and the display position of the annotation information being related to the spatial position of the target object and the position of the user's eyes includes: the display position of the annotation information is determined by a first line of sight of the user, where the first line of sight runs from the position of the user's eyes to the spatial position of the target object.
In a possible implementation of the second aspect, the display area of the augmented reality head-up display is located on the front windshield of the vehicle, and the display position of the annotation information being determined by the first line of sight of the user includes: the display position of the annotation information is determined by the intersection of the user's first line of sight with the front windshield of the vehicle.
In a possible implementation manner of the second aspect, the spatial position of the target object is related to the position of the target object in the infrared image, and internal and external parameters of an image acquisition device acquiring the infrared image.
In one possible implementation of the second aspect, the target object includes one or more of other vehicles, pedestrians, animals.
In a possible implementation of the second aspect, before determining the position of the target object in the infrared image, the processing module is further configured to: perform one or more of cropping, denoising, enhancing, smoothing, and sharpening on the infrared image.
A third aspect of the present application provides a computing device, including: a processor, and a memory storing program instructions that, when executed by the processor, cause the processor to perform the display method of any of the solutions provided in the first aspect and its optional implementations above.
In one possible implementation, the computing device is one of an AR-HUD, HUD.
In one possible implementation, the computing device is a vehicle.
In one possible implementation, the computing device is one of a car machine and a vehicle-mounted computer.
A fourth aspect of the present application provides an electronic apparatus, comprising: a processor and an interface circuit, wherein the processor accesses a memory through the interface circuit, the memory storing program instructions that, when executed by the processor, cause the processor to perform the display method in the various technical solutions as provided by the first aspect and the various alternative implementations described above.
In one possible implementation, the electronic device is one of an AR-HUD, HUD.
In one possible implementation, the electronic device is a vehicle.
In one possible implementation, the electronic device is one of a car machine and a vehicle-mounted computer.
A fifth aspect of the present application provides a display system, including: a vehicle-mounted device, and, coupled to it, the computing device of any of the solutions provided in the third aspect and its optional implementations, or the electronic device of any of the solutions provided in the fourth aspect and its optional implementations.
In one possible implementation, the display system is a vehicle.
A sixth aspect of the present application provides a computer-readable storage medium, on which program instructions are stored, which, when executed by a computer, cause the computer to perform the display method in the various aspects as provided in the first aspect and the various alternative implementations described above.
A seventh aspect of the present application provides a computer program product, which includes program instructions, and when the program instructions are executed by a computer, the computer executes the display method in the various technical solutions provided in the first aspect and the above-mentioned various alternative implementations.
In summary, with the display method, apparatus, device, and vehicle provided by the present application, the target area outside the vehicle is detected and captured in real time by infrared imaging; the target object is identified in the infrared image and its position in the image is determined to obtain its spatial position; the display position of the annotation information is determined from the spatial position of the target object and the position of the user's eyes; and the generated annotation is displayed at that position. The annotation the user sees is therefore fused with the position of the target object, reminding the user to pay attention to target objects outside the vehicle. Meanwhile, the size of the annotation can be determined from the user's eye position, the annotation's display position, and the size of the target object, so that the annotation gradually grows as the target approaches, yielding a better display effect. Through the present application, target objects around the vehicle can be detected in time while driving, especially under poor night illumination, and their annotation information displayed, improving driving safety.
Drawings
Fig. 1 is an architecture diagram of an application scenario of the display method provided in an embodiment of the present application;
Fig. 2 is an architecture diagram of a vehicle according to an embodiment of the present application;
Fig. 3A is a schematic side view of a vehicle cabin according to an embodiment of the present application;
Fig. 3B is a schematic front view of a vehicle cabin according to an embodiment of the present application;
Fig. 4 is a flowchart of a display method according to an embodiment of the present application;
Fig. 5 is a flowchart of a method for determining the display position of annotation information according to an embodiment of the present application;
Fig. 6 is a front-view position distribution diagram of a vehicle and a pedestrian according to an embodiment of the present application;
Fig. 7 is a side-view position distribution diagram of a vehicle and a pedestrian according to an embodiment of the present application;
Fig. 8 is a schematic top view of a vehicle and a pedestrian according to an embodiment of the present application;
Fig. 9 is an architecture diagram of a display device according to an embodiment of the present application;
Fig. 10 is an architecture diagram of a computing device according to an embodiment of the present application;
Fig. 11 is an architecture diagram of an electronic device according to an embodiment of the present application;
Fig. 12 is an architecture diagram of a display system according to an embodiment of the present application.
It should be understood that the sizes and shapes of the blocks in the above block diagrams are for reference only and should not be read as excluding other embodiments of the present invention. The relative positions and containment relations among the blocks only schematically represent structural associations and do not limit the physical connections of the embodiments.
Detailed Description
The technical solution provided by the present application is further described below by referring to the drawings and the embodiments. It should be understood that the system structure and the service scenario provided in the embodiments of the present application are mainly for illustrating possible implementation manners of the technical solutions of the present application, and should not be construed as the only limitations on the technical solutions of the present application. As can be known to those skilled in the art, with the evolution of the system structure and the appearance of new service scenarios, the technical solution provided in the present application is also applicable to similar technical problems.
It should be understood that the display scheme provided by the embodiment of the application comprises a display method and device, equipment and a vehicle. Since the principles of solving the problems of these solutions are the same or similar, some of the repeated parts may not be repeated in the following descriptions of the specific embodiments, but it should be understood that these specific embodiments are referred to and can be combined with each other.
When driving at night in poor light, target objects around the vehicle, such as pedestrians, other vehicles, small animals, or other obstacles, may not be clearly visible, which easily increases the possibility of traffic accidents. Although the road ahead can be lit by turning on the low beams or high beams, the range of the low beams is limited; the high beams extend it, but they tend to disturb pedestrians and other vehicles on the road, and the user's own vehicle is likewise disturbed by oncoming high beams, so visual blind spots easily arise when vehicles pass each other and accidents become more likely.
Therefore, the embodiments of the present application provide a display method, apparatus, device, and vehicle that can detect target objects around the vehicle by infrared imaging while driving and display annotation information at the position corresponding to each target object, so that the annotation the user sees is fused with the target's position, reminding the user and improving the safety of night driving. The user is typically the driver, but may also be a passenger, such as the front-seat passenger or a rear-seat passenger. The present application is described in detail below.
First, the application scenario of this embodiment is briefly described. Fig. 1 is an architecture diagram of an application scenario of the display method provided in an embodiment of the present application. As shown in Fig. 1, the scenario includes: a fill-light device 110, an acquisition device 120, a processing device 130, and a sending device 140. As shown in Fig. 2, the scenario involves a vehicle 100, which may be a family car or a truck, or a special vehicle such as an ambulance, a fire engine, a police car, or an engineering rescue vehicle. The fill-light device 110, acquisition device 120, processing device 130, and sending device 140 may be mounted on the vehicle, either outside or inside it. The specific architecture of the vehicle 100 in this scenario is described in detail below with reference to Figs. 3A-3B.
As shown in Fig. 3A, the fill-light device 110 may be an infrared fill lamp, an infrared emitter, or another device or combination of devices with an infrared emitting function. It may be arranged at the front of the vehicle 100, for example at the headlights, which simplifies wiring, or on the roof, or on the side of the cabin rear-view mirror facing the outside of the vehicle. It is mainly used to apply infrared fill light to target areas around the vehicle when driving at night with poor illumination, and its fill range can cover the maximum field of view of the acquisition device 120. The target area may be in front of, beside, or behind the vehicle; by filling it with infrared light, the acquisition device 120 can capture a relatively clear infrared image. The target area may contain a target object to be detected, such as another vehicle, a pedestrian, an animal, or another obstacle. When an infrared fill lamp is used in this embodiment, a high-power lamp (for example, 30 watts) may be chosen; since infrared light is invisible, even a high-power lamp will not disturb pedestrians or other vehicles on the road. The infrared fill lamp is only an example; other devices capable of emitting infrared light may also be chosen, and this embodiment does not limit the type, position, or number of fill-light devices 110.
The acquisition device 120 may include an exterior acquisition device and an in-cabin acquisition device. As shown in Fig. 3A, the exterior acquisition device may be an infrared camera, a vehicle-mounted radar, or another device or combination of devices with infrared image capture or infrared scanning capability; it may be placed on the roof, at the front of the vehicle, or on the side of the cabin rear-view mirror facing outward, and may be mounted inside or outside the vehicle. It is mainly used to detect and capture infrared image information of the infrared-filled target area outside the vehicle. The target area may contain a target object to be detected, such as another vehicle, a pedestrian, an animal, or another obstacle. The infrared image information may be a single infrared image or one or more frames of a captured video stream. As shown in Fig. 3B, the in-cabin acquisition device may be a vehicle-mounted camera, an eye tracker, or the like, and in practice may be placed as needed, for example on the A-pillar or B-pillar of the cabin, on the side of the cabin rear-view mirror facing the user, in the area near the steering wheel and center console, or above a seat-back display screen. It is mainly used to detect and capture the position of the user's eyes in the cabin. There may be one or more in-cabin acquisition devices; the present application does not limit their position or number.
The processing device 130 may be an electronic device, specifically the processor of a vehicle-mounted processing device such as a car machine or a vehicle-mounted computer, a conventional chip processor such as a central processing unit (CPU) or microcontroller unit (MCU), or terminal hardware such as a mobile phone or tablet. The processing device 130 may have an image recognition model preset in it, or may obtain one preset in another device in the vehicle; it can recognize the target object in the received infrared image information, determine the position of the target object in the image, and generate the corresponding annotation information, which may be a prompt box, a highlight mark, an AR image, text, a leader line, or the like. It can also determine the spatial position of the target object from its position in the infrared image, determine the user's eye position from the information captured by the in-cabin acquisition device, determine the display position of the annotation from the target's spatial position and the user's eye position, and output the annotation and its display position to the sending device 140.
The sending device 140 may be a HUD, an AR-HUD, or another device with a display function, may be installed above or inside the center console of the vehicle cabin, and is mainly used to display the labeling information in the display area. The display area of the sending device 140 may be the front windshield of the vehicle, or a separate transparent screen. Light rays carrying the labeling information emitted by the sending device 140 are reflected into the eyes of the user, so that when the user looks outside through the front windshield or the transparent screen, the labeling information appears fused with the target object outside the vehicle. This reminds the user of the type or position of the target object outside the vehicle, improves the display effect of the labeling information, and improves driving safety.
The light supplement device 110, the acquisition device 120, the processing device 130, and the sending device 140 may exchange data or instructions through wired communication (e.g., an interface circuit) or wireless communication (e.g., Bluetooth or Wi-Fi). For example, the light supplement device 110 may receive a control instruction from the processing device 130 through Bluetooth communication and start supplementary lighting of the target area outside the vehicle. After the acquisition device 120 acquires the infrared image information of the target area, it may transmit the infrared image information to the processing device 130 through Bluetooth communication; likewise, after it acquires the eye position information of the user, it may transmit that information to the processing device 130. The processing device 130 determines the target object in the infrared image and its spatial position according to the infrared image information, generates the labeling information, calculates the display position of the labeling information according to the spatial position of the target object and the eye position information of the user, and outputs the labeling information and its display position to the sending device 140, which displays the labeling information at the display position in the display area.
With this structure, the vehicle 100 according to this embodiment can acquire infrared image information of a target area outside the vehicle by means of infrared imaging to determine a target object in the target area, can acquire the eye position information of the user through the in-vehicle acquisition device, can determine the labeling information and its display position based on the infrared image information and the eye position information of the user, and can display the labeling information at the display position through the sending device. The labeling information seen by the user is thus fused with the target object outside the vehicle, which serves to remind the user and improves the safety of driving at night.
Fig. 4 is a flowchart illustrating a display method provided in an embodiment of the present application. The display method may be executed by a display device or a part of the display device, such as an AR-HUD, a car machine, or a processor, where the processor may be the processor of the display device or the processor of a vehicle-mounted processing device such as a car machine or a vehicle-mounted computer. Infrared imaging can be used to acquire infrared image information of a target area outside the vehicle, and the labeling information can be displayed at the corresponding position, so that even when the lighting conditions outside the vehicle are poor, the user can still determine the target object outside the vehicle and its position from the displayed labeling information, improving the safety of driving at night. As shown in fig. 4, the display method includes:
S410: Performing infrared light supplement on a target area outside the vehicle;
In this embodiment, when driving at night or on roads with poor lighting conditions, insufficient illumination degrades the imaging effect. A light supplement device may therefore be arranged on the vehicle; for example, an infrared supplementary light may be disposed on the top of the vehicle, on the head of the vehicle, or on a side of the rear-view mirror of the vehicle cabin facing the outside of the vehicle. The processor may send an instruction to the infrared supplementary light through an interface circuit to turn it on and perform infrared supplementary lighting on the target area outside the vehicle, while the acquisition device detects the target area outside the vehicle in real time and acquires infrared image information in real time. For example, the acquisition device may be an infrared camera, and the infrared image information may include information such as resolution, size, dimension, and color.
S420: Acquiring information of an infrared image of the target area;
The processor may send an image acquisition instruction to the infrared camera through the interface circuit to control the infrared camera to acquire an infrared image of the target area outside the vehicle. The target object is then identified in the acquired infrared image; the target object may be any object that can trigger an early warning while the vehicle is driving, specifically a pedestrian, a vehicle, an animal, or another obstacle. The processor may quickly identify target object features in the infrared image based on a recognition model to determine the target object and its position in the infrared image. The recognition model may be implemented by a neural network model or a deep learning model; specifically, different recognition models may be used for target objects of different forms. For example, when the target object is a pedestrian, the infrared image may be recognized by a portrait recognition model to determine the pedestrian and the position of the pedestrian in the infrared image.
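As an illustrative sketch only (not the patent's implementation), the per-category recognition-model dispatch described above might look as follows; the detector functions and their outputs are hypothetical stand-ins for real neural-network models:

```python
from typing import Callable, Dict, List, Tuple

# A detection is (label, x, y): the position of the object in the image.
Detection = Tuple[str, int, int]

def detect_pedestrians(frame) -> List[Detection]:
    # Stand-in for a portrait-recognition model; a real system would run
    # a neural network on the infrared frame here.
    return [("pedestrian", 320, 400)]

def detect_vehicles(frame) -> List[Detection]:
    # Stand-in for a vehicle-recognition model.
    return []

# Different recognition models for target objects of different forms.
MODELS: Dict[str, Callable] = {
    "pedestrian": detect_pedestrians,
    "vehicle": detect_vehicles,
}

def recognize(frame) -> List[Detection]:
    """Run every category-specific model and merge the detections."""
    detections: List[Detection] = []
    for model in MODELS.values():
        detections.extend(model(frame))
    return detections
```

Each detection carries the object's label and its image position, which is the input to the spatial-position calculation described in the following steps.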
S430: determining a position of the target object in the infrared image;
In this embodiment, the processor may determine the target object in the infrared image and its position in the infrared image according to the acquired information of the infrared image. For example, when the target object is a pedestrian, pedestrians differ in height, so for convenience of subsequent calculation the position of the pedestrian's feet in the infrared image may be taken as the position of the target object in the infrared image. The processor may also generate the labeling information of the target object according to the type of the target object in the infrared image. The labeling information is reminder information generated based on the target object in the infrared image; it may be, for example, a prompt box, a highlight mark, or an arrow mark, prompt text or a leader line, or an AR image with an AR effect.
The processor may determine the spatial position of the target object based on the position of the target object in the acquired infrared image together with the intrinsic and extrinsic parameters of the infrared camera, and may then determine the display position of the labeling information in the display area based on the spatial position of the target object and the spatial position of the user's eyes. The spatial position of the target object and the spatial position of the user's eyes may both be expressed in the vehicle coordinate system.
Fig. 5 is a flowchart of determining the display position of the labeling information. The display area may be the display area of the AR-HUD, which may be located on the front windshield of the vehicle. In this embodiment, determining the display position of the labeling information in the display area may be implemented as follows:
S431: Determining a spatial position of the target object based on a position of the target object in the infrared image;
Fig. 6 is a position distribution diagram of the front view angle between the vehicle and a pedestrian according to this embodiment; it can be regarded as an equivalent schematic diagram of the infrared image acquired in the field of view of the infrared camera. In fig. 6, the target object is a pedestrian in front of the vehicle, and specifically the position of the pedestrian's feet is taken as the position of the pedestrian in the infrared image. The horizon line corresponds to the farthest road surface captured by the infrared camera; the pixel distance from the pedestrian to the horizon line is a, the pixel distance to the vehicle head is b, and the pixel distances to the two sides of the infrared image are c and d, respectively. Because the parameters of the infrared camera are determined when it is installed, the positions of the horizon line, the vehicle head, and the two sides in the infrared image are fixed, and the distance of the pedestrian relative to the infrared camera can be calculated from the two-dimensional position of the pedestrian in the infrared image.
Fig. 7 is a position distribution diagram of the side view angle of the vehicle and the pedestrian according to this embodiment. In fig. 7, according to the installation position of the infrared camera, the height of the infrared camera above the ground is H, and the included angle between the nearest road surface visible to the infrared camera and the farthest road surface is θ, where θ may be expressed in radians. Since the installation position of the infrared camera is fixed at a certain position on the vehicle, the height H and the included angle θ can be treated as known parameters. Based on trigonometry, the horizontal distance L between the pedestrian and the infrared camera may be calculated as:
L = H / tan(θ·a / (a + b))

After simplification (using tan x ≈ x for the small angle θ·a/(a + b) expressed in radians), the following is obtained:

L ≈ H·(a + b) / (θ·a)
based on the calculated horizontal distance L between the pedestrian and the infrared camera and the spatial position of the infrared camera in the vehicle coordinate system, the spatial position of the pedestrian in the vehicle coordinate system, that is, the spatial position of the target object, can be determined.
S432: Acquiring the position information of the eyes of the user;
The user's eyes are detected by the in-vehicle acquisition device, which may be a camera or an eye tracker. According to the acquired position information of the user's eyes and the transformation between the installation position of the in-vehicle acquisition device and the vehicle coordinate system, the spatial position of the user's eyes in the vehicle coordinate system can be obtained.
S433: determining a display position of the labeling information based on the spatial position of the eyes of the user and the spatial position of the target object;
In this embodiment, the labeling information may be projected onto the front windshield of the vehicle for display, so that the user can view it head-up. The labeling information may be displayed at a fixed position on the front windshield: the user then does not need to shift their line of sight while driving, and can determine whether a target object such as a pedestrian, another vehicle, or an animal is outside the vehicle simply by glancing at that fixed position. The labeling information may also be displayed at varying positions on the front windshield: specifically, a line between the user's eyes and the pedestrian may be determined based on the spatial position of the user's eyes and the spatial position of the pedestrian, and the intersection of this line with the front windshield may then be taken as the display position of the labeling information. The spatial position of the intersection can be obtained by coordinate calculation: taking the vehicle coordinate system as the reference coordinate system, the spatial coordinates of the intersection on the front windshield are determined from the spatial coordinates of the user's eyes, the spatial coordinates of the pedestrian, and the spatial coordinates of the front windshield.
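The coordinate calculation of the intersection point can be sketched as a line-plane intersection, under the simplifying assumption that the front windshield is modeled as a flat plane given by a point and a normal vector in the vehicle coordinate system:

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def _sub(p: Vec3, q: Vec3) -> Vec3:
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def _dot(p: Vec3, q: Vec3) -> float:
    return p[0] * q[0] + p[1] * q[1] + p[2] * q[2]

def display_position(eye: Vec3, target: Vec3,
                     plane_point: Vec3, plane_normal: Vec3) -> Vec3:
    """Intersection of the eye-to-target line with the windshield plane.

    All coordinates are in the vehicle coordinate system. The returned
    point is where the labeling information should be displayed so that
    it overlays the target object from the user's point of view.
    """
    direction = _sub(target, eye)
    denom = _dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        raise ValueError("line of sight is parallel to the windshield plane")
    t = _dot(plane_normal, _sub(plane_point, eye)) / denom
    return (eye[0] + t * direction[0],
            eye[1] + t * direction[1],
            eye[2] + t * direction[2])
```

For example, with the eyes at (0, 0, 1.2), a pedestrian's feet at (30, 0, 0), and a vertical windshield plane at x = 1 with normal (1, 0, 0), the intersection lies at (1, 0, 1.16): slightly below eye level, on the line of sight to the pedestrian.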
Fig. 8 is a position distribution diagram of the top view angle of the vehicle and the pedestrian according to this embodiment. In fig. 8, the lateral distance e between the user's eyes and the infrared camera can be obtained from the spatial position of the user's eyes and the spatial position of the infrared camera. The horizontal field angle of the infrared camera is λ, the included angle of the line between the pedestrian and the infrared camera relative to the front windshield is α, and the included angle of the line between the pedestrian and the user's eyes relative to the front windshield is β; the angles α and β may also be expressed in radians. The included angle α is calculated as:
α = π/2 − (λ·c / (c + d) − λ/2)

After simplification, the following is obtained:

α = (π + λ)/2 − λ·c / (c + d)
Based on the included angle α, the included angle β is calculated as:

β = arctan(L / (L / tan α + e))
based on the calculated included angle beta and the distance between the eyes of the user and the front windshield, the spatial position of the intersection point between the line between the eyes of the user and the pedestrian and the front windshield of the vehicle can be calculated, and the display position of the marked information on the front windshield can be calculated.
The calculated display position is a spatial position in the vehicle coordinate system. Before projection, the spatial position coordinates in the vehicle coordinate system can be further converted into projection coordinates in the AR-HUD coordinate system, and the labeling information and the projection coordinates are then sent to the AR-HUD.
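The conversion from vehicle coordinates to AR-HUD coordinates can be sketched as a rigid transform; the rotation matrix R and translation vector t stand for a hypothetical extrinsic calibration of the AR-HUD, which the embodiment does not specify:

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Mat3 = List[List[float]]

def to_hud_coords(p_vehicle: Vec3, R: Mat3, t: Vec3) -> Vec3:
    """Convert a point from the vehicle coordinate system to the AR-HUD
    coordinate system as p_hud = R * p_vehicle + t, where R and t describe
    the pose of the AR-HUD relative to the vehicle (assumed calibration).
    """
    return tuple(
        sum(R[i][j] * p_vehicle[j] for j in range(3)) + t[i]
        for i in range(3)
    )
```

With R the identity and t a pure offset, the transform reduces to a translation; a calibrated system would supply R and t from the AR-HUD's installation pose.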
S440: and displaying the labeling information of the target object in a display area.
In this embodiment, the processor may send the generated labeling information and its display position to the AR-HUD through the interface circuit, and the AR-HUD projects the labeling information to the calculated display position on the front windshield, so that the labeling information the user sees stays on the same line of sight as the pedestrian and the user can quickly perceive the target object and its position while driving.
In some embodiments, the display size of the labeling information may be fixed, for example a fixed-size prompt box or AR image generated according to the target object, or fixed-size text or a leader line generated according to the target object. Sending fixed-size labeling information to the display position reminds the user that a target object exists outside the vehicle. In other embodiments, the display size of the labeling information may instead depend on the display position of the labeling information, the spatial position of the user's eyes, and the size of the target object. The spatial position of the user's eyes, the display position of the labeling information, and the spatial position of the target object together form a model similar to a viewing cone, so the display size of the labeling information at the display position can be determined from the size of the target object; the labeling information the user sees then matches the target object, and its display size changes with the relative distance between the vehicle and the target object. For example, the labeling information may take the form of a prompt box, and the display size of the prompt box may be calculated from the spatial position of the user's eyes, the spatial position of the pedestrian, and the display position of the prompt box, so that the prompt box the user sees matches the pedestrian. On this basis, as the vehicle travels and the distance between the pedestrian and the vehicle decreases, the size of the prompt box increases accordingly, reminding the user both of the pedestrian's presence and of the changing distance.
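The viewing-cone relation described above reduces to similar triangles: the on-windshield size equals the physical size of the target object scaled by the ratio of the eye-to-windshield distance to the eye-to-object distance. A sketch, with all distances assumed to be measured along the line of sight:

```python
def box_display_size(object_size: float,
                     eye_to_object: float,
                     eye_to_windshield: float) -> float:
    """Size of the prompt box on the windshield, by similar triangles.

    object_size       -- physical size of the target object (e.g. height), meters
    eye_to_object     -- distance from the user's eyes to the object, meters
    eye_to_windshield -- distance from the user's eyes to the display position, meters
    """
    return object_size * eye_to_windshield / eye_to_object
```

A 1.7 m pedestrian seen through a windshield 0.8 m from the eyes needs a box of about 0.045 m at 30 m, growing to 0.136 m at 10 m, which matches the behavior described above: the box enlarges as the pedestrian approaches.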
In summary, the display method provided by the embodiment of the present application detects the environment outside the vehicle by means of infrared imaging and determines the display position of the labeling information based on the spatial position of the target object and the spatial position of the user's eyes. By using infrared supplementary lighting and infrared imaging, this embodiment improves imaging capability under poor illumination; the infrared imaging is not affected by visible light, and the infrared supplementary lighting does not dazzle oncoming vehicles or pedestrians, avoiding traffic hazards. In addition, the display position of the labeling information is calculated from the spatial position of the user's eyes and the spatial position of the target object, which requires little computation and reduces the occupation of vehicle processing resources. Displaying the labeling information at the calculated display position allows the user, even when lighting outside the vehicle is poor, to determine the target object outside the vehicle and its position from the displayed labeling information, improving driving safety.
Fig. 9 is an architecture diagram of a display device according to an embodiment of the present application; the display device may be used to implement the various alternative embodiments of the display method described above. As shown in fig. 9, the display device has a light supplement module 910, an acquisition module 920, a processing module 930, and a sending module 940.
The light supplement module 910 is configured to execute step S410 in the display method and its examples. The acquisition module 920 is configured to execute step S420 in the display method and its examples. The processing module 930 is configured to execute any of steps S430 and S431-S433 in the display method and any optional example thereof. The sending module 940 is configured to execute step S440 in the display method and its examples. For details, reference may be made to the detailed description of the method embodiments, which is not repeated here.
The display device provided by this embodiment acquires an infrared image in front of the vehicle by means of infrared imaging, determines the display position of the labeling information according to the position of the target object in the infrared image and the position of the user's eyes inside the vehicle, and displays the labeling information at the corresponding display position. Through this display device, the labeling information the user sees matches the target object, so that the user can quickly perceive the target object outside the vehicle and its position, improving driving safety.
It should be understood that the display device in the embodiment of the present application may be implemented by software, for example by a computer program or instructions having the above functions; the corresponding computer program or instructions may be stored in a memory inside the terminal, and the processor reads them to implement the above functions. Alternatively, the display device may be implemented by hardware. For example, the light supplement module 910 may be implemented by a light supplement device on the vehicle, such as an infrared supplementary light or another device capable of infrared supplementary lighting. The acquisition module 920 may be implemented by an off-board or in-vehicle acquisition device on the vehicle, where the off-board acquisition device may be an infrared camera, an infrared radar, or the like, and the in-vehicle acquisition device may be a vehicle-mounted camera, an eye tracker, or the like; alternatively, the acquisition module 920 may be implemented by the interface circuit between the processor and the infrared camera or vehicle-mounted camera. The processing module 930 may be implemented by a processing device on the vehicle, for example the processor of a vehicle-mounted processing device such as a car machine or a vehicle-mounted computer, by the processor of a HUD or AR-HUD, or by a terminal such as a mobile phone or tablet. The sending module 940 may be implemented by a display device on the vehicle, for example by the HUD or AR-HUD or a part thereof, or by the interface circuit between the processor and the HUD or AR-HUD.
Alternatively, the display device in the embodiment of the present application may also be implemented by a combination of a processor and a software module.
It should be understood that, for details of processing of devices or modules in the embodiments of the present application, reference may be made to relevant expressions of the embodiments and relevant extended embodiments shown in fig. 1 to fig. 8, and details of the embodiments of the present application will not be repeated.
In addition, an embodiment of the present application further provides a vehicle with the above display device. The vehicle may be a family car, a cargo vehicle, or the like, or a special vehicle such as an ambulance, a fire engine, a police vehicle, or an engineering rescue vehicle. The vehicle is also provided with the light supplement device, the acquisition device, the processing device, the sending device, and the like. These modules and devices may be arranged in the vehicle system in a pre-installed or after-market manner; the modules may exchange data via the vehicle bus or interface circuits, or, with the development of wireless technology, via wireless communication, eliminating the inconvenience of wiring. Furthermore, the display device of this embodiment may be combined with the AR-HUD and installed on the vehicle in the form of vehicle-mounted equipment, achieving a better AR early-warning effect.
Fig. 10 is an architecture diagram of a computing device 1000 provided by an embodiment of the present application. The computing device may be a terminal, or a chip or chip system inside the terminal, and may serve as a display apparatus to execute the optional embodiments of the display method. As shown in fig. 10, the computing device 1000 includes: a processor 1010 and a memory 1020.
It is to be appreciated that the computing device 1000 illustrated in fig. 10 may also include a communication interface 1030 that may be used for communicating with other devices, and in particular may include one or more transceiver circuits or interface circuits.
The processor 1010 may be coupled to the memory 1020. The memory 1020 may be used to store program code and data. Therefore, the memory 1020 may be a memory module inside the processor 1010, an external memory module independent of the processor 1010, or a component including both a memory module inside the processor 1010 and an external memory module independent of the processor 1010.
Computing device 1000 may also include a bus. The memory 1020 and the communication interface 1030 may be connected to the processor 1010 through the bus. The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus.
It should be understood that, in the embodiment of the present application, the processor 1010 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor. Alternatively, the processor 1010 may use one or more integrated circuits to execute related programs to implement the technical solutions provided in the embodiments of the present application.
The memory 1020 may include both read-only memory and random access memory, and provides instructions and data to the processor 1010. A portion of the processor 1010 may also include non-volatile random access memory. For example, the processor 1010 may also store device type information.
When the computing device 1000 runs, the processor 1010 executes the computer-executable instructions in the memory 1020 to perform any operation step of the display method and any optional embodiment thereof. For example, the processor 1010 may execute the computer-executable instructions in the memory 1020 to perform the display method in the embodiment corresponding to fig. 4: when the vehicle travels on a poorly lit road at night, the processor 1010 executes the light supplement instruction in the memory 1020 to control the light supplement device of the vehicle to perform infrared supplementary lighting on the target area outside the vehicle; executes the acquisition instruction in the memory 1020 to control the acquisition device of the vehicle to acquire the information of the infrared image of the target area; executes the processing instructions in the memory 1020 to determine the position of the target object in the infrared image and the display position of the labeling information according to that position; and executes the sending instruction in the memory 1020 to control the sending device of the vehicle to display the labeling information of the target object in the display area.
It should be understood that the computing device 1000 according to the embodiment of the present application may correspond to the execution subject of the methods according to the embodiments of the present application, and that the above and other operations and/or functions of the modules in the computing device 1000 are intended to implement the corresponding flows of the methods of the embodiments; for brevity, details are not repeated here.
Fig. 11 is an architecture diagram of an electronic device 1100 according to an embodiment of the present application. The electronic device 1100 may serve as a display apparatus to perform the various alternative embodiments of the display method; it may be a terminal, or a chip or chip system inside the terminal. As shown in fig. 11, the electronic device 1100 includes a processor 1110 and an interface circuit 1120, where the processor 1110 accesses a memory through the interface circuit 1120, and the memory stores program instructions that, when executed by the processor, cause the processor to perform any of the operation steps of the display method described above and any optional embodiment thereof. For example, the processor 1110 may obtain computer-executable instructions from the memory through the interface circuit 1120 to perform the display method in the embodiment corresponding to fig. 4: when the vehicle travels on a poorly lit road at night, the processor 1110 obtains the light supplement instruction from the memory through the interface circuit 1120 to control the light supplement device of the vehicle to perform infrared supplementary lighting on the target area outside the vehicle; obtains the acquisition instruction to control the acquisition device of the vehicle to acquire the information of the infrared image of the target area; obtains the processing instruction to determine the position of the target object in the infrared image and the display position of the labeling information according to that position; and obtains the sending instruction to control the sending device of the vehicle to display the labeling information of the target object in the display area.
In addition, the electronic device may further include a communication interface, a bus, and the like; for details, reference may be made to the description of the embodiment shown in fig. 10, which is not repeated here.
Fig. 12 is an architecture diagram of a display system 1200 according to an embodiment of the present application. The display system 1200 may serve as a display apparatus to perform the various optional embodiments of the display method; it may be a terminal, or a chip system inside the terminal. As shown in fig. 12, the display system 1200 includes a vehicle-mounted device 1210 and an electronic device 1220 coupled to the vehicle-mounted device 1210. The electronic device 1220 may be the processing device 130 shown in fig. 1, the processing module 930 shown in fig. 9, the computing device 1000 shown in fig. 10, or the electronic device 1100 shown in fig. 11. The vehicle-mounted device 1210 may be the light supplement device 110 shown in fig. 1 or the light supplement module 910 shown in fig. 9, such as a vehicle headlight or an infrared supplementary light; it may also be the acquisition device 120 shown in fig. 1 or the acquisition module 920 shown in fig. 9, such as a vehicle-mounted camera, an infrared camera, or a radar; alternatively, it may be the sending device 140 shown in fig. 1 or the sending module 940 shown in fig. 9, such as an AR-HUD, or a part of a HUD or AR-HUD. In this embodiment, the display system 1200 may execute the display method in the embodiment corresponding to fig. 4: for example, the vehicle-mounted device 1210 may perform infrared supplementary lighting on the target area outside the vehicle and may also acquire infrared images of that target area, and the electronic device 1220 may determine the position of the target object in the acquired infrared image containing the target object and determine the display position of the labeling information according to that position.
The in-vehicle device 1210 may then display the annotation information at the determined display position. The in-vehicle device 1210 and the electronic device 1220 may exchange data or instructions through a wired connection (e.g., an interface circuit) or a wireless connection (e.g., Bluetooth or Wi-Fi).
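The display-position computation described here (and in claim 5) amounts to intersecting the driver's line of sight with the windshield: the annotation is drawn where the ray from the eye position to the target object crosses the glass. A minimal sketch, not taken from the patent — the function name, the coordinate frame, and the representation of the windshield as a plane are assumptions for illustration:

```python
import numpy as np

def annotation_display_position(eye_pos, target_pos, plane_point, plane_normal):
    """Intersect the eye-to-target line of sight with the windshield plane.

    All inputs are 3-vectors in a common vehicle coordinate frame (a
    hypothetical convention; the patent does not fix a frame). The
    windshield is approximated as a plane given by a point on it and its
    normal. Returns the 3D point where the annotation should be drawn,
    or None if the sight line is parallel to the plane.
    """
    eye = np.asarray(eye_pos, dtype=float)
    direction = np.asarray(target_pos, dtype=float) - eye  # first line of sight
    n = np.asarray(plane_normal, dtype=float)
    denom = direction @ n
    if abs(denom) < 1e-9:
        return None  # sight line never crosses the windshield plane
    # Solve eye + t * direction lying on the plane for the scalar t.
    t = ((np.asarray(plane_point, dtype=float) - eye) @ n) / denom
    return eye + t * direction
```

For example, with the eye at the origin, a target 10 m straight ahead, and a vertical plane 1 m ahead, the annotation lands 1 m ahead on the sight line. A real AR-HUD would additionally map this 3D point into the display's own 2D coordinates via its calibration.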
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing device, each unit may exist alone physically, or two or more units may be integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the portions of the technical solution of the present application that substantially contribute over the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs a display method that includes at least one of the schemes described in the above embodiments.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the embodiments described in this application are only a part of the embodiments of the present application, and not all embodiments. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the above detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", "third", and the like, or "module A", "module B", "module C", and the like, in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that specific orders or sequences may be interchanged, where permissible, to implement embodiments of the present application in orders other than those illustrated or described herein.
In the above description, reference numbers indicating steps, such as S410, S420, and the like, do not necessarily mean that the steps are executed in that order; where circumstances allow, intermediate steps may be included, steps may be replaced by other steps, and the order of preceding and succeeding steps may be interchanged or the steps may be executed simultaneously.
The term "comprising" as used in the specification and claims should not be construed as being limited to the items listed thereafter; it does not exclude other elements or steps. It should therefore be interpreted as specifying the presence of the stated features, integers, steps, or components, but does not preclude the presence or addition of one or more other features, integers, steps, components, or groups thereof. Thus, the expression "an apparatus comprising the devices A and B" should not be limited to an apparatus consisting only of the components A and B.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the application. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may be. Furthermore, in the various embodiments of the present application, unless otherwise specified or logically conflicting, terms and/or descriptions in different embodiments are consistent and may be mutually referenced, and technical features in different embodiments may be combined to form new embodiments according to their inherent logical relationships.
It should be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from its scope. Therefore, although the present application has been described in some detail with reference to the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit.

Claims (21)

Translated from Chinese
1. A display method, characterized by comprising:
performing infrared supplementary lighting on a target area outside a vehicle;
acquiring information of an infrared image of the target area, wherein the target area includes a target object;
determining a position of the target object in the infrared image; and
displaying annotation information of the target object in a display area according to the position of the target object in the infrared image.

2. The method according to claim 1, wherein a display position of the annotation information is related to a spatial position of the target object and a position of the user's eyes.

3. The method according to claim 2, wherein a display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and a size of the target object.

4. The method according to claim 2, wherein the display area is a display area of an augmented reality head-up display, and the display position of the annotation information being related to the spatial position of the target object and the position of the user's eyes comprises:
the display position of the annotation information is determined by a first line of sight of the user, the first line of sight being a line of sight from the position of the user's eyes to the spatial position of the target object.

5. The method according to claim 4, wherein the display area of the augmented reality head-up display is located on a front windshield of the vehicle, and the display position of the annotation information being determined by the first line of sight of the user comprises:
the display position of the annotation information is determined by an intersection of the first line of sight of the user and the front windshield of the vehicle.

6. The method according to any one of claims 2 to 5, wherein the spatial position of the target object is related to the position of the target object in the infrared image and to intrinsic and extrinsic parameters of an image acquisition apparatus that acquires the infrared image.

7. The method according to claim 1, wherein the target object includes one or more of other vehicles, pedestrians, and animals.

8. The method according to claim 1, wherein before the determining the position of the target object in the infrared image, the method further comprises:
performing one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.

9. A display apparatus, characterized by comprising:
a light supplement module, configured to perform infrared supplementary lighting on a target area outside a vehicle;
an acquisition module, configured to acquire information of an infrared image of the target area, wherein the target area includes a target object;
a processing module, configured to determine a position of the target object in the infrared image; and
a sending module, configured to display annotation information of the target object in a display area according to the position of the target object in the infrared image.

10. The apparatus according to claim 9, wherein a display position of the annotation information is related to a spatial position of the target object, a position of the user's eyes, and a size of the target object.

11. The apparatus according to claim 10, wherein a display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and the size of the target object.

12. The apparatus according to claim 10, wherein the display area is a display area of an augmented reality head-up display, and the display position of the annotation information being related to the spatial position of the target object and the position of the user's eyes comprises:
the display position of the annotation information is determined by a first line of sight of the user, the first line of sight being a line of sight from the position of the user's eyes to the spatial position of the target object.

13. The apparatus according to claim 12, wherein the display area of the augmented reality head-up display is located on the front windshield of the vehicle, and the display position of the annotation information being determined by the first line of sight of the user comprises:
the display position of the annotation information is determined by an intersection of the first line of sight of the user and the front windshield of the vehicle.

14. The apparatus according to any one of claims 10 to 13, wherein the spatial position of the target object is related to the position of the target object in the infrared image and to intrinsic and extrinsic parameters of an image acquisition apparatus that acquires the infrared image.

15. The apparatus according to claim 9, wherein the target object includes one or more of other vehicles, pedestrians, and animals.

16. The apparatus according to claim 9, wherein before determining the position of the target object in the infrared image, the processing module is further configured to:
perform one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.

17. A computing device, characterized by comprising:
a processor, and
a memory having program instructions stored thereon, wherein the program instructions, when executed by the processor, cause the processor to execute the display method according to any one of claims 1 to 8.

18. An electronic apparatus, characterized by comprising:
a processor, and an interface circuit, wherein the processor accesses a memory through the interface circuit, the memory storing program instructions that, when executed by the processor, cause the processor to execute the display method according to any one of claims 1 to 8.

19. A display system, characterized by comprising:
a vehicle-mounted device, and the computing device according to claim 17 coupled to the vehicle-mounted device, or the electronic apparatus according to claim 18 coupled to the vehicle-mounted device.

20. A computer-readable storage medium having program instructions stored thereon, wherein the program instructions, when executed by a computer, cause the computer to execute the display method according to any one of claims 1 to 8.

21. A computer program product, characterized by comprising program instructions that, when executed by a computer, cause the computer to execute the display method according to any one of claims 1 to 8.
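Claims 6 and 14 state that the target object's spatial position follows from its position in the infrared image together with the camera's intrinsic and extrinsic parameters. As a hedged illustration (not part of the patent — the function name, the source of the depth value, and the coordinate conventions are assumptions), the standard back-projection this relation implies can be sketched as:

```python
import numpy as np

def pixel_to_vehicle_frame(pixel_uv, depth, K, R, t):
    """Back-project an image position to a 3D point in the vehicle frame.

    K is the camera's 3x3 intrinsic matrix; (R, t) is its extrinsic pose
    mapping camera coordinates to vehicle coordinates. `depth` is the
    distance along the camera's optical axis, assumed to come from some
    other sensor (e.g. a radar); the patent does not specify its source.
    """
    u, v = pixel_uv
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized camera ray
    p_cam = ray_cam * depth                              # 3D point in camera frame
    return R @ p_cam + t                                 # transform to vehicle frame
```

With the resulting 3D point, the annotation's display position can then be found by intersecting the eye-to-target sight line with the windshield, as the claims describe.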
CN202180001862.4A | 2021-06-22 | 2021-06-22 | Display method, display device, display equipment and vehicle | Pending | CN113597617A (en)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
PCT/CN2021/101446 WO2022266829A1 (en) | 2021-06-22 | 2021-06-22 | Display method and apparatus, device, and vehicle

Publications (1)

Publication Number | Publication Date
CN113597617A (en) | 2021-11-02

Family

ID=78242910

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202180001862.4A | Pending | CN113597617A (en) | Display method, display device, display equipment and vehicle

Country Status (2)

Country | Link
CN (1) | CN113597617A (en)
WO (1) | WO2022266829A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113984087A (en) * | 2021-11-08 | 2022-01-28 | Vivo Mobile Communication Co., Ltd. | Navigation method, navigation device, electronic equipment and readable storage medium
CN114296239A (en) * | 2021-12-31 | 2022-04-08 | Hozon New Energy Automobile Co., Ltd. | Image display method and device for vehicle window
CN114290989A (en) * | 2021-12-21 | 2022-04-08 | Neusoft Reach Automotive Technology (Shenyang) Co., Ltd. | Prompting method, vehicle and computer readable storage medium
CN114999225A (en) * | 2022-05-13 | 2022-09-02 | Hisense Group Holdings Co., Ltd. | Information display method for road object and vehicle
CN115065818A (en) * | 2022-06-16 | 2022-09-16 | Nanjing Horizon Integrated Circuit Co., Ltd. | Projection method and device of head-up display system
CN115272531A (en) * | 2022-06-30 | 2022-11-01 | China FAW Co., Ltd. | Data display method, system and storage medium
CN115767439A (en) * | 2022-12-02 | 2023-03-07 | Kyland Technology (Yichang) Co., Ltd. | Object position display method and device, storage medium and electronic equipment
WO2024104023A1 (en) * | 2022-11-18 | 2024-05-23 | Tencent Technology (Shenzhen) Co., Ltd. | Information display method and related apparatus

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116800165A (en) * | 2023-08-21 | 2023-09-22 | Jiangsu Heyi Intelligent Technology Co., Ltd. | Automatic frequency conversion energy-saving speed regulation control system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105799593A (en) * | 2016-03-18 | 2016-07-27 | BOE Technology Group Co., Ltd. | Auxiliary driving device for vehicle
CN110203140A (en) * | 2019-06-28 | 2019-09-06 | WM Smart Mobility Technology (Shanghai) Co., Ltd. | Automobile augmented reality display method, electronic equipment, system and automobile
US20200247651A1 (en) * | 2019-02-01 | 2020-08-06 | Autoequips Tech Co., Ltd. | Multi-function camera system
CN112896159A (en) * | 2021-03-11 | 2021-06-04 | Ningbo Joynext Technology Co., Ltd. | Driving safety early warning method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN202713506U (en) * | 2012-03-22 | 2013-01-30 | Chery Automobile Co., Ltd. | Anti-dazzling vehicle-mounted night vision system
WO2017209313A1 (en) * | 2016-05-30 | 2017-12-07 | LG Electronics Inc. | Vehicle display device and vehicle
KR102372265B1 (en) * | 2019-06-05 | 2022-03-10 | Korea Electronics Technology Institute | Method of determining augmented reality information in vehicle and apparatus thereof
CN110304057A (en) * | 2019-06-28 | 2019-10-08 | WM Smart Mobility Technology (Shanghai) Co., Ltd. | Automobile collision warning, navigation method, electronic device, system, and automobile
CN210139817U (en) * | 2019-06-28 | 2020-03-13 | WM Smart Mobility Technology (Shanghai) Co., Ltd. | Automobile augmented reality display system and automobile
CN112714266B (en) * | 2020-12-18 | 2023-03-31 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Method and device for displaying labeling information, electronic equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105799593A (en) * | 2016-03-18 | 2016-07-27 | BOE Technology Group Co., Ltd. | Auxiliary driving device for vehicle
US20200247651A1 (en) * | 2019-02-01 | 2020-08-06 | Autoequips Tech Co., Ltd. | Multi-function camera system
CN110203140A (en) * | 2019-06-28 | 2019-09-06 | WM Smart Mobility Technology (Shanghai) Co., Ltd. | Automobile augmented reality display method, electronic equipment, system and automobile
CN112896159A (en) * | 2021-03-11 | 2021-06-04 | Ningbo Joynext Technology Co., Ltd. | Driving safety early warning method and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113984087A (en) * | 2021-11-08 | 2022-01-28 | Vivo Mobile Communication Co., Ltd. | Navigation method, navigation device, electronic equipment and readable storage medium
CN114290989A (en) * | 2021-12-21 | 2022-04-08 | Neusoft Reach Automotive Technology (Shenyang) Co., Ltd. | Prompting method, vehicle and computer readable storage medium
CN114296239A (en) * | 2021-12-31 | 2022-04-08 | Hozon New Energy Automobile Co., Ltd. | Image display method and device for vehicle window
CN114999225A (en) * | 2022-05-13 | 2022-09-02 | Hisense Group Holdings Co., Ltd. | Information display method for road object and vehicle
CN114999225B (en) * | 2022-05-13 | 2024-03-08 | Hisense Group Holdings Co., Ltd. | Information display method of road object and vehicle
CN115065818A (en) * | 2022-06-16 | 2022-09-16 | Nanjing Horizon Integrated Circuit Co., Ltd. | Projection method and device of head-up display system
CN115272531A (en) * | 2022-06-30 | 2022-11-01 | China FAW Co., Ltd. | Data display method, system and storage medium
WO2024104023A1 (en) * | 2022-11-18 | 2024-05-23 | Tencent Technology (Shenzhen) Co., Ltd. | Information display method and related apparatus
CN115767439A (en) * | 2022-12-02 | 2023-03-07 | Kyland Technology (Yichang) Co., Ltd. | Object position display method and device, storage medium and electronic equipment

Also Published As

Publication number | Publication date
WO2022266829A1 (en) | 2022-12-29

Similar Documents

Publication | Publication Date | Title
CN113597617A (en) | Display method, display device, display equipment and vehicle
CN107499307B (en) | Automatic parking assist apparatus and vehicle including the same
EP2860971B1 (en) | Display control apparatus, method, recording medium, and vehicle
CN109572555B (en) | Shielding information display method and system applied to unmanned vehicle
US9723243B2 (en) | User interface method for terminal for vehicle and apparatus thereof
KR102043060B1 (en) | Autonomous drive apparatus and vehicle including the same
KR101942793B1 (en) | Driver Assistance Apparatus and Vehicle Having The Same
KR101855940B1 (en) | Augmented reality providing apparatus for vehicle and control method for the same
CN114555401A (en) | Display system, display device, display method, and mobile device
US20200380257A1 (en) | Autonomous vehicle object content presentation systems and methods
KR101843538B1 (en) | Driver assistance appratus and method thereof
US20210268961A1 (en) | Display method, display device, and display system
CN113022441A (en) | Vehicle blind area detection method and device, electronic equipment and storage medium
JP7127565B2 (en) | Display control device and display control program
CN110758234A (en) | Vehicle lamp projection method and related product
US20220203888A1 (en) | Attention calling device, attention calling method, and computer-readable medium
JP7283448B2 (en) | Display controller and display control program
CN113581196A (en) | Vehicle driving early warning method and device, computer equipment and storage medium
JP7014205B2 (en) | Display control device and display control program
KR20170035238A (en) | Vehicle and control method for the same
KR102428420B1 (en) | Smart Road Information System for Blind Spot Safety
CN207059958U (en) | AR optical projection systems for vehicle safe driving
JP7432198B2 (en) | Situation awareness estimation system and driving support system
CN111086518B (en) | Display method and device, vehicle-mounted head-up display equipment and storage medium
KR20170069096A (en) | Driver Assistance Apparatus and Vehicle Having The Same

Legal Events

Date | Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right
Effective date of registration: 2024-11-08
Address after: 518129 Huawei Headquarters Office Building 101, Wankecheng Community, Bantian Street, Longgang District, Shenzhen, Guangdong
Applicant after: Shenzhen Yinwang Intelligent Technology Co., Ltd.
Country or region after: China
Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen
Applicant before: HUAWEI TECHNOLOGIES Co., Ltd.
Country or region before: China
