Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
In modern society, the automobile is not merely a means of transportation: more and more new technologies are being provided to improve the driving experience while meeting users' travel demands. A head-up display (HUD) may project important driving data onto the windshield to form a projected virtual image picture that varies in real time with the driving data, so that a user can see real-time driving information without lowering his or her head.
In the prior art, because the display styles of the virtual images projected by the head-up display (HUD) differ between driving modes, the overall page layout of the projected virtual image also differs between modes. When the driving mode is switched, the page layout of the virtual image projected by the HUD is therefore switched as a whole, so that the projected virtual image is discontinuous and the picture viewed by the user is neither smooth nor stable. In view of this problem, embodiments of the present disclosure provide a picture display method, which is described below in connection with specific embodiments.
Fig. 1 is a flowchart of a picture display method according to an embodiment of the present disclosure. The method may be performed by a picture display device, which may be implemented in software and/or hardware and may be configured in an electronic device such as a server or a terminal; the terminal may specifically be an electric vehicle, a fuel vehicle, a hybrid vehicle, or the like, and the device may be, for example, the vehicle control system of a vehicle. The method can be applied to picture display scenarios, and it can be understood that the picture display method provided by the embodiment of the disclosure can also be applied to other scenarios.
The picture display method shown in Fig. 1 includes the following specific steps:
S101: in a current driving mode, a virtual image picture projected by the head-up display device comprises target information, and the target information is presented as a first display identifier.
The vehicle has a plurality of driving modes, such as a cruise mode, an intersection navigation mode and an intelligent driving mode. In the current driving mode of the vehicle, the vehicle control system acquires the virtual image picture projected by the head-up display device. The virtual image picture is a virtual image visible to human eyes, formed by reflecting image light carrying display information (such as vehicle driving data) off the vehicle windshield, and it comprises the target information, which is presented as the first display identifier.
Optionally, the target information includes at least one of: vehicle information, steering information, road information.
S102: when the current driving mode is switched to a target driving mode, the target information is switched from the first display identifier to a second display identifier.
When the vehicle switches from the current driving mode to the target driving mode (the current driving mode and the target driving mode being two different driving modes), the target information switches from the first display identifier to the second display identifier, the second display identifier being the display form of the target information in the virtual image picture projected by the head-up display device in the target driving mode.
Optionally, switching the target information from the first display identifier to the second display identifier may be implemented in a smooth transition manner, for example comprising: performing perspective transformation on the first display identifier of the target information to obtain the second display identifier of the target information, so that the first display identifier transitions smoothly into the second display identifier.
Specifically, perspective transformation is performed on the first display identifier of the target information: the first display identifier gradually becomes transparent while the second display identifier gradually becomes clear, finally yielding the second display identifier of the target information, so that the first display identifier transitions smoothly into the second display identifier.
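By way of illustration only, the cross-fade described above (the first identifier fading out while the second fades in) may be sketched as a simple linear opacity schedule. The function name and the linear curve are assumptions for this sketch, not features stated by the embodiment:

```python
def crossfade_opacities(t: float, duration: float):
    """Linear cross-fade between two display identifiers.

    Returns (opacity_first, opacity_second): the first identifier's
    opacity falls from 1 to 0 while the second rises from 0 to 1.
    A sketch; a real implementation may use a non-linear easing curve.
    """
    p = min(max(t / duration, 0.0), 1.0)  # progress clamped to [0, 1]
    return 1.0 - p, p
```

For example, halfway through the transition both identifiers would be at 50% opacity, which is what makes the hand-off appear seamless.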
According to the embodiment of the disclosure, in the current driving mode the virtual image picture projected by the head-up display device comprises the target information, the target information is presented as the first display identifier, and the presentation form of the target information in the current driving mode is thereby defined; when the current driving mode is switched to the target driving mode, the target information switches from the first display identifier to the second display identifier. Compared with switching the whole page layout as in the prior art, only the display form of the target information is changed.
In some embodiments, in the current driving mode, the virtual image picture projected by the head-up display device further includes resident information; when the current driving mode is switched to the target driving mode, the display of the resident information in the virtual image picture is unchanged.
In the current driving mode of the vehicle, the vehicle control system acquires the virtual image picture projected by the head-up display device, the virtual image picture being formed by reflecting image light carrying display data off the windshield of the vehicle. The virtual image picture comprises resident information, and when the vehicle switches from the current driving mode to the target driving mode (the current driving mode and the target driving mode being two different driving modes), the display of the resident information in the virtual image picture is unchanged.
Optionally, the resident information includes at least one of: current speed information of the vehicle, speed limit information of the road on which the vehicle is currently travelling, turn signal information of the vehicle, warning information about vehicle faults, and warning light information indicating non-conformance with the vehicle's driving specifications.
Specifically, as shown in Fig. 4, the resident information is the current speed information of the vehicle, the speed limit information of the road on which the vehicle is currently travelling, and the distance between the vehicle and the intersection: the current speed of the vehicle is 58 km/h, the speed limit of the current road is 80 km/h, and the distance between the vehicle and the intersection is 2.5 km.
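The persistence of the resident information across a mode switch can be sketched as follows; the frame structure, field names and example values are purely illustrative and not taken from the embodiment:

```python
def build_frame(mode: str, resident_info: dict, target_identifier: str) -> dict:
    """Compose one virtual image frame.

    The resident information is carried into the frame unchanged regardless
    of the driving mode; only the target information's display identifier
    varies with the mode. A sketch with hypothetical field names.
    """
    return {
        "mode": mode,
        "resident": dict(resident_info),      # copied verbatim, never restyled
        "target_identifier": target_identifier,
    }
```

Building frames for two different modes with the same resident information then yields identical resident content, while only the target identifier differs.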
In the embodiment of the disclosure, in the current driving mode, the virtual image picture projected by the head-up display device also comprises resident information, and when the current driving mode is switched to the target driving mode, the display of the resident information in the virtual image picture is unchanged. Compared with switching the whole page layout as in the prior art, in this embodiment the display of the resident information in the virtual image picture projected by the head-up display device is unchanged and the whole page layout need not be switched, so that the virtual image picture is more coherent and smooth, the display stability of the virtual image picture is improved in terms of visual effect, and the user experience is improved.
In some embodiments, the same target information has at least two user interface display identifiers; the display of the user interface display identifier is determined by a driving mode; any two of the at least two user interface display identifiers differ in a property, the property comprising at least one of shape, color, and number.
The same target information has at least two user interface display identifiers; the display of a user interface display identifier is determined by the driving mode, and the driving modes comprise a cruise mode, an intersection navigation mode, an intelligent driving mode and the like. Any two of the at least two user interface display identifiers differ in an attribute, the attribute comprising at least one of shape, color and number. For example, if the target information is road information, the road information has two user interface display identifiers: a 3D road display identifier, shown in Fig. 2, and a standard-map 2D road display identifier, shown in Fig. 3. The 3D road display identifier is black in color and the number of lanes it shows is consistent with the actual number of lanes; the 2D road display identifier is green in color and does not show the specific number of lanes. If the target information is steering information, the steering information has two user interface display identifiers: a 3D dynamic-icon display identifier and a standard-map 2D steering display identifier. The 3D dynamic-icon display identifier is blue in color, its shape is an arrow displayed in an augmented reality (AR) manner, and the number of AR arrows is a preset number, which may specifically be 5; the 2D steering display identifier is white in color, its shape is smaller than the AR arrow, and its number is 1.
Optionally, when the target information is vehicle information, the user interface display identifiers of the vehicle information at least comprise a navigation arrow, a dynamic icon and a vehicle model; the driving modes at least comprise a cruise mode, an intersection navigation mode and an intelligent driving mode. The display of the user interface display identifier being determined by the driving mode comprises: in the cruise mode, the user interface display identifier is the navigation arrow; in the intersection navigation mode, the user interface display identifier is the dynamic icon; in the intelligent driving mode, the user interface display identifier is the vehicle model. Any two of the navigation arrow, the dynamic icon and the vehicle model differ in an attribute, the attribute comprising at least one of shape, color and number.
Specifically, when the target information is vehicle information, the user interface display identifiers of the vehicle information at least comprise a navigation arrow, a dynamic icon and a vehicle model, wherein the AR arrow closest to the ground (closest to the vehicle) among the navigation arrows and dynamic icons, as well as the vehicle model, can represent the current position of the vehicle. The driving modes at least comprise a cruise mode, an intersection navigation mode and an intelligent driving mode. The application scenario of the cruise mode is that the driver does not know the route and has started navigation; the application scenario of the intersection navigation mode is that navigation is on and the vehicle is within a preset distance of an intersection at which its travelling direction will change, the change of travelling direction specifically being a left turn, a right turn or a U-turn; the application scenario of the intelligent driving mode is that the driver knows the route and does not need navigation.
The display of the user interface display identifier being determined by the driving mode comprises: in the cruise mode, the user interface display identifier of the vehicle information is a navigation arrow (including a TBT arrow and an arrow in a drivable lane); in the intersection navigation mode, the user interface display identifier of the vehicle information is a dynamic icon; in the intelligent driving mode, the user interface display identifier of the vehicle information is a vehicle model. The navigation arrow, the dynamic icon and the vehicle model differ in attributes, the attributes comprising at least one of shape, color and number, and the three differ in shape: the navigation arrow is white in color, its shape is an arrow smaller than the AR arrow, and its number is 1; the dynamic icon is blue in color, its shape is an AR arrow larger than the navigation arrow, and its number is a preset number, which may be 5; the vehicle model is black and white in color, its shape is a vehicle model, and its number is 1.
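The mode-to-identifier correspondence described above may be sketched as a simple lookup table. The dictionary keys, attribute names and values below merely mirror the example values given in this embodiment and are otherwise hypothetical:

```python
# Illustrative attribute table for the vehicle-information identifiers:
# one user interface display identifier per driving mode, each differing
# in at least one of color and number (and, per the text, shape).
IDENTIFIERS = {
    "cruise": {"name": "navigation_arrow", "color": "white", "count": 1},
    "intersection_navigation": {"name": "dynamic_icon", "color": "blue", "count": 5},
    "intelligent_driving": {"name": "vehicle_model", "color": "black_and_white", "count": 1},
}

def identifier_for_mode(mode: str) -> dict:
    """Return the user interface display identifier used in a driving mode."""
    return IDENTIFIERS[mode]
```

A mode switch then amounts to looking up the entries for the current and target modes and animating between them.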
The embodiment of the disclosure describes that the same target information has at least two user interface display identifiers; that the display of a user interface display identifier is determined by the driving mode; and that any two of the at least two user interface display identifiers differ in an attribute, the attribute comprising at least one of shape, color and number. Because the display styles of the target information in the virtual image pictures projected by the HUD differ between driving modes, the emphases and effects of the reminders they provide also differ, which helps the user acquire information quickly and improves the user's viewing experience in terms of visual effect.
In some embodiments, when the current driving mode is switched to the target driving mode, switching the target information from the first display identifier to the second display identifier comprises: when the current driving mode is the cruise mode and the target driving mode is the intersection navigation mode, the second display identifier upon switching from the cruise mode to the intersection navigation mode is a dynamic icon; perspective transformation is performed on the navigation arrow to obtain an AR target arrow; and the target arrow is dynamically processed to obtain the dynamic icon, so that the target information transitions smoothly from the navigation arrow into the dynamic icon.
Fig. 4 is a schematic view of a vehicle cruise mode, Fig. 5 is a schematic view of the vehicle switching from the cruise mode to the intersection navigation mode, and Fig. 6 is a schematic view of a vehicle intersection navigation mode. As shown in Figs. 4-6, when the current driving mode of the vehicle is the cruise mode, the target driving mode is the intersection navigation mode, and the vehicle switches from the cruise mode to the intersection navigation mode, the second display identifier of the vehicle information in the intersection navigation mode is a dynamic icon. The switching process of the vehicle information from the first display identifier (the navigation arrow) to the second display identifier (the dynamic icon) is as follows: perspective transformation is performed on the navigation arrow, that is, the transparency of the navigation arrow gradually increases from 0% to 100% until the navigation arrow disappears, while the transparency of the AR arrow decreases from 100% to 0%, that is, the AR arrow gradually becomes clear, so as to obtain the AR arrow. The target arrows are then dynamically processed to obtain the dynamic icon, which comprises a preset number of AR arrows, the AR arrow closest to the vehicle indicating the current position of the vehicle; the preset number of target arrows first gradually fly out, forming the virtual image picture shown in Fig. 6 as the vehicle turns right at the intersection, so that the target information transitions smoothly from the navigation arrow into the dynamic icon, i.e. the switching process is a very smooth dynamic process. A specific smooth linking manner comprises: the target information flies out in a first preset direction within a first preset time and flies in from a second preset direction within a second preset time. The first preset time and the second preset time may be set manually or by the system and may be the same or different; for example, the first preset time may be 3 s and the second preset time may be 3 s or 5 s. The first preset direction and the second preset direction may likewise be set manually or by the system and may be the same or different; for example, the first preset direction may be upward and the second preset direction may be upward or downward. It can be understood that, when the vehicle switches from the cruise mode to the intersection navigation mode, the road information switches from the standard-map 2D road display identifier to the 3D road display identifier, and the steering information switches from the standard-map 2D steering display identifier to the 3D dynamic-icon display identifier. Road information, steering information and vehicle information below are likewise switched smoothly along with the switching of the driving mode, which is not described in detail in this embodiment.
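A minimal sketch of the fly-out/fly-in smooth link described above, assuming linear progress and the example 3 s preset times; the function name, phase names and defaults are illustrative assumptions:

```python
def transition_phase(t: float, t_out: float = 3.0, t_in: float = 3.0):
    """Two-phase smooth link between display identifiers.

    The old identifier flies out over the first preset time (t_out), then
    the new identifier flies in over the second preset time (t_in).
    Returns (phase_name, fractional_progress_within_phase).
    """
    if t < t_out:
        return ("fly_out", t / t_out)
    if t < t_out + t_in:
        return ("fly_in", (t - t_out) / t_in)
    return ("done", 1.0)
```

The preset directions would simply parameterize which screen edge each phase animates toward or from; they are omitted here for brevity.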
In some embodiments, when the current driving mode is switched to the target driving mode, switching the target information from the first display identifier to the second display identifier comprises: when the current driving mode is the intersection navigation mode and the target driving mode is the intelligent driving mode, the second display identifier upon switching from the intersection navigation mode to the intelligent driving mode is a vehicle model; and perspective transformation is performed on the dynamic icon to obtain the vehicle model, so that the target information transitions smoothly from the dynamic icon into the vehicle model.
When the current driving mode is the intersection navigation mode and the target driving mode is the intelligent driving mode, the second display identifier upon switching from the intersection navigation mode to the intelligent driving mode is a vehicle model. Perspective transformation is performed on the dynamic icon, that is, the transparency of the AR arrow in the dynamic icon gradually increases from 0% to 100% until the AR arrow disappears, while the transparency of the vehicle model decreases from 100% to 0%, that is, the vehicle model gradually becomes clear, so as to obtain the vehicle model, and the target information thus transitions smoothly from the dynamic icon into the vehicle model.
Optionally, in a case where the head-up display device projects at least two virtual images, the at least two virtual images include a first virtual image and a second virtual image, and an imaging distance of the first virtual image is smaller than an imaging distance of the second virtual image; the target information is smoothly linked into the vehicle model by the dynamic icon, and the method comprises the following steps: determining a first position of a two-dimensional image corresponding to a target arrow in the dynamic icon in the first virtual image; determining a second position of a target arrow in the dynamic icon in the second virtual image according to the first position; and smoothly linking the dynamic icons to the vehicle model at the second position.
In particular, in the case where at least two virtual images are projected by the head-up display apparatus, the at least two virtual images include a first virtual image and a second virtual image, wherein the imaging distance of the first virtual image is smaller than that of the second virtual image. Smoothly linking the target information from the dynamic icon into the vehicle model comprises: determining a first position, in the first virtual image, of a two-dimensional image corresponding to the AR arrow in the dynamic icon (i.e. the AR arrow in planar form), and switching the AR arrow from the second virtual image to the first virtual image; determining, according to the first position, a second position of the AR arrow of the dynamic icon in the second virtual image; and smoothly linking the dynamic icon into the vehicle model at the second position, wherein the AR arrow flies out in a third preset direction within a third preset time and the vehicle model flies in from a fourth preset direction within a fourth preset time.
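One plausible way to derive the second position from the first is to project the point along the viewer's line of sight using the ratio of the two imaging distances. This similar-triangles model, and the assumption that both virtual images are planes perpendicular to the viewing axis, are simplifications for illustration and not stated features of the embodiment:

```python
def map_to_far_image(pos: tuple, near_dist: float, far_dist: float) -> tuple:
    """Project a point on the near (first) virtual image onto the far
    (second) virtual image along the line of sight, with the eye at the
    origin. Under the plane-perpendicular assumption, coordinates simply
    scale with the ratio of imaging distances (similar triangles)."""
    scale = far_dist / near_dist
    return (pos[0] * scale, pos[1] * scale)
```

With this mapping, an element that is visually anchored at a given direction in the near image lands at the corresponding direction in the far image, so the hand-off between the two virtual images does not appear to jump.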
In some embodiments, when the current driving mode is switched to the target driving mode, switching the target information from the first display identifier to the second display identifier comprises: when the current driving mode is the intelligent driving mode and the target driving mode is the intersection navigation mode, the second display identifier upon switching from the intelligent driving mode to the intersection navigation mode is a dynamic icon; and perspective transformation is performed on the vehicle model to obtain the dynamic icon, so that the target information transitions smoothly from the vehicle model into the dynamic icon.
Optionally, in a case where the head-up display device projects at least two virtual images, the at least two virtual images include a first virtual image and a second virtual image, and an imaging distance of the first virtual image is smaller than an imaging distance of the second virtual image; the target information is smoothly linked into the dynamic icon by the vehicle model, and the method comprises the following steps: determining a first position of the vehicle model in the first virtual image; determining a second position of the vehicle model in the second virtual image according to the first position; and smoothly linking the vehicle model into the dynamic icon at the second position.
Fig. 7 is a schematic diagram of a vehicle intelligent driving mode, Figs. 8 and 9 are schematic diagrams of the vehicle switching from the intelligent driving mode to the intersection navigation mode, and Fig. 10 is a schematic diagram of a vehicle intersection navigation mode. As shown in Figs. 7-10, when the current driving mode is the intelligent driving mode, the target driving mode is the intersection navigation mode, and the vehicle switches from the intelligent driving mode to the intersection navigation mode, the second display identifier is a dynamic icon. The switching process of the vehicle information from the first display identifier (the vehicle model) to the second display identifier (the dynamic icon) is as follows: a first position of the vehicle model in the first virtual image is determined, and the vehicle model is switched from the second virtual image to the first virtual image; then, when the intelligent-driving display is hidden, a second position in the second virtual image is determined according to the first position of the vehicle model in the first virtual image, and the vehicle model is switched into an AR arrow at the second position.
In some embodiments, when the current driving mode is switched to the target driving mode, switching the target information from the first display identifier to the second display identifier comprises: when the current driving mode is the intelligent driving mode and the target driving mode is the cruise mode, the second display identifier upon switching from the intelligent driving mode to the cruise mode is a navigation arrow; and perspective transformation is performed on the vehicle model to obtain the navigation arrow, so that the target information transitions smoothly from the vehicle model into the navigation arrow.
Fig. 11 is a schematic view of a vehicle intelligent driving mode, Fig. 12 is a schematic view of the vehicle switching from the intelligent driving mode to the cruise mode, and Fig. 13 is a schematic view of a vehicle cruise mode. As shown in Figs. 11-13, when the current driving mode is the intelligent driving mode and the target driving mode is the cruise mode, the second display identifier upon switching from the intelligent driving mode to the cruise mode is a navigation arrow. The switching process of the vehicle information from the first display identifier (the vehicle model) to the second display identifier (the navigation arrow) is as follows: perspective transformation is performed on the vehicle model, that is, the transparency of the vehicle model gradually increases from 0% to 100% until the vehicle model disappears, while the transparency of the navigation arrow gradually decreases from 100% to 0%, that is, the navigation arrow gradually becomes clear, so as to obtain the navigation arrow, and the target information thus transitions smoothly from the vehicle model into the navigation arrow.
It can be understood that, in the same virtual image picture projected by the HUD, only one display identifier of a given piece of target information is displayed; that is, different user interface display identifiers of the same target information are displayed mutually exclusively in the same virtual image picture.
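The mutual-exclusion rule can be sketched as follows; the data shapes and the first-wins policy are illustrative assumptions:

```python
def visible_identifiers(candidates: list) -> dict:
    """Enforce mutual exclusion of display identifiers within one frame.

    For each piece of target information, at most one display identifier
    appears in a single virtual image picture: the first identifier offered
    for each target is kept and any later candidates are dropped.
    """
    shown = {}
    for target, identifier in candidates:
        shown.setdefault(target, identifier)
    return shown
```

During a mode switch the animation layer would swap which identifier is offered first; the frame itself never contains two identifiers for the same target at full opacity.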
According to the embodiment of the disclosure, after the driving mode is switched, a linking transition of the target information in the form of an icon-switching animation is realized through perspective transformation, so that the same target information remains continuous across different driving modes and the stability of the virtual image picture projected by the HUD is improved.
Fig. 14 is a schematic diagram of a combined display of the vehicle intelligent driving mode and the intersection navigation mode. As shown in Fig. 14, in the virtual image projected by the HUD, the current two-dimensional map has a large scale and a small display area, so the intersection ahead cannot be displayed, yet the vehicle model has already been switched to the AR arrow; therefore a navigation arrow and a vehicle model, or a navigation arrow and an AR arrow, can be displayed simultaneously in the virtual image projected by the HUD, that is, the continuity of the information is improved by combining two display identifiers. A single arrow is commonly used to reduce repetition of the same information, for example by merging the TBT navigation arrow with the AR arrow representing the steering guidance at the intersection.
Fig. 15 is a schematic structural diagram of a picture display device according to an embodiment of the disclosure. The picture display device may be the terminal described in the above embodiments, or may be a part or an assembly of the terminal. The picture display device provided in the embodiment of the present disclosure may execute the processing flow provided in the embodiment of the picture display method. As shown in Fig. 15, the picture display device 1500 includes a projection module 151 and a switching module 152. The projection module 151 is configured such that, in the current driving mode, the virtual image picture projected by the head-up display device comprises target information, the target information being presented as a first display identifier; the switching module 152 is configured to switch the target information from the first display identifier to the second display identifier when the current driving mode is switched to the target driving mode.
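As an illustrative sketch only, the two-module structure of Fig. 15 might be modelled as a small class; all method and attribute names here are hypothetical and do not correspond to any implementation disclosed in the embodiment:

```python
class PictureDisplayDevice:
    """Sketch of the two-module picture display device: a projection
    module that presents target information as a first display identifier,
    and a switching module that replaces it on a driving-mode switch."""

    def __init__(self):
        self.identifier = None

    def project(self, target_info: str, first_identifier: str) -> dict:
        # Projection module (151): present the target information
        # as its first display identifier in the virtual image picture.
        self.identifier = first_identifier
        return {"target": target_info, "identifier": self.identifier}

    def switch(self, second_identifier: str) -> str:
        # Switching module (152): swap the target information to its
        # second display identifier when the driving mode changes.
        self.identifier = second_identifier
        return self.identifier
```

In practice the switch would drive the cross-fade and fly-out/fly-in animations described earlier rather than replacing the identifier instantaneously.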
Optionally, the projection module 151 is further configured to, in the current driving mode, further include resident information in a virtual image frame projected by the head-up display device; when the current driving mode is switched to the target driving mode, the display of the resident information in the virtual image picture is unchanged.
Optionally, the resident information includes at least one of: current speed information of the vehicle, speed limit information of the road on which the vehicle is currently travelling, turn signal information of the vehicle, warning information about vehicle faults, and warning light information indicating non-conformance with the vehicle's driving specifications.
Optionally, the target information includes at least one of: vehicle information, steering information, road information; the switching module 152 is further configured to perform perspective transformation on the first display identifier of the target information to obtain a second display identifier of the target information, so that the first display identifier is smoothly connected to the second display identifier.
Optionally, the same target information has at least two user interface display identifiers; the display of the user interface display identifier is determined by a driving mode; any two of the at least two user interface display identifiers differ in a property, the property comprising at least one of shape, color, and number.
Optionally, when the target information is vehicle information, the user interface display identifiers of the vehicle information at least comprise a navigation arrow, a dynamic icon and a vehicle model; the driving modes at least comprise a cruise mode, an intersection navigation mode and an intelligent driving mode. The display of the user interface display identifier being determined by the driving mode comprises: in the cruise mode, the user interface display identifier is the navigation arrow; in the intersection navigation mode, the user interface display identifier is the dynamic icon; in the intelligent driving mode, the user interface display identifier is the vehicle model. Any two of the navigation arrow, the dynamic icon and the vehicle model differ in an attribute, the attribute comprising at least one of shape, color and number.
Optionally, the switching module 152 is further configured to, when the current driving mode is a cruise mode, the target driving mode is an intersection navigation mode, and the second display identifier is a dynamic icon when switching from the cruise mode to the intersection navigation mode; performing perspective transformation on the navigation arrow to obtain a target arrow displayed in an augmented reality mode; and dynamically processing the target arrow to obtain a dynamic icon so that the target information is smoothly connected into the dynamic icon by the navigation arrow.
Optionally, the switching module 152 is further configured to, when the current driving mode is an intersection navigation mode, the target driving mode is an intelligent driving mode, and the second display identifier is a vehicle model when switching from the intersection navigation mode to the intelligent driving mode; and performing perspective transformation on the dynamic icons to obtain a vehicle model so that the target information is smoothly linked into the vehicle model by the dynamic icons.
Optionally, in a case where the head-up display device projects at least two virtual images, the at least two virtual images include a first virtual image and a second virtual image, and an imaging distance of the first virtual image is smaller than an imaging distance of the second virtual image; the switching module 152 is further configured to determine a first position of a two-dimensional image corresponding to a target arrow in the dynamic icon in the first virtual image; determining a second position of a target arrow in the dynamic icon in the second virtual image according to the first position; and smoothly linking the dynamic icons to the vehicle model at the second position.
Optionally, the switching module 152 is further configured to, when the current driving mode is an intelligent driving mode, the target driving mode is a cruise mode, and the second display identifier is a navigation arrow when switching from the intelligent driving mode to the cruise mode; and performing perspective transformation on the vehicle model to obtain the navigation arrow, so that the target information is smoothly linked into the navigation arrow by the vehicle model.
The picture display device of the embodiment shown in Fig. 15 may be used to implement the technical solution of the above picture display method embodiments; its implementation principle and technical effects are similar and will not be repeated here.
Fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. The electronic device may be the terminal described in the above embodiments, and may execute the processing flow provided in the picture display method embodiments. As shown in Fig. 16, the electronic device 160 includes: a memory 161, a processor 162, a computer program, and a communication interface 163; the computer program is stored in the memory 161 and configured to be executed by the processor 162 to perform the picture display method described above.
In addition, an embodiment of the present disclosure further provides a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the picture display method described in the above embodiments.
Furthermore, an embodiment of the present disclosure further provides a computer program product comprising a computer program or instructions which, when executed by a processor, implement the picture display method described above.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
in a current driving mode, project, by the head-up display device, a virtual image picture comprising target information, wherein the target information is presented as a first display identifier;
and when the current driving mode is switched to a target driving mode, switch the target information from the first display identifier to a second display identifier.
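The two steps carried by the programs can be sketched as follows; the mode names, the mapping table, and the class are hypothetical illustrations consistent with the embodiments above, not the disclosure's own implementation:

```python
# Hypothetical sketch: the display identifier of the target information
# follows the driving mode, and switching modes switches the identifier.

MODE_IDENTIFIER = {
    "cruise": "navigation_arrow",
    "intersection_navigation": "dynamic_icon",
    "intelligent_driving": "vehicle_model",
}

class TargetInfo:
    def __init__(self, mode):
        # Step 1: in the current driving mode, present the target
        # information as the first display identifier.
        self.identifier = MODE_IDENTIFIER[mode]

    def switch_mode(self, target_mode):
        # Step 2: on switching to the target driving mode, switch from
        # the first display identifier to the second display identifier.
        self.identifier = MODE_IDENTIFIER[target_mode]

info = TargetInfo("cruise")                   # presented as navigation_arrow
info.switch_mode("intersection_navigation")   # switched to dynamic_icon
```

A real implementation would interpose the smooth perspective/dynamic transition described in the embodiments rather than swapping identifiers instantaneously.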
In addition, the electronic device may also perform other steps in the screen display method as described above.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.