Disclosure of Invention
An object of the application is to provide a vehicle, an in-vehicle device and a virtual scene control method based on an actual scene thereof, which can virtually display the actual scene outside the vehicle inside the vehicle, so that a user can immerse himself in the external environment, enjoy the pleasure of an outing and avoid unpleasant experiences, bringing the user a completely different intelligent vehicle experience.
In order to solve the above technical problem, the present application provides a virtual scene control method based on an actual scene, and as an implementation manner, the virtual scene control method based on the actual scene includes:
acquiring, by the vehicle-mounted device, an actual scene outside a vehicle;
obtaining scene elements and element characteristic information contained in the actual scene;
performing virtualization processing on the scene elements and the element characteristic information to obtain a virtual scene; and
presenting the virtual scene inside the vehicle by way of arrangement, playing and/or projection, so that a user can experience the actual scene outside the vehicle through the virtual scene inside the vehicle.
As one embodiment, after the step of representing the virtual scene in the vehicle interior by way of arranging, playing and/or projecting, the method further includes:
when a control operation performed by a user on the current virtual scene is received, determining, by the vehicle-mounted device, the operation type of the control operation;
and adjusting the color, elements and/or brightness of the virtual scene according to the operation type.
As one embodiment, after the step of representing the virtual scene in the vehicle interior by way of arranging, playing and/or projecting, the method further includes:
acquiring, by the vehicle-mounted device, experience feedback information of the user on the virtual scene;
analyzing the experience feedback information to obtain the user's preferences; and
automatically and intelligently adjusting the virtual scene according to the user's preferences.
As one embodiment, the scene elements include natural scenery, weather, folk customs, famous figures and/or local specialties, and the element characteristic information includes the mountains, rivers, sun and moon of the natural scenery, the snow, rain and sunshine of the weather, the celebrations and festivals of the folk customs, the humanistic sentiments associated with the famous figures, and/or the delicacies of the local specialties.
As one embodiment, the vehicle-mounted device acquires the actual scene outside the vehicle through a vehicle-mounted camera, a GPS positioning module and/or a user's smart wearable device connected with the vehicle-mounted device.
As one embodiment, the step of embodying the virtual scene in the vehicle interior by way of arrangement, playing and/or projection specifically includes:
arranging the virtual scene according to the orientation corresponding to the actual scene and statically displaying the virtual scene at different orientations of the vehicle;
statically and/or dynamically playing the virtual scene at different orientations of the vehicle through a display device according to the orientation corresponding to the actual scene;
and/or projecting the virtual scene at different orientations of the vehicle in a projection manner according to the orientation corresponding to the actual scene.
As one embodiment, after the step of representing the virtual scene in the vehicle interior by way of arranging, playing and/or projecting, the method further includes:
acquiring a virtual element of the virtual scene;
setting a corresponding specific function according to the virtual feature of the virtual element;
and assigning the specific function to the virtual element and generating a function icon for the user to select, click and/or rotate.
The specific functions include navigation, telephone, multimedia playback, radio, vehicle services and/or weather forecast.
In order to solve the technical problem, the present application further provides a car-mounted device, as one of the implementation manners, the car-mounted device includes a memory and a processor, the memory stores a computer program, and the processor is configured to execute the computer program, so as to implement the virtual scene control method based on the actual scene.
In order to solve the technical problem, the present application further provides a vehicle, as one embodiment, the vehicle is configured with the in-vehicle device as described above.
According to the vehicle, the vehicle-mounted device and the virtual scene control method based on the actual scene, the vehicle-mounted device acquires the actual scene outside the vehicle, acquires the scene elements and the element characteristic information contained in the actual scene, and virtualizes the scene elements and the element characteristic information to obtain the virtual scene, and the virtual scene is presented inside the vehicle by way of arrangement, playing and/or projection, so that a user can experience the actual scene outside the vehicle through the virtual scene inside the vehicle. The application can virtually display the external actual scene inside the vehicle, so that the user can immerse himself in the external environment, enjoy the pleasure of an outing and avoid unpleasant experiences, bringing the user a completely different intelligent vehicle experience.
The foregoing description is only an overview of the technical solutions of the present application, and in order to make the technical means of the present application more clearly understood, the present application may be implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the present application more clearly understood, the following preferred embodiments are described in detail with reference to the accompanying drawings.
Detailed Description
To further illustrate the technical means and effects of the present application for achieving the intended application purpose, the following detailed description is provided with reference to the accompanying drawings and preferred embodiments for specific embodiments, methods, steps, features and effects of the vehicle, the in-vehicle device and the virtual scene control method based on the actual scene according to the present application.
The foregoing and other technical matters, features and effects of the present application will be apparent from the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings. While the present application has been described in terms of specific embodiments and examples for achieving the desired objects and objectives, it is to be understood that the invention is not limited to the disclosed embodiments, but is to be accorded the widest scope consistent with the principles and novel features as defined by the appended claims.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a virtual scene control method based on an actual scene according to the present application.
The virtual scene control method based on the actual scene according to the present embodiment may include, but is not limited to, the following steps.
Step S101, the vehicle-mounted device acquires an actual scene outside the vehicle;
step S102, scene elements and element characteristic information contained in the actual scene are obtained;
step S103, performing virtualization processing on the scene elements and the element characteristic information to obtain a virtual scene;
and step S104, the virtual scene is embodied inside the vehicle in a layout, playing and/or projection mode, so that a user can experience an actual scene outside the vehicle through the virtual scene inside the vehicle.
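For ease of understanding, a simplified sketch of the above flow is given below in Python; the class, function and data names are illustrative assumptions only and do not represent the actual in-vehicle implementation.

```python
from dataclasses import dataclass, field

# Minimal sketch of steps S101-S104 (all names are illustrative assumptions).

@dataclass
class SceneElement:
    category: str                                   # e.g. "natural_scenery", "weather"
    features: list = field(default_factory=list)    # e.g. ["mountain", "river"]

@dataclass
class VirtualScene:
    elements: list          # virtualized SceneElement objects
    orientation_map: dict   # element category -> vehicle orientation ("front", "left", ...)

def acquire_actual_scene():
    # Step S101: in a real system this would read cameras, GPS and/or wearables.
    return {"position": (22.79, 113.61), "raw_images": []}

def extract_elements(actual_scene):
    # Step S102: derive scene elements and their characteristic information.
    return [SceneElement("natural_scenery", ["bridge", "river"])]

def virtualize(elements):
    # Step S103: map real elements to renderable virtual assets.
    return VirtualScene(elements=elements,
                        orientation_map={e.category: "front" for e in elements})

def present(scene):
    # Step S104: arrange, play and/or project inside the vehicle.
    for category, orientation in scene.orientation_map.items():
        print(f"render {category} on the {orientation} surface")

present(virtualize(extract_elements(acquire_actual_scene())))
```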
It should be noted that, after the step of embodying the virtual scene in the vehicle interior by way of arrangement, playing and/or projection, the method may further include:
step 201, when receiving a control operation performed by a user on the current virtual scene, the vehicle-mounted device determines the operation type of the control operation;
and step 202, adjusting the color, elements and/or brightness of the virtual scene according to the operation type.
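As an illustration only, the dispatch on the operation type may be sketched as follows; the operation names and fields are assumptions for this example.

```python
# Sketch of steps 201-202: dispatch on the operation type and adjust the
# virtual scene accordingly (operation names and fields are assumptions).

def handle_control_operation(scene_state, operation):
    op_type = operation.get("type")
    if op_type == "adjust_brightness":
        # e.g. a left/right finger swipe mapped to a brightness delta
        scene_state["brightness"] = min(1.0, max(0.0,
            scene_state.get("brightness", 0.5) + operation.get("delta", 0.0)))
    elif op_type == "change_color":
        scene_state["color_theme"] = operation.get("theme", "default")
    elif op_type == "toggle_element":
        elements = scene_state.setdefault("elements", set())
        elements.symmetric_difference_update({operation.get("element")})
    return scene_state

state = {"brightness": 0.5, "color_theme": "day", "elements": {"sky"}}
state = handle_control_operation(state, {"type": "adjust_brightness", "delta": 0.2})
```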
It should be noted that, after the step of embodying the virtual scene in the vehicle interior by way of arrangement, playing and/or projection, the present embodiment may further include the following different intelligent control manners:
step 301, the in-vehicle device acquires experience feedback information of a user on the virtual scene;
step 302, analyzing the experience feedback information to obtain the user's preferences;
and 303, automatically and intelligently adjusting the virtual scene according to the user's preferences.
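A minimal sketch of the feedback-to-preference adjustment is given below, assuming a naive counting heuristic; the actual analysis used by the vehicle-mounted device may differ.

```python
# Sketch of steps 301-303: derive preferences from experience feedback and
# adjust the scene automatically (a simple counting heuristic, for illustration).

from collections import Counter

def learn_preferences(feedback_records):
    # feedback_records: e.g. [{"element": "sunset", "rating": 5}, ...]
    score = Counter()
    for record in feedback_records:
        score[record["element"]] += record["rating"] - 3   # >3 liked, <3 disliked
    return [element for element, s in score.most_common() if s > 0]

def auto_adjust(scene_elements, preferred):
    # Promote preferred elements to the front of the rendering order.
    return sorted(scene_elements, key=lambda e: 0 if e in preferred else 1)

preferred = learn_preferences([{"element": "sunset", "rating": 5},
                               {"element": "rain", "rating": 2}])
print(auto_adjust(["rain", "sunset", "mountain"], preferred))
```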
In a specific embodiment, the scene elements include natural scenery, weather, folk customs, famous figures and/or local specialties, and the element characteristic information includes the mountains, rivers, sun and moon of the natural scenery, the snow, rain and sunshine of the weather, the celebrations and festivals of the folk customs, the humanistic sentiments associated with the famous figures, and/or the delicacies of the local specialties.
It is worth mentioning that the vehicle-mounted device acquires the actual scene outside the vehicle through a vehicle-mounted camera, a GPS positioning module and/or a user's smart wearable device connected with the vehicle-mounted device. The vehicle-mounted camera may be an AI (artificial intelligence) camera, and the user's smart wearable device may be glasses, a helmet, a watch or the like equipped with a camera.
It is easy to understand that the vehicle can be located through the GPS module, and the scene elements of the current position can then be obtained. For example, if the vehicle is located on the Humen Bridge, the scene elements include the bridge and the river water.
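By way of illustration, the position-to-element lookup may be sketched as follows; the coordinates and the lookup table are assumptions for this example.

```python
# Sketch of the GPS-based element lookup mentioned above; the coordinates and
# the lookup table are illustrative assumptions only.

LANDMARK_ELEMENTS = {
    # (approx. latitude, longitude): scene elements
    (22.79, 113.61): ["bridge", "river water"],     # e.g. Humen Bridge area
}

def elements_for_position(lat, lon, radius_deg=0.05):
    for (ref_lat, ref_lon), elements in LANDMARK_ELEMENTS.items():
        if abs(lat - ref_lat) <= radius_deg and abs(lon - ref_lon) <= radius_deg:
            return elements
    return ["road", "sky"]      # default elements when no landmark matches

print(elements_for_position(22.80, 113.60))   # -> ['bridge', 'river water']
```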
It should be particularly noted that, in order to further improve the user experience, the specific orientation of the actual scene may be simulated inside the vehicle for layout and display, and the step of presenting the virtual scene inside the vehicle by way of arrangement, playing and/or projection specifically includes:
in the first mode, the virtual scene is arranged according to the orientation corresponding to the actual scene and statically displayed at different orientations of the vehicle;
in the second mode, the virtual scene is statically and/or dynamically played at different orientations of the vehicle through a display device according to the orientation corresponding to the actual scene;
and/or, in the third mode, the virtual scene is projected at different orientations of the vehicle in a projection manner according to the orientation corresponding to the actual scene.
It is easy to understand that, besides playing pictures and videos or projecting, audio can also be played; in addition, stereoscopic display can be performed by means of holographic imaging.
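A minimal sketch of the three presentation modes is given below; the orientation names and output calls stand in for the actual display and projection devices and are assumptions only.

```python
# Sketch of the three presentation modes: map each part of the actual scene to
# the matching orientation inside the vehicle (device calls are stand-ins).

ORIENTATIONS = ("front", "rear", "left", "right", "roof")

def present_by_orientation(scene_parts, mode="display"):
    # scene_parts: {"front": "mountain", "roof": "sky", ...}
    for orientation, content in scene_parts.items():
        if orientation not in ORIENTATIONS:
            continue
        if mode == "static":        # mode 1: static arrangement/decoration
            print(f"arrange static image of {content} on the {orientation}")
        elif mode == "display":     # mode 2: static/dynamic playback on displays
            print(f"play {content} on the {orientation} display")
        elif mode == "projection":  # mode 3: projection
            print(f"project {content} onto the {orientation} surface")

present_by_orientation({"roof": "sky", "left": "river"}, mode="projection")
```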
In this embodiment, after the step of embodying the virtual scene in the vehicle interior by way of arranging, playing and/or projecting, the application may further include:
step 401, acquiring a virtual element of the virtual scene;
step 402, setting a corresponding specific function according to the virtual feature of the virtual element;
and step 403, assigning the specific function to the virtual element and generating a function icon for the user to select, click and/or rotate.
In particular, the specific functions described in this embodiment include navigation, telephone, multimedia playback, radio, vehicle services and/or weather forecast.
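For illustration, the binding of specific functions to virtual elements may be sketched as follows; the element-to-function pairings are example assumptions.

```python
# Sketch of steps 401-403: bind a specific function to a virtual element and
# expose it as a selectable icon (the element/function pairing is an example).

FUNCTION_BINDINGS = {
    # virtual feature -> specific function
    "sun":   "weather_forecast",
    "road":  "navigation",
    "bird":  "multimedia_play",
    "tower": "radio",
}

def make_function_icons(virtual_elements):
    icons = []
    for element in virtual_elements:
        function = FUNCTION_BINDINGS.get(element["feature"])
        if function:
            icons.append({"element": element["feature"],
                          "function": function,
                          "actions": ["select", "click", "rotate"]})
    return icons

print(make_function_icons([{"feature": "sun"}, {"feature": "road"}]))
```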
The application can virtually display the external actual scene inside the vehicle, so that the user can immerse himself in the external environment, enjoy the pleasure of an outing and avoid unpleasant experiences, bringing the user a completely different intelligent vehicle experience.
Referring to fig. 2, the present application further provides a car-mounted device. As an implementation manner, the car-mounted device includes a memory 20 and a processor 21, where the memory 20 stores a computer program, and the processor 21 is configured to execute the computer program so as to implement the steps of the virtual scene control method based on an actual scene as shown in fig. 1 and the implementation manners thereof.
Specifically, the processor 21 is configured to obtain an actual scene outside the vehicle;
the processor 21 is configured to obtain scene elements and element feature information included in the actual scene;
the processor 21 is configured to perform virtualization processing on the scene elements and the element feature information to obtain a virtual scene;
the processor 21 is configured to present the virtual scene inside the vehicle by arranging, playing and/or projecting, so that the user experiences the actual scene outside the vehicle through the virtual scene inside the vehicle.
It should be noted that, in this embodiment, the processor 21 is further configured to, when receiving a control operation performed on the current virtual scene by a user, determine an operation type of the control operation, and adjust a color, an element, and/or a brightness of the virtual scene according to the operation type.
It should be noted that, in this embodiment, the processor 21 is further configured to obtain experience feedback information of the user on the virtual scene; the processor 21 is configured to analyze the experience feedback information to obtain the user's preferences; and the processor 21 is configured to automatically and intelligently adjust the virtual scene according to the user's preferences.
In a specific embodiment, the scene elements include natural scenery, weather, folk customs, famous figures and/or local specialties, and the element characteristic information includes the mountains, rivers, sun and moon of the natural scenery, the snow, rain and sunshine of the weather, the celebrations and festivals of the folk customs, the humanistic sentiments associated with the famous figures, and/or the delicacies of the local specialties.
It is worth mentioning that the vehicle-mounted device acquires the actual scene outside the vehicle through a vehicle-mounted camera, a GPS positioning module and/or a user's smart wearable device connected with the vehicle-mounted device. The vehicle-mounted camera may be an AI (artificial intelligence) camera, and the user's smart wearable device may be glasses, a helmet, a watch or the like equipped with a camera.
It is easy to understand that the vehicle can be located through the GPS module, and the scene elements of the current position can then be obtained. For example, if the vehicle is located on the Humen Bridge, the scene elements include the bridge and the river water.
It should be particularly noted that, in order to further improve the user experience, the layout and display may simulate the specific orientation of the actual scene inside the vehicle, which specifically includes:
in the first mode, the processor 21 is configured to arrange the virtual scene according to the orientation corresponding to the actual scene and statically display the virtual scene at different orientations of the vehicle;
in the second mode, the processor 21 is configured to statically and/or dynamically play the virtual scene at different orientations of the vehicle through the display device according to the orientation corresponding to the actual scene;
and/or, in the third mode, the processor 21 is configured to project the virtual scene at different orientations of the vehicle in a projection manner according to the orientation corresponding to the actual scene.
It is easy to understand that, besides playing pictures and videos or projecting, audio can also be played; in addition, stereoscopic display can be performed by means of holographic imaging.
In this embodiment, the processor 21 is further configured to obtain a virtual element of the virtual scene; the processor 21 is configured to set a specific function corresponding to the virtual element according to the virtual feature of the virtual element; the processor 21 is configured to assign the specific function to the virtual element and generate a function icon for a user to select, click and/or rotate.
In particular, the specific functions described in this embodiment include navigation, telephone, multimedia playback, radio, vehicle services and/or weather forecast.
The application can virtually display the external actual scene inside the vehicle, so that the user can immerse himself in the external environment, enjoy the pleasure of an outing and avoid unpleasant experiences, bringing the user a completely different intelligent vehicle experience.
With reference to fig. 2, the present application further provides a vehicle. As one embodiment, the vehicle is configured with the vehicle-mounted device as shown in fig. 2 and the embodiments thereof.
The present application will be exemplified below in connection with panoramic images.
Specifically, a panoramic camera is used to collect a panoramic image of the exterior of the vehicle; a three-dimensional virtual model is established through computer graphics and a three-dimensional modeling method; physical image information in the panoramic image is fused into the three-dimensional virtual model to realize virtual-real fusion and obtain a virtually fused scene; an interaction mode between the user and the virtually fused scene is established, and a virtual scene experience interface is built; and the virtual scene experience interface is experienced through an immersive display device.
In the present embodiment, the three-dimensional modeling method is a hybrid modeling technique based on graphics rendering and images. For example, stereo matching may be performed according to image content shared between different images to obtain, as an input, an environment image with depth information; back projection and interpolation operations are then performed on the image through image processing techniques to draw the virtual environment scene, which is modeled and rendered using computer graphics and image rendering techniques.
The three-dimensional virtual model according to the present embodiment is at least one of a three-dimensional building model, a three-dimensional map model, and a three-dimensional route model.
In this embodiment, the mapping relationship between the panoramic image and the three-dimensional model can be extracted at the underlying level through a panoramic image registration technique; a multi-resolution, multi-mapping fusion process is then established between the three-dimensional model and the panoramic image; and finally the fusion mode is selected according to the user's interaction and observation viewpoint.
It is worth mentioning that, in the embodiment, an interaction mode between the user and the virtual fusion scene may be established, specifically, interaction is realized through voice recognition and/or gesture recognition. For example, the interaction between the user and the virtual fusion scene may be established by at least one of voice recognition, gesture recognition, a mouse, and a keyboard.
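A high-level sketch of the panoramic virtual-real fusion pipeline is given below; each stage is reduced to a placeholder so that only the control flow is shown, and the function names are assumptions rather than the actual implementation.

```python
# High-level sketch of the panoramic virtual-real fusion pipeline described
# above; each stage is a placeholder so the control flow is visible.

def capture_panorama(camera):
    return {"pixels": camera.get("frames", []), "fov_deg": 360}

def build_3d_model(panorama):
    # Hybrid graphics/image-based modelling (stereo matching, back projection,
    # interpolation) would happen here; we only return a stub model.
    return {"meshes": [], "source": panorama}

def fuse(panorama, model, viewpoint="driver"):
    # Register the panorama against the model, then blend per the viewpoint.
    return {"model": model, "texture": panorama, "viewpoint": viewpoint}

def build_experience_interface(fused_scene, interactions=("voice", "gesture")):
    return {"scene": fused_scene, "interactions": list(interactions)}

interface = build_experience_interface(
    fuse(capture_panorama({"frames": []}), build_3d_model({})))
print(interface["interactions"])
```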
It should be noted that, in this embodiment, a 5G communication network technology may also be used to establish a network connection with other vehicles or vehicle-mounted devices, for example an Internet of Vehicles network. 5G is a scenario-oriented technology; the present application uses 5G to provide key support for vehicles (especially intelligent connected vehicles) while connecting people, objects and vehicles, and may specifically adopt the following three typical application scenarios.
The first is eMBB (enhanced Mobile Broadband), which enables a user-experienced data rate of 0.1-1 Gbps, a peak rate of 10 Gbps, and a traffic density of 10 Tbps/km²;
the second is ultra-reliable low-latency communication, whose main achievable indexes are an end-to-end latency at the millisecond (ms) level and a reliability close to 100%;
the third is mMTC (massive machine-type communication), whose main achievable index is the connection density: one million terminals can be connected per square kilometer, i.e. a connection density of 10^6/km².
In this way, the present application utilizes the ultra-reliable, low-latency characteristics of the 5G technology, combined with, for example, radar and cameras, to provide display capability for the vehicle and enable interaction with the vehicle. Meanwhile, by using the interactive sensing function of the 5G technology, the user can output to the external environment, detect states that are otherwise difficult to perceive, and receive feedback. Further, the present application can also be applied to cooperative autonomous driving, such as cooperative collision avoidance and platooning between vehicles, so as to coordinate vehicle speeds as a whole and improve traffic efficiency.
In addition, the 5G technology can be used to enhance the perception capability of autonomous driving through communication, and can satisfy passengers' in-vehicle infotainment demands such as AR (augmented reality)/VR (virtual reality), games, movies and mobile office, as well as high-precision requirements. In the present application, the download volume of a centimeter-level 3D high-precision positioning map can be 3-4 Gb/km; with a normal vehicle speed limited to 120 km/h (kilometers per hour), the map data rate is 90 Mbps-120 Mbps; meanwhile, real-time reconstruction of a local map fused with on-board sensor information, as well as modeling and analysis of dangerous situations, can also be supported.
It should be noted that the present application can also be applied at the autonomous driving level: the 5G technology can assist in realizing partial intelligent cloud control of vehicles on fixed urban routes, and can realize cloud-based operation optimization and, under specific conditions, remote display and control of unmanned vehicles in campuses and ports.
In the present application, the above system and method can be used in a vehicle equipped with a vehicle T-BOX; that is, the vehicle can be provided with a vehicle T-BOX, which can further be connected to the CAN bus of the vehicle.
In this embodiment, the CAN bus may include three network channels CAN_1, CAN_2 and CAN_3, and the vehicle may further include one Ethernet network channel. The three CAN network channels may be connected to the Ethernet network channel through two in-vehicle networking gateways. For example, the CAN_1 network channel carries a hybrid powertrain system, the CAN_2 network channel carries an operation support system, the CAN_3 network channel carries an electric dynamometer system, and the Ethernet network channel carries a high-level management system, where the high-level management system includes a human-vehicle-road simulation system and a comprehensive information collection unit connected as nodes to the Ethernet network channel. The networking gateway between the CAN_1 network channel, the CAN_2 network channel and the Ethernet network channel may be integrated in the comprehensive information collection unit, and the networking gateway between the CAN_3 network channel and the Ethernet network channel may be integrated in the human-vehicle-road simulation system.
Further, the nodes connected to the CAN_1 network channel include the engine ECU, the motor MCU, the battery BMS, the automatic transmission TCU and the hybrid power controller HCU of the hybrid power system. The nodes connected to the CAN_2 network channel are: the bench measurement and control system, the accelerator sensor group, the power analyzer, the instantaneous fuel consumption meter, the DC power supply cabinet, the engine water temperature control system, the engine oil temperature control system, the motor water temperature control system and the engine intercooling temperature control system. The node connected to the CAN_3 network channel is the electric dynamometer controller.
Preferably, the rate of the CAN_1 network channel is 250 kbps, using the J1939 protocol; the rate of the CAN_2 network channel is 500 kbps, using the CANopen protocol; the rate of the CAN_3 network channel is 1 Mbps, using the CANopen protocol; and the rate of the Ethernet network channel is 10/100 Mbps, using the TCP/IP protocol.
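For reference, the channel configuration of this embodiment may be expressed as a plain configuration table as sketched below; the node lists are abbreviated, and the helper function is illustrative only.

```python
# Sketch of the network-channel configuration described above, expressed as a
# plain configuration table (values taken from this embodiment; node lists abbreviated).

NETWORK_CHANNELS = {
    "CAN_1":    {"bitrate": 250_000,     "protocol": "J1939",
                 "nodes": ["ECU", "MCU", "BMS", "TCU", "HCU"]},
    "CAN_2":    {"bitrate": 500_000,     "protocol": "CANopen",
                 "nodes": ["bench measurement and control", "power analyzer"]},
    "CAN_3":    {"bitrate": 1_000_000,   "protocol": "CANopen",
                 "nodes": ["electric dynamometer controller"]},
    "Ethernet": {"bitrate": 100_000_000, "protocol": "TCP/IP",
                 "nodes": ["human-vehicle-road simulation",
                           "comprehensive information collection"]},
}

def channel_for_node(node_name):
    # Helper: find which channel a node hangs on (illustrative only).
    for channel, cfg in NETWORK_CHANNELS.items():
        if node_name in cfg["nodes"]:
            return channel
    return None

print(channel_for_node("BMS"))   # -> CAN_1
```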
In this embodiment, the vehicle networking gateway supports a 5G-based V2X Internet of Vehicles network, and may also be equipped with an IEEE 802.3 interface, a DSPI interface, an eSCI interface, a CAN interface, an MLB interface, a LIN interface and/or an I2C interface.
In this embodiment, for example, the IEEE 802.3 interface may be used to connect a wireless router to provide a WIFI network for the entire vehicle; the DSPI interface is used to connect a Bluetooth adapter and an NFC (near field communication) adapter and can provide Bluetooth and NFC connections; the eSCI interface is used to connect a 4G/5G module for communicating with the Internet; the CAN interface is used to connect the vehicle CAN bus; the MLB interface is used to connect the in-vehicle MOST (Media Oriented Systems Transport) bus; the LIN interface is used to connect the in-vehicle LIN (local interconnect network) bus; and the I2C interface is used to connect a DSRC (dedicated short-range communication) module and a fingerprint identification module. In addition, the present application may adopt an MPC5668G chip to convert between different protocols and thereby merge different networks.
In addition, the vehicle T-BOX (Telematics BOX) system of the present embodiment is referred to simply as the vehicle T-BOX or the telematics unit.
Telematics is a compound of telecommunications and informatics, and is defined as a service system that provides information through an in-vehicle computer system, wireless communication technology, a satellite navigation device, and Internet technology for exchanging information such as text and voice. In short, the vehicle is connected to the Internet (Internet of Vehicles) through a wireless network to provide the vehicle owner with the various kinds of information needed for driving and daily life.
In addition, Telematics is a combination of wireless communication technology, satellite navigation systems, network communication technology and on-board computers. When a fault occurs while the vehicle is running, the vehicle can be remotely diagnosed by connecting to a service center through wireless communication, and the computer built into the engine can record the states of the main vehicle components and provide maintenance personnel with an accurate fault location and cause at any time. Through the user communication terminal, the vehicle can receive information and consult traffic maps, road condition reports, traffic information, safety and security services, entertainment information services and the like; in addition, the vehicle of this embodiment can provide electronic games and network applications for the rear seats. It is easy to understand that, by providing services through Telematics, this embodiment allows the user to conveniently obtain traffic information and the parking-space availability of nearby parking lots, confirm the current position, and connect to a home network server to learn in time about the operating state of household appliances, home security conditions, visitors, and so on.
The vehicle according to this embodiment may further include an Advanced Driver Assistance System (ADAS), which uses the various sensors mounted on the vehicle to collect environmental data inside and outside the vehicle at the earliest possible moment, and performs technical processing such as identification, detection and tracking of static and dynamic objects, so that the driver can perceive possible dangers as quickly as possible, thereby drawing attention and improving safety. Correspondingly, the ADAS of the present application may also employ sensors such as radar, laser and ultrasonic sensors, which can detect light, heat, pressure or other variables used to monitor the state of the vehicle, and which are usually located on the front and rear bumpers, the side mirrors, the inside of the steering column or on the windshield of the vehicle. Obviously, the various intelligent hardware used by the ADAS function can access the V2X Internet of Vehicles network via an Ethernet link to implement communication connection and interaction.
The host of the vehicle of the present embodiment may comprise suitable logic, circuitry and/or code enabling operation of the upper five layers of the OSI model (Open Systems Interconnection reference model). Thus, the host may generate and/or process packets for transmission over the network, and may also process packets received from the network. At the same time, the host may provide services to a local user and/or one or more remote users or network nodes by executing corresponding instructions and/or running one or more applications. In various embodiments of the present application, the host may employ one or more security protocols.
In the present application, the network connection used to implement the V2X Internet of Vehicles network may be a switch, which may have AVB functionality (Audio Video Bridging, meeting the IEEE 802.1 set of standards) and/or include one or more unshielded twisted pairs, each of which may have an 8P8C modular connector.
In a preferred embodiment, the V2X vehicle networking network specifically comprises a vehicle body control module BCM, a power bus P-CAN, a vehicle body bus I-CAN, a combination instrument CMIC, a chassis control device and a vehicle body control device.
In this embodiment, the body control module BCM may integrate the functions of the vehicle networking gateway to perform signal conversion, message forwarding and the like between different network segments, i.e., between the power bus P-CAN and the body bus I-CAN. For example, if a controller connected to the power bus P-CAN needs to communicate with a controller connected to the body bus I-CAN, the body control module BCM performs signal conversion and message forwarding between the two controllers.
The power bus P-CAN and the vehicle body bus I-CAN are respectively connected with a vehicle body control module BCM.
The combination meter CMIC is connected to the power bus P-CAN, and the combination meter CMIC is also connected to the body bus I-CAN. Preferably, the combination meter CMIC of this embodiment is connected to different buses, such as the power bus P-CAN and the body bus I-CAN; when the combination meter CMIC needs to acquire information from a controller hung on either bus, no signal conversion and message forwarding through the body control module BCM is required, which reduces gateway pressure, reduces network load, and increases the speed at which the combination meter CMIC acquires information.
The chassis control device is connected with the power bus P-CAN, and the body control device is connected with the body bus I-CAN. In some examples, the chassis control device and the body control device can respectively broadcast information and other data onto the power bus P-CAN and the body bus I-CAN, so that other on-board controllers and devices hung on the power bus P-CAN or the body bus I-CAN can acquire the broadcast information, thereby realizing communication between on-board devices such as different controllers.
In addition, the V2X Internet of Vehicles network of the vehicle of this embodiment may use two CAN buses, namely the power bus P-CAN and the body bus I-CAN, with the body control module BCM serving as the gateway and the combination meter CMIC connected to both the power bus P-CAN and the body bus I-CAN. With this structure, the conventional operation in which information from the chassis control device or the body control device is forwarded to the combination meter CMIC through the gateway (when the CMIC is hung on only one of the two buses) can be omitted, thereby reducing the pressure on the body control module BCM acting as gateway, reducing network load, and allowing information from on-board devices hung on multiple buses, e.g. the power bus P-CAN and the body bus I-CAN, to be sent to the combination meter CMIC for display more conveniently and with better real-time performance.
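A minimal sketch of this topology is given below: the combination meter CMIC reads both buses directly, while the body control module BCM forwards frames between segments; the frame contents are illustrative assumptions.

```python
# Sketch of the BCM-as-gateway topology: the combination meter (CMIC) sits on
# both buses and reads either directly, while other controllers rely on the
# BCM to forward frames across segments (frame contents are assumptions).

class Bus:
    def __init__(self, name):
        self.name, self.frames = name, []
    def broadcast(self, frame):
        self.frames.append(frame)

p_can, i_can = Bus("P-CAN"), Bus("I-CAN")

def bcm_forward(src, dst):
    # Body Control Module: signal conversion / message forwarding between segments.
    for frame in src.frames:
        dst.broadcast({**frame, "forwarded_by": "BCM"})

def cmic_read():
    # CMIC is attached to both buses, so no gateway hop is required.
    return p_can.frames + i_can.frames

p_can.broadcast({"id": 0x120, "signal": "chassis status"})
i_can.broadcast({"id": 0x330, "signal": "door status"})
print(len(cmic_read()))   # CMIC sees both frames without BCM forwarding
```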
Finally, the application can implement the following user scenarios:
For example, a scene-based visual experience is provided inside the vehicle: a circular open-sky scene is generated on the ceiling inside the vehicle, in which natural scenery, weather changes and the like can be displayed; the lighting state of the sky can change with the time information, and the brightness of the sky can be adjusted by moving a finger. Correspondingly, the in-vehicle device tracks the current real time, and the user obtains an immersive operating experience; see the scenes and effects shown in fig. 3 and fig. 4. Fig. 3 shows the transition from the sun to the moon, played at different points in time according to the time information and pre-stored sequence-frame animations; fig. 4 shows that the user can adjust the brightness of the sky according to his own habits through dynamic gesture commands, such as moving a finger left and right on the touch screen, so as to adapt to the user's visual habits.
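For illustration, the time-based animation selection and gesture-based brightness adjustment of this scenario may be sketched as follows; the frame-sequence names, day/night threshold and gesture scaling are assumptions for this example.

```python
# Sketch of the ceiling "open sky" scenario: pick a sequence-frame animation
# based on the current time and adjust brightness from a horizontal finger
# swipe (frame names, threshold and gesture scale are illustrative assumptions).

from datetime import datetime

def sky_animation_for(now=None):
    hour = (now or datetime.now()).hour
    if 6 <= hour < 18:
        return "sun_sequence"      # daytime: sun frames
    return "moon_sequence"         # night: moon frames

def brightness_from_swipe(current, swipe_dx_pixels, screen_width=1920):
    # Map a left/right swipe to a brightness change clamped to [0.0, 1.0].
    return min(1.0, max(0.0, current + swipe_dx_pixels / screen_width))

print(sky_animation_for())                 # e.g. 'sun_sequence'
print(brightness_from_swipe(0.5, 384))     # swipe right -> 0.7
```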
In this way, the user experiences a more novel aesthetic from the virtual interface inside the vehicle, a scene-based experience is delivered through a brand-new display platform, the sense of space of the interface is enhanced, and multiple visual perceptions of the user are constructed and fused.
Although the present application has been described with reference to preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the application, and all such changes, substitutions and alterations are intended to fall within the scope of protection of the present application.