Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an end cloud user interaction method, an end cloud user interaction system, a corresponding device and a storage medium, which can improve the real-time performance, responsiveness and stability of an end cloud integrated operating system and solve the problem of fragmentation of terminal devices and applications.
In a first aspect of the present invention, an end cloud user interaction method is provided, wherein a control information instruction channel, an end cloud collaborative rendering channel and/or an audio/video data interaction channel are established between a terminal and a cloud. The method includes:
the terminal uploads man-machine interaction instructions, system events and/or messages to the cloud through the control information instruction channel; the cloud updates and distributes a message list based on the received man-machine interaction instructions, system events and/or messages, and invokes a terminal kernel program through the control information instruction channel to complete a specific operation of the terminal; and/or
performing end cloud collaborative graphics rendering through the end cloud collaborative rendering channel and outputting graphics at the terminal, wherein the cloud updates graphics rendering data of the terminal through the end cloud collaborative rendering channel; and/or
the terminal plays cloud audio/video resources through the audio/video data interaction channel and/or uploads collected audio/video to the cloud.
In an embodiment, the end cloud collaboration mode of the end cloud collaborative graphics rendering and/or of the multimedia collaborative processing for the audio/video data interaction is determined dynamically or statically according to at least the terminal device configuration and the network environment.
In a second aspect of the present invention, an end cloud user interaction system is provided, wherein a control information instruction channel, an end cloud collaborative rendering channel and/or an audio/video data interaction channel are established between a terminal and a cloud. The system includes:
a control information instruction interaction module, configured to enable the terminal to upload man-machine interaction instructions, system events and/or messages to the cloud through the control information instruction channel, the cloud updating and distributing a message list based on the received man-machine interaction instructions, system events and/or messages, and invoking a terminal kernel program through the control information instruction channel to complete a specific operation of the terminal; and/or
an end cloud collaborative rendering module, configured to perform end cloud collaborative graphics rendering through the end cloud collaborative rendering channel and output graphics at the terminal, wherein the cloud updates graphics rendering data of the terminal through the end cloud collaborative rendering channel; and/or
an audio/video data interaction module, configured to enable the terminal to play cloud audio/video resources through the audio/video data interaction channel and/or upload collected audio/video to the cloud.
In a third aspect of the invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the end cloud user interaction method according to the first aspect of the invention.
In a fourth aspect of the present invention, there is provided a computer device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the steps of the end cloud user interaction method according to the first aspect of the present invention are implemented when the computer program is executed by the processor.
The invention provides an end cloud integrated user interaction method and system, which can meet the usage requirements of different types of devices and scenarios while minimizing the cost of an intelligent terminal. The end cloud system interaction mainly comprises control information/instruction interaction, end cloud collaborative rendering and multimedia collaborative processing, and end cloud collaborative interaction is specifically realized by establishing three network channels between the terminal and the cloud. The specific functions of end cloud collaborative rendering and multimedia collaborative processing adopt a distributed collaboration mode, in which tasks are flexibly allocated between the end and the cloud, reducing the influence of network communication on the system and thereby improving the real-time performance, responsiveness and stability of the end cloud integrated operating system. Owing to the distributed collaboration mode, a traditional operating system can be divided into an extremely simple terminal closely tied to the underlying hardware and a cloud closely tied to the user, that is, the system adopts a thin-end plus cloud architecture: functions such as the microkernel, peripheral drivers and multimedia are placed in the intelligent device terminal layer, while the remaining parts of the operating system, such as file management, application management and the rendering engine, serve as a cloud service layer. The terminal focuses on the unchanging peripherals, and the user completes interaction through the transmission of control instructions and rendering instructions between the end and the cloud. Since the terminal is only responsible for the basic input/output functions of the operating system, its system resource requirements are low, and the system can be widely adapted to run on various types of hardware devices.
Other features and advantages of the present invention will become more apparent from the following detailed description of embodiments of the present invention, which is to be read in connection with the accompanying drawings.
Detailed Description
Embodiments and examples of the present invention will be described in detail below with reference to the accompanying drawings.
The scope of applicability of the present invention will become apparent from the detailed description given hereinafter. It should be understood, however, that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only.
Fig. 3 illustrates a schematic diagram of an end cloud integrated operating system architecture according to an embodiment of the present invention. The end cloud integrated operating system is divided into two parts: a terminal and a cloud. The terminal keeps only the hardware-adjacent layer and implements a minimized set of operating system functions, serving as an extremely simple terminal; the cloud is responsible for the remaining user-related functions of the operating system, including all user data, applications and other assets that need protection. On the end side, the operating system is divided into a bottom microkernel layer and a system service layer; on the cloud side, it is divided into a virtual device layer, an application framework layer and an application layer.
The terminal can adopt an extremely simple operating system that provides only basic functions such as basic hardware resource management and input/output. Specifically, the microkernel layer implements the basic OS functions and is responsible for scheduling and managing terminal hardware resources (CPU, memory, IO, etc.); it is a stripped-down kernel. The system service/driver layer is mainly responsible for the terminal's input/output functions, covering the OS-related input/output system, a locally trimmed file system, communication protocols and terminal peripheral drivers; it runs on top of the microkernel layer to provide basic system services, with modules communicating through the microkernel. With the microkernel serving as the basic operating system kernel and modules with different functions integrated into the upper system service/driver layer, terminals of different types and specifications can be flexibly adapted. This layer is responsible for driving the terminal hardware devices and for constructing and maintaining the terminal's most basic operating system runtime environment.
The cloud implements most functions of a traditional operating system's application framework layer and application layer. Specifically, a virtual device layer abstracts the terminal devices, hiding the hardware interface details of a specific terminal platform and providing a virtual hardware platform for the operating system, so that the operating system is hardware-independent and applications can be conveniently ported across platforms. The application framework layer implements the core functions of a traditional system and supports the upper application layer; it includes modules such as window management, file management and application management. The application layer runs the various business applications that directly face users. The cloud part of the end cloud integrated operating system uniformly hosts the parts of a traditional software system that need security protection, such as applications and user data; user data is not stored on the terminal, and the security of applications and data is ensured through cloud technologies such as cloud storage.
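As a minimal sketch of the virtual device layer idea described above, cloud-side applications can program against an abstract device interface while per-terminal adapters hide concrete hardware details. All class and function names below are illustrative, not part of the invention's actual implementation.

```python
# Hypothetical sketch: cloud apps see only a hardware-independent display
# interface; per-terminal adapters supply the concrete details.
from abc import ABC, abstractmethod

class VirtualDisplay(ABC):
    """Hardware-independent display interface exposed to cloud apps."""

    @abstractmethod
    def resolution(self):
        """Return (width, height) of the underlying terminal display."""

class PhoneDisplay(VirtualDisplay):
    def resolution(self):
        return (1080, 2400)

class WatchDisplay(VirtualDisplay):
    def resolution(self):
        return (396, 484)

def layout_for(display: VirtualDisplay):
    """A cloud app sizes its UI without knowing the terminal model."""
    w, _h = display.resolution()
    return "compact" if w < 500 else "regular"

layouts = [layout_for(d) for d in (PhoneDisplay(), WatchDisplay())]
```

Because the application depends only on `VirtualDisplay`, the same cloud deployment can serve phones, watches or other terminals, matching the "develop once, deploy in the cloud, use across terminals" goal.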
The terminal and the cloud realize cooperative interaction by maintaining network interaction channels. Three network channels, namely a control information instruction channel, an end cloud collaborative rendering channel and an audio/video data interaction channel, are established between the two sides, realizing control information/instruction interaction, end cloud collaborative rendering and multimedia collaborative processing respectively.
Fig. 4 shows a block diagram of a preferred embodiment of an end cloud user interaction method according to the present invention.
At 410, the terminal uploads man-machine interaction instructions, system events and/or messages to the cloud through the control information instruction channel; the cloud updates and distributes a message list based on the received man-machine interaction instructions, system events and/or messages, and invokes a terminal kernel program through the control information instruction channel to complete a specific operation of the terminal. For example: when the terminal hot-plugs a peripheral device, the event list needs to be updated and the upper-layer applications notified, so that applications can acquire (become aware of) or release the peripheral; when the terminal's operation on an application window is synchronized as a message to the cloud message list, the window manager distributes the message to the target application window, and the window receives the message and performs the feedback operation according to its own logic. Fig. 5 shows a schematic diagram of a user interacting with the operating system through the end cloud control information/instruction channel. Specifically, the end side processes event information, control instructions and data input by the user (e.g., via mouse, keyboard, touch, etc.) through the terminal system, and then sends the terminal's system events and messages to the cloud side through the control information/instruction channel.
The relevant functional modules of the cloud-side application framework layer (such as the window manager) receive the events and messages, then update the message list and distribute the messages (e.g., to specific application windows and application threads); the application layer receives the system messages and reacts; through the application framework layer, the application layer then invokes the terminal kernel program in the manner of a system call via the end cloud control information/instruction channel, completing the specific terminal operation and thus one complete user interaction.
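The control-channel flow above can be sketched as a small message-dispatch loop: the terminal uploads events, a cloud-side window manager updates a message list and distributes each message to its target window, and a window's feedback logic may in turn issue a kernel call back down the channel. All names here (`Message`, `WindowManager`, the `("redraw", …)` kernel call) are illustrative assumptions, not the invention's actual interfaces.

```python
# Hypothetical sketch of the control information instruction channel flow.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Message:
    target_window: str   # window the message is addressed to
    event: str           # e.g. "click", "hotplug", "key"
    payload: dict = field(default_factory=dict)

class WindowManager:
    """Cloud-side dispatcher for uploaded terminal events and messages."""

    def __init__(self):
        self.message_list = deque()   # pending messages from the terminal
        self.handlers = {}            # window id -> handler callable
        self.kernel_calls = []        # operations sent back to the terminal

    def register_window(self, window_id, handler):
        self.handlers[window_id] = handler

    def receive(self, msg):
        """Terminal uploads an event/message; cloud appends it to the list."""
        self.message_list.append(msg)

    def dispatch_all(self):
        """Distribute queued messages to their target application windows."""
        while self.message_list:
            msg = self.message_list.popleft()
            handler = self.handlers.get(msg.target_window)
            if handler:
                result = handler(msg)
                if result is not None:   # window requests a terminal operation
                    self.kernel_calls.append(result)

wm = WindowManager()
# A window whose feedback logic asks the terminal to redraw on click.
wm.register_window(
    "editor",
    lambda m: ("redraw", m.payload) if m.event == "click" else None,
)
wm.receive(Message("editor", "click", {"x": 10, "y": 20}))
wm.dispatch_all()
```

The appended `kernel_calls` entry stands in for the system-call-style invocation of the terminal kernel program that completes one full interaction.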
At 420, end cloud collaborative graphics rendering is performed through the end cloud collaborative rendering channel and graphics are output at the terminal, wherein the cloud updates the terminal's graphics rendering data through the channel. FIG. 6 illustrates an end cloud collaborative rendering channel interaction diagram according to an embodiment. The end cloud integrated operating system realizes end cloud collaborative graphics rendering through the end cloud collaborative rendering channel, the cloud side updates the terminal's graphics rendering data through the channel, and the system divides end cloud collaborative rendering into different collaboration levels.
First, each window of each application program serves as an independent rendering node and possesses an independent graphics context for its current drawing environment. The graphics context contains the parameters the drawing system needs for subsequent drawing operations and all information about the designated device, and defines basic drawing characteristics such as colors, clipping regions, line width and style, font information, compositing options, etc. The graphics context of each rendering node manages and maintains the independent graphics resources, such as textures and frame buffer objects, required for drawing its window. The GUI graphics system manages a queue of rendering nodes corresponding to all application windows on the desktop; when an application initiates a drawing operation, the graphics rendering instruction calls the API interface and submits operation instructions and data to the instruction queue of the rendering node. The underlying graphics system reads the operation instructions and data in the queue, performs the computation through a rendering pipeline, and after rendering finishes notifies the system that the window's rendering target has changed and the corresponding window area on the screen needs updating. The GUI graphics system then performs a new window compositing operation, updates the display system's frame buffer, and finally the updated frame buffer is read and displayed on the screen.
The overall graphics rendering and output workflow of the operating system can generally be divided into 4 steps:
1. The application program initiates a window drawing operation, prepares the graphics rendering operations and data to be drawn (such as points, lines, colors, positions, buffers, operations, settings, etc.), passes the operations and data to the underlying graphics system through API calls, and adds them to the instruction queue of the rendering node;
2. The rendering node of each window reads the operation instructions and data in order and, through computation in the underlying rendering pipeline, obtains the graphics rendering resources (such as textures and frame buffer objects) updated by each application program;
3. Based on each application program's display scene graph and the updated graphics resources and context, the graphics system composites and assembles the textures, positions, regions, attributes, etc. of all display units to generate the bitmap output of each window, e.g., a frame buffer for each application window;
4. Finally, the graphics system updates each application window, blends the application windows in their corresponding screen regions to generate a new desktop bitmap, and writes it into the system video memory; the display controller reads the system video memory and outputs the final desktop bitmap to the display.
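The four steps above can be sketched as a chain of data transformations, one function per step. This is an illustrative toy model only — the function names and the string-based "textures" and "bitmaps" are invented to make the data flow visible, not a real graphics pipeline.

```python
# Illustrative sketch of the 4-step rendering workflow described above.

def step1_enqueue(queue, ops):
    """Step 1: app submits draw ops/data to a node's instruction queue."""
    queue.extend(ops)
    return queue

def step2_render(queue):
    """Step 2: the rendering pipeline consumes the queue, producing
    updated graphics resources (modeled here as tagged strings)."""
    resources = [f"tex({op})" for op in queue]
    queue.clear()
    return resources

def step3_compose_window(resources):
    """Step 3: mix updated resources into a per-window bitmap."""
    return "bitmap[" + "+".join(resources) + "]"

def step4_compose_desktop(window_bitmaps):
    """Step 4: blend window bitmaps into the final desktop bitmap."""
    return "|".join(window_bitmaps)

q = step1_enqueue([], ["rect", "text"])
res = step2_render(q)
win = step3_compose_window(res)
desktop = step4_compose_desktop([win])
```

The collaboration modes discussed next differ only in where the cut is made between cloud and terminal in this chain.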
End cloud collaborative rendering distributes these specific rendering steps between the end side and the cloud side, which then complete them cooperatively.
In an embodiment, the cloud is responsible for completing the graphics rendering work of each window rendering node in steps 1, 2 and 3 to obtain updated window bitmaps. When an application window or the desktop window initiates a drawing operation, i.e., the rendering node list changes (such as window construction, destruction, movement, or changes of state and relative position), the cloud side synchronizes the rendering node window bitmaps that need updating and the rendering node list to the end side over the network; the window blending of step 4 is left to the end side. In this collaborative rendering mode, the network bandwidth requirement is closely related to the performance of the end-side display device (i.e., the format and size of the end-side screen window bitmaps to be synchronized); the performance requirement on the cloud side is high and the dependence on it relatively large, since most of the system's graphics rendering work is completed by the cloud side; the requirements on the end-side software and hardware environment are low, and a lightweight terminal can perform well, being responsible only for a small amount of work such as window blending. The advantage of this collaborative rendering mode is that the requirements on the end-side software and hardware environment are modest; however, it demands network bandwidth and depends on the performance of the end-side display device. When a high-end display device is used, very high requirements are placed on network bandwidth, and the system's requirements on the cloud are also relatively high.
Therefore, this collaborative rendering mode is suitable, under low network bandwidth conditions, for embedded terminals or low-end mobile terminals with lower-specification display devices, and for wearable embedded devices such as smart watches.
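The end-side share of this first mode — only the step-4 window blending — can be sketched as a simple back-to-front composition of cloud-rendered bitmaps. The data layout (z-ordered node list, pixel grids, opaque overwrite) is an invented simplification for illustration.

```python
# Minimal sketch of the thin end side in collaboration mode 1: the cloud
# has already rendered per-window bitmaps; the end only blends them.

def end_side_mix(node_list, window_bitmaps, screen_w, screen_h):
    """Blend cloud-rendered window bitmaps into one desktop frame buffer.

    node_list: windows in back-to-front z-order, each with a position.
    window_bitmaps: window id -> 2D pixel grid rendered by the cloud.
    """
    frame = [[0] * screen_w for _ in range(screen_h)]   # blank desktop
    for node in node_list:                              # back-to-front blend
        bmp = window_bitmaps[node["id"]]
        for dy, row in enumerate(bmp):
            for dx, px in enumerate(row):
                x, y = node["x"] + dx, node["y"] + dy
                if 0 <= x < screen_w and 0 <= y < screen_h:
                    frame[y][x] = px                    # opaque overwrite

    return frame

nodes = [{"id": "bg", "x": 0, "y": 0}, {"id": "win", "x": 1, "y": 1}]
bitmaps = {"bg": [[1, 1, 1], [1, 1, 1], [1, 1, 1]],   # full-screen window
           "win": [[9]]}                               # 1x1 window on top
frame = end_side_mix(nodes, bitmaps, 3, 3)
```

The bandwidth cost of this mode is visible in the sketch: every changed window bitmap must cross the network, so it scales with the display's resolution and pixel format.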
In another embodiment, the cloud side is responsible only for the work of step 1 and pushes the application program's operation instructions and data to the end side; the remaining steps 2, 3 and 4, i.e., all the graphics rendering work, are completed by the end-side GUI graphics system, as shown in fig. 6. When a drawing operation is initiated in an application window or the desktop window, i.e., the rendering node list changes (such as window construction, destruction, movement, or changes of state and relative position), the cloud side needs to update the instruction queue or list information of the rendering nodes of the end-side display system, and finally the end side executes the operations to complete the whole rendering flow. In this collaboration mode, the end and cloud need to synchronize the rendering operation instructions, operation data and rendering node list information; the required network bandwidth is closely related to the size of the operation data that actually needs to be synchronized, which is generally small. The requirements on the end side are the highest, because the terminal must complete all graphics rendering operations and therefore needs a relatively complete software and hardware runtime environment (such as a GPU rendering pipeline, a font engine, a GUI graphics system, etc.); the requirements on the cloud side are the lowest, since the cloud side only needs to push the operation instructions and data and does not perform the actual graphics rendering work.
The advantages of this collaborative rendering mode are that the requirements on network bandwidth and cloud-side computing resources are modest; its drawbacks are that the requirements on the terminal's software and hardware environment are high, the functions of a complete graphics GUI system must be provided, and the size of the operation data to be synchronized between end and cloud is limited by the network bandwidth. Therefore, this collaborative rendering mode is suitable for scenarios with low network bandwidth and for scenarios in which the cloud side needs to support a large number of terminals online simultaneously.
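This second mode reduces the cloud's role to serializing draw instructions for the channel, with the end side's own graphics stack executing them. The wire format (JSON) and the tiny instruction interpreter below are hypothetical stand-ins for the channel protocol and the end-side GUI graphics system.

```python
# Sketch of collaboration mode 2: cloud pushes instructions, end renders.
import json

def cloud_push(ops):
    """Cloud side (step 1 only): serialize the app's draw ops."""
    return json.dumps(ops)

def end_execute(wire):
    """End side (steps 2-4): decode and run every op locally.

    A real end side would drive a GPU pipeline and compose windows;
    here each executed op is simply recorded on a 'surface' list.
    """
    surface = []
    for op in json.loads(wire):
        if op["cmd"] == "line":
            surface.append(("line", tuple(op["from"]), tuple(op["to"])))
        elif op["cmd"] == "fill":
            surface.append(("fill", op["color"]))
    return surface

wire = cloud_push([{"cmd": "fill", "color": "white"},
                   {"cmd": "line", "from": [0, 0], "to": [4, 4]}])
surface = end_execute(wire)
```

Note how little data crosses the network compared with shipping bitmaps: only the compact instruction stream, which is why this mode suits a cloud serving many terminals at once.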
In a further embodiment, the cloud side is responsible for completing steps 1 and 2, i.e., the rendering and updating of the basic graphics resources of each rendering node of the application windows; the cloud side synchronizes the rendering node list to be updated, together with each node's updated context environment and graphics resources, to the end side over the network; the remaining steps 3 and 4 are completed by the end side. To make the operation responsiveness and frame rate of end cloud collaborative rendering reach their best, the end cloud rendering architecture flow can be as follows:
in the startup stage, the cloud system preloads resources required for rendering, such as pictures, text and color states, into textures and synchronously uploads them to the end-side GPU, realizing rendering resource sharing among different cloud processes.
In the rendering phase, the cloud application proceeds as follows:
different windows of different applications maintain independent rendering nodes, and node information is synchronized to the end side for maintenance only when the number of rendering nodes changes;
when the cloud application generates a rendering instruction (draw-op), it does not immediately submit it to the end-side GPU for rendering, but caches the draw-op in the chunk queue of its rendering node and waits for the VSYNC signal notification from the end-side GPU;
when each cloud rendering node receives the vsync notification, it rearranges the cached chunk instruction queue according to the drawing order, and merges adjacent rendering instructions according to their instruction type, whether cached resources are used, whether overlap or occlusion would result, and similar effects;
the rearranged and merged rendering instruction queues are synchronized to the corresponding rendering node queues on the end side; upon receipt, the end-side GPU executes the drawing within the VSYNC interval reported by the graphics hardware (16 ms), using the graphics resource data already cached in the GPU, and the rendering result is updated to the end-side screen through window composition, completing one frame of cloud-to-end rendering.
Through the caching, rearrangement and merging strategies, this mechanism minimizes the data bandwidth consumed by rendering instructions in transit; at the same time, merging adjacent rendering operations of the same type (such as drawing rectangles or updating textures) reduces GPU rendering pipeline state switching as far as possible, improving GPU efficiency. Different processes share rendering resources through the resource preloading mechanism, improving video memory utilization and ensuring, to the greatest extent, the optimal rendering time for each frame.
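The cache-reorder-merge strategy above can be sketched as a single function run at each vsync: cached draw-ops are sorted into drawing order, and adjacent ops of the same type that neither use cached resources nor create overlap are folded together. The op fields (`type`, `z`, `uses_cache`, `overlaps`) and merge rule are illustrative assumptions about the criteria the text names.

```python
# Hypothetical sketch of the per-vsync draw-op rearrange-and-merge step.

def merge_on_vsync(cached_ops):
    """Reorder cached draw-ops by drawing order, then merge adjacent
    same-type ops that are safe to batch (no cached-resource use, no
    overlap/occlusion), reducing channel bandwidth and GPU state switches.
    """
    ordered = sorted(cached_ops, key=lambda op: op["z"])  # drawing order
    merged = []
    for op in ordered:
        prev = merged[-1] if merged else None
        if (prev is not None and prev["type"] == op["type"]
                and not op.get("uses_cache") and not op.get("overlaps")):
            prev["count"] += 1            # fold into the previous batch
        else:
            merged.append({**op, "count": 1})
    return merged

ops = [{"type": "rect", "z": 2}, {"type": "rect", "z": 1},
       {"type": "text", "z": 3}, {"type": "rect", "z": 4, "overlaps": True}]
queue = merge_on_vsync(ops)   # two rects batch; text and overlapping rect do not
```

Four ops shrink to three queue entries here; over a frame's worth of instructions, this is where the bandwidth and pipeline-switch savings come from.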
In this collaborative rendering mode, the cloud side and the end side adopt a distributed collaboration mode, which reduces the load on the cloud, effectively reduces application latency when cloud-side resources are limited, and improves the system's response speed. As for the data volume to be synchronized between end and cloud: during the application startup stage, a large number of newly created cloud-side graphics rendering resources must be synchronized to the end side, requiring higher network bandwidth; during application runtime, the volume of cloud-side graphics rendering resources dynamically updated to the end side is generally small, requiring only limited network bandwidth. Both the end side and the cloud side must have a certain software and hardware environment, since each must independently complete its assigned graphics rendering work; the end side, however, does not need the complete capability of a traditional GUI display system (such as primitive drawing, character set rendering, bitmap format support, time management, etc.) to complete the rendering work it is responsible for. The advantages of this collaborative rendering mode are: the network bandwidth requirement during application runtime is modest; higher display performance is achieved on a relatively low end-side configuration; and in application scenarios in which multiple users interact collaboratively in the same scene, because the rendering resources of the same scene are essentially identical, multiple terminals can reuse them simultaneously, effectively improving the utilization of cloud-side computing resources.
There are also certain disadvantages, such as the high network bandwidth requirement during the application startup stage, and the need for a certain level of performance and a basic graphics rendering software and hardware environment on both the cloud side and the end side. Therefore, this collaborative rendering mode is suitable for multi-user interactive applications based on a shared scene and having higher network bandwidth requirements, such as 2D/3D games, design, and VR/AR.
At 430, the terminal plays cloud audio/video resources through the audio/video data interaction channel and/or uploads collected audio/video to the cloud. Fig. 7 shows an audio/video data interaction channel interaction schematic. The multimedia processing functional module (the multimedia pipeline part in the figure) is implemented in the multimedia framework, and may be implemented either in the terminal or in the cloud multimedia framework according to the terminal device configuration and other conditions. Terminal playback of cloud audio/video resources: the cloud pushes audio/video data to the terminal through the end cloud audio/video interaction channel; the audio/video resource data stream generally undergoes parsing, buffering, decoding and similar steps to obtain separated audio/image data, and the terminal's audio and image systems are invoked for output by calling the audio and image interfaces of the terminal's multimedia framework. Terminal audio/video collection and upload to the cloud: the terminal's audio capture device (microphone) or image capture device (camera) inputs the collected audio/image data into the multimedia processing module through its interface, and the final audio/video data stream is generally synthesized by the multimedia pipeline through audio/video data processing, encoding, buffering, muxing and similar steps; the terminal then uploads the audio/video data to the cloud through the end cloud audio/video interaction channel. During terminal playback or capture, the information, events and queries generated by the system and applications (interactive operations such as fast forward, pause, playback-time query, format adjustment, end of playback, etc.) are coordinated between end and cloud through the end cloud audio/video interaction channels.
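The playback path described above (parse the stream, buffer, decode, then output separated audio/image data) can be sketched as a chain of pipeline stages. Stage names follow the description; the implementations are deliberately trivial placeholders, and the alternating audio/video demux rule is invented for illustration.

```python
# Illustrative sketch of the multimedia pipeline for cloud-resource playback.

def parse(stream):
    """Demux the container into tagged elementary audio/video packets
    (toy rule: odd-indexed packets are audio, even-indexed are video)."""
    return [("audio", p) if i % 2 else ("video", p)
            for i, p in enumerate(stream)]

def buffer(packets, depth=4):
    """Hold a small jitter buffer before decoding."""
    return packets[:depth]

def decode(packets):
    """Decode packets into raw frames/samples, separated by kind."""
    out = {"audio": [], "video": []}
    for kind, payload in packets:
        out[kind].append(f"raw({payload})")
    return out

def play(decoded):
    """Hand decoded data to the terminal's audio and image interfaces;
    return the (audio, video) counts actually output."""
    return len(decoded["audio"]), len(decoded["video"])

frames = play(decode(buffer(parse(["p0", "p1", "p2", "p3", "p4"]))))
```

The upload path runs the mirror-image chain (capture, process, encode, buffer, mux) before the stream crosses the channel; control events such as pause or seek travel over the same end cloud channel alongside the data.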
The cloud can abstract various terminal hardware devices through the virtual device layer, decoupling cloud applications from the terminal hardware, so that a specific application can be developed once, deployed in the cloud, and used by users across terminals. The extremely simple terminal can adopt extensible lightweight kernel or microkernel technology to improve system extensibility, flexibly adapt to hardware devices of various types and specifications, and improve system reusability. With user data stored in the cloud and applications cloud-hosted, no user data resides on the terminal device, improving the security of applications and data. By maintaining the three end cloud interaction channels, end cloud collaborative interaction is comprehensively realized. The specific function implementation parts of end cloud collaborative rendering and multimedia collaborative processing can adopt a dynamic or static collaboration mode according to conditions such as terminal device configuration and network environment, and can be flexibly allocated to the terminal or the cloud, effectively reducing the influence of network communication on the system and improving the real-time performance, responsiveness and stability of the end cloud integrated operating system.
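A dynamic choice among the three collaboration modes, driven by terminal configuration and network conditions as described above, could look like the following sketch. The decision thresholds and parameter names are invented purely for illustration; the invention does not prescribe specific values.

```python
# Hedged sketch of dynamic collaboration-mode selection. Thresholds are
# hypothetical examples, not values specified by the method itself.

def choose_render_mode(terminal_gpu, bandwidth_mbps, concurrent_terminals):
    """Pick which rendering collaboration mode to use.

    Mode 1: cloud renders fully, end only blends windows (thin terminals,
            e.g. wearables with low-spec displays).
    Mode 2: cloud pushes draw instructions, end renders everything
            (capable terminals; low per-user bandwidth; many users online).
    Mode 3: split rendering with resource preloading (shared-scene,
            multi-user apps such as games and VR/AR).
    """
    if not terminal_gpu:
        return 1                       # no GPU: end can only blend bitmaps
    if concurrent_terminals > 100 or bandwidth_mbps < 5:
        return 2                       # offload rendering to the end side
    return 3                           # balanced distributed collaboration

mode_watch = choose_render_mode(terminal_gpu=False, bandwidth_mbps=2,
                                concurrent_terminals=1)
mode_kiosk = choose_render_mode(terminal_gpu=True, bandwidth_mbps=3,
                                concurrent_terminals=500)
mode_vr = choose_render_mode(terminal_gpu=True, bandwidth_mbps=50,
                             concurrent_terminals=4)
```

A static deployment would simply fix the returned mode at provisioning time instead of evaluating the conditions at runtime.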
Fig. 8 is a block diagram of a preferred embodiment of an end cloud user interaction system according to the present invention, wherein a control information instruction channel, an end cloud collaborative rendering channel and/or an audio/video data interaction channel are established between a terminal and a cloud. The system comprises: a control information instruction interaction module 810, configured to enable the terminal to upload man-machine interaction instructions, system events and/or messages to the cloud through the control information instruction channel, the cloud updating and distributing a message list based on the received man-machine interaction instructions, system events and/or messages, and invoking a terminal kernel program through the control information instruction channel to complete a specific operation of the terminal; and/or an end cloud collaborative rendering module 820, configured to perform end cloud collaborative graphics rendering via the end cloud collaborative rendering channel and output graphics at the terminal, where the cloud updates the terminal's graphics rendering data via the end cloud collaborative rendering channel; and/or an audio/video data interaction module 830, configured to enable the terminal to play cloud audio/video resources through the audio/video data interaction channel and/or upload collected audio/video to the cloud. In an embodiment, the end cloud collaboration mode of the end cloud collaborative graphics rendering and/or of the multimedia collaborative processing for the audio/video data interaction may be determined dynamically or statically according to conditions such as terminal device configuration and network environment.
In another embodiment, the present invention provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements the steps of the method embodiments shown and described in connection with fig. 3-7 or other corresponding method embodiments, which are not described in detail herein.
In another embodiment, the present invention provides a computer device, including a processor, a memory, and a computer program stored on the memory and capable of running on the processor, where the steps of the method embodiments shown and described in connection with fig. 3-7 or other corresponding method embodiments are implemented by the processor when the computer program is executed, and are not repeated herein.
The various embodiments described herein, or particular features, structures, or characteristics thereof, may be combined as suitable in one or more embodiments of the invention. In addition, in some cases, the order of steps described in the flowcharts and/or flow-line processes may be modified as appropriate and need not be performed in exactly the order described. Additionally, various aspects of the invention may be implemented using software, hardware, firmware, or a combination thereof and/or other computer-implemented modules or devices that perform the described functions. A software implementation of the present invention may include executable code stored in a computer readable medium and executed by one or more processors. The computer-readable medium may include a computer hard drive, ROM, RAM, flash memory, a portable computer storage medium such as CD-ROM, DVD-ROM, flash drives and/or other devices having a Universal Serial Bus (USB) interface, and/or any other suitable tangible or non-transitory computer-readable medium or computer memory on which executable code may be stored and executed by a processor. The invention may be used in connection with any suitable operating system.
As used herein, the singular forms "a", "an" and "the" include plural referents (i.e., having the meaning of "at least one") unless otherwise indicated. It will be further understood that the terms "has," "comprises," "including" and/or "comprising," when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
While the foregoing is directed to some preferred embodiments of the present invention, it should be emphasized that the present invention is not limited to these embodiments, but may be embodied in other forms within the scope of the inventive subject matter. Various changes and modifications may be made by one skilled in the art without departing from the spirit of the invention, and these changes or modifications still fall within the scope of the invention.