CN116208725A - Video processing method, electronic device and storage medium

Info

Publication number
CN116208725A
Authority
CN
China
Prior art keywords
video data
video
paths
view angle
angle adjustment
Prior art date
Legal status
Pending
Application number
CN202211435822.3A
Other languages
Chinese (zh)
Inventor
唐红兵 (Tang Hongbing)
Current Assignee
Yuanjingshengsheng Beijing Technology Co., Ltd.
Original Assignee
Yuanjingshengsheng Beijing Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Yuanjingshengsheng Beijing Technology Co., Ltd.
Priority to CN202211435822.3A
Publication of CN116208725A

Abstract

An embodiment of the present application provides a video processing method, an electronic device, and a storage medium. Video data are acquired by a plurality of image acquisition devices arranged at the same acquisition point, where the acquisition angles of the devices differ and each device acquires one path of video data. The method includes: providing a video page, wherein the video page is used for playing video data; determining a view angle vector in response to a view angle adjustment instruction received by the video page; generating a view angle adjustment request based on the view angle vector and sending the request; receiving at least two paths of video data, the at least two paths being determined based on the view angle adjustment request; mapping the at least two paths of video data according to a view angle model to obtain corresponding output video data; and playing the output video data on the video page.

Description

Video processing method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a video processing method, an electronic device, and a storage medium.
Background
With the development of technology, panoramic functions are increasingly provided so that users can watch panoramic videos. For example, an online virtual exhibition may provide panoramic video by configuring a three-dimensional (3D) virtual camera for each user viewing the exhibition, so that each user can freely rotate the viewing angle to view the exhibits.
However, in this approach a 3D virtual camera is configured for each user, that is, each user's video stream is independent; accordingly, an independent rendering instance must be set up for each user to render the video data transmitted to the client for display. A server is usually connected to a large number of users, and providing each of them with a dedicated 3D virtual camera, video stream rendering, and related resources demands high hardware performance.
Disclosure of Invention
The embodiment of the application provides a video processing method for reducing the consumption of hardware resources.
Correspondingly, an embodiment of the present application also provides an electronic device and a storage medium to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present application discloses a video processing method. Video data are collected by a plurality of image collection devices disposed at the same collection point, where the collection angles of the devices differ and each device collects one path of video data. The method includes:
Providing a video page, wherein the video page is used for playing video data;
responding to a view angle adjustment instruction received by the video page, and determining a view angle vector;
generating a viewing angle adjustment request based on the viewing angle vector, and sending the viewing angle adjustment request;
receiving at least two paths of video data, wherein the at least two paths of video data are determined based on the visual angle adjustment request;
mapping the at least two paths of video data according to the view angle model to obtain the corresponding output video data;
and playing the output video data on the video page.
Optionally, the acquisition angles of adjacent image acquisition devices in the plurality of image acquisition devices are orthogonal.
Optionally, the determining, in response to the view angle adjustment instruction received by the video page, a view angle vector includes:
responding to the visual angle adjustment instruction received by the video page, and acquiring visual angle adjustment information;
and converting the visual angle adjustment information to determine a corresponding visual angle vector.
Optionally, the receiving a view angle adjustment instruction in response to the video page includes at least one of:
receiving the view angle adjustment instruction in response to triggering of a view angle adjustment control in the video page;
and receiving the corresponding view angle adjustment instruction in response to an adjustment operation of an external device associated with the video page.
Optionally, the mapping the at least two paths of video data according to the view angle model to obtain the corresponding output video data includes:
parsing the at least two paths of video data to obtain at least two paths of parsed video data;
and performing map rendering on the at least two paths of parsed video data according to the view angle model to obtain the corresponding output video data.
Optionally, the performing map rendering on the at least two paths of parsed video data according to the view angle model to obtain the corresponding output video data includes:
determining the overlay identifier corresponding to each path of video data;
determining the overlay position of the corresponding path of parsed video data according to the overlay identifier;
and rendering the at least two paths of parsed video data as textures at the corresponding overlay positions in the view angle model to obtain the corresponding output video data.
Optionally, the method further comprises:
creating a view angle model in advance based on a rendering engine, the view angle model comprising at least one of: a cube model, a sphere model.
Optionally, the creating the view angle model in advance based on a rendering engine includes:
initializing the view angle model in advance based on the rendering engine;
determining a plurality of overlay positions corresponding to the view angle model and setting corresponding overlay identifiers;
and determining a texture filtering algorithm of the view angle model, and generating the corresponding view angle model based on a shader.
Optionally, the method further comprises:
connecting to a server side, and establishing a data transmission channel and at least one audio/video stream transmission channel;
the sending the view angle adjustment request includes:
sending the view angle adjustment request through the data transmission channel;
the receiving at least two paths of video data includes:
and receiving the at least two paths of video data through at least two audio/video stream transmission channels.
The embodiment of the application also discloses a video processing method, which collects video data through a plurality of image collection devices arranged at the same collection point, wherein the collection angles of the plurality of image collection devices are different, and the method comprises the following steps:
providing a video page, wherein the video page is used for playing video data;
receiving a view angle adjustment request, wherein the view angle adjustment request comprises a view angle vector, and the view angle vector is determined based on a view angle adjustment instruction received by a video page;
analyzing the view angle vector to determine at least two overlay identifiers;
acquiring at least two paths of video data according to the at least two overlay identifiers;
and sending the at least two paths of video data, so that the client performs mapping processing on them according to the view angle model and outputs the view-angle-adjusted video data.
Optionally, the analyzing the view angle vector to determine at least two overlay identifiers includes:
mapping the view angle vector and determining corresponding overlay parameters;
and determining at least two overlay identifiers according to the overlay parameters.
Optionally, the method further comprises:
connecting to a client, and establishing a data transmission channel and at least one audio/video stream transmission channel;
the receiving a view angle adjustment request includes: receiving the view angle adjustment request through the data transmission channel;
the transmitting the at least two paths of video data includes: transmitting the at least two paths of video data through at least two audio/video stream transmission channels.
Optionally, the acquisition angles of adjacent image acquisition devices in the plurality of image acquisition devices are orthogonal, and the method further includes:
receiving video data from each of the plurality of image capturing devices;
determining an overlay identifier according to the acquisition angle of each image acquisition device;
and establishing a correspondence between the video data and the overlay identifiers.
The embodiment of the application also discloses a video processing method, which collects video data through a plurality of image collection devices arranged at the same collection point, wherein the collection angles of the plurality of image collection devices are different, and the method comprises the following steps:
providing a video page, wherein the video page is used for playing video data;
responding to a view angle adjustment instruction received by the video page, and determining a view angle vector;
generating a view angle adjustment request based on the view angle vector, and sending the view angle adjustment request, so that at least two paths of video data are determined according to the view angle vector and mapped according to a view angle model to obtain view-angle-adjusted video data;
receiving the view-angle-adjusted video data;
and playing the view-angle-adjusted video data on the video page.
The embodiment of the application also discloses a video processing method, which collects video data through a plurality of image collection devices arranged at the same collection point, wherein the collection angles of the plurality of image collection devices are different, and the method comprises the following steps:
Providing a video page, wherein the video page is used for playing video data;
receiving a view angle adjustment request, wherein the view angle adjustment request comprises a view angle vector, and the view angle vector is determined based on a view angle adjustment instruction received by a video page;
analyzing the view angle vector to determine at least two overlay identifiers;
acquiring at least two paths of video data according to the at least two overlay identifiers;
mapping the at least two paths of video data according to the view angle model to obtain video data with the adjusted view angle;
and transmitting the video data with the adjusted visual angle so as to output the video data with the adjusted visual angle at the client.
The embodiment of the application also discloses a video processing method, which collects video data through a plurality of image collection devices arranged at the same collection point, wherein the collection angles of the plurality of image collection devices are different, and the method comprises the following steps:
responding to a view angle adjustment instruction received by a video page, and determining a view angle vector;
analyzing the view angle vector to determine at least two overlay identifiers;
acquiring at least two paths of video data according to the at least two overlay identifiers;
and mapping the at least two paths of video data according to the view angle model to obtain video data with the adjusted view angle, so as to output the video data with the adjusted view angle on a video page.
The embodiment of the application also discloses electronic equipment, which comprises: a processor; and a memory having executable code stored thereon that, when executed by the processor, performs a method as described in embodiments of the present application.
One or more machine-readable media having stored thereon executable code that, when executed by a processor, performs a method as described in embodiments of the present application are also disclosed.
Compared with the prior art, the embodiment of the application has the following advantages:
in the embodiments of the present application, a video page for playing video data is provided; a view angle vector is determined in response to a view angle adjustment instruction received by the video page; a view angle adjustment request is generated based on the view angle vector and sent; and at least two paths of collected video data can then be received, the video data being collected by a plurality of image collecting devices disposed at the same collection point, with different collection angles, each device collecting one path of video data. At least two paths of video data can thus be determined based on the viewing angle, so that the user can freely adjust the viewing angle. The at least two paths of video data are then mapped according to the view angle model to obtain the corresponding output video data, which are played; rendering of the at least two paths of video data is therefore performed at the client, reducing dependence on server-side hardware resources and improving the user experience.
Drawings
FIG. 1 is an interactive schematic diagram of an embodiment of a video processing method of the present application;
FIG. 2 is a schematic diagram of an example cubic view angle model according to an embodiment of the present application;
FIG. 3 is a flowchart of steps on a client side of an embodiment of a video processing method of the present application;
FIG. 4 is a flowchart illustrating steps of a server side of an embodiment of a video processing method according to the present application;
FIG. 5 is a flowchart of steps on the client side of an alternative embodiment of a video processing method of the present application;
FIG. 6 is a flowchart illustrating steps at a server side of an alternative embodiment of a video processing method according to the present application;
FIG. 7 is a flowchart of steps on a client side of another video processing method embodiment of the present application;
FIG. 8 is a flowchart illustrating steps at a server side of another embodiment of a video processing method according to the present application;
FIG. 9 is a flowchart of steps of another embodiment of a video processing method of the present application;
fig. 10 is a schematic structural view of an exemplary apparatus provided in one embodiment of the present application.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
Embodiments of the present application can be applied to various video-based processing scenarios, such as online video exhibitions, multi-site video conferences, video teaching, and other scenarios in which video is the main content. The embodiments provide a panoramic video: a collection end acquires multiple paths of video data, the server transmits the several paths a given user needs based on that user's viewing angle, and the video data are rendered and displayed at the client, so the server does not have to process video separately for each user, saving server-side hardware and other resources.
The video processing system of this embodiment includes a video acquisition end for acquiring video data, a server for managing the video data, and a client for playing the video data. The video acquisition end includes a plurality of image acquisition devices, such as cameras, all arranged at the same acquisition point but with different acquisition angles, where the acquisition angles of adjacent devices adjoin but do not overlap. Each image acquisition device acquires one path of video data, and the multiple paths together form the video watched by the user.
The image acquisition end is arranged in the environment whose video needs to be captured, with a plurality of image acquisition devices such as cameras forming a three-dimensional video acquisition scene. The cameras are located at the same acquisition point of the world coordinate system in the environment, denoted point C, but face different directions, that is, their acquisition angles differ. In one example, six cameras are arranged at point C, facing the six faces of a cube built around point C, thereby capturing video data for each face. The world coordinate system is the absolute coordinate system of the system; the position of every object is determined relative to its origin.
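As a concrete illustration of such a rig (a hedged sketch; the face names and axis conventions are illustrative assumptions, not taken from the patent), the six capture directions could be tabulated as follows:

```typescript
// Six capture orientations for a cube rig centered at acquisition
// point C. Face names and axis conventions are illustrative.
type Face = "Front" | "Back" | "Left" | "Right" | "Top" | "Bottom";

// Unit direction each camera faces in the world coordinate system.
// With a 90-degree field of view per camera, adjacent frustums
// adjoin without overlapping, covering the full sphere of view.
const CAPTURE_DIRECTIONS: Record<Face, [number, number, number]> = {
  Front:  [ 0,  0, -1],
  Back:   [ 0,  0,  1],
  Left:   [-1,  0,  0],
  Right:  [ 1,  0,  0],
  Top:    [ 0,  1,  0],
  Bottom: [ 0, -1,  0],
};
```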
In this embodiment of the present application, after the positions of the cameras are set, their capture parameters, such as lens capture angle and lens range, can be adjusted so that the cameras capture images of their respective orientations with no overlapping areas and no gaps between the captured images, forming a seamless panoramic video.
After the image acquisition end acquires the multiple paths of video data, each path of video data can be transmitted to the server, and the video data are distributed to the user end requesting the video through the server. In the embodiment of the application, in order to reduce consumption of resources such as hardware and bandwidth of a cloud, panoramic rendering processing of videos is performed on a client, so that users can watch the videos conveniently.
In an embodiment of the present application, a view angle model is created in advance based on a rendering engine, the view angle model including at least one of: a cube model, a sphere model. The view angle model can be established at the client. It is a panoramic model based on which multiple paths of video data can be rendered into panoramic video data, providing panoramic video that follows the user's viewing angle. The view angle model can be any of various panorama models, such as a sphere model, a pyramid model, or a cube model. In the panoramic model, some angles are visible to the user and some are not, so the view angle model can determine the at least two paths of video data required for the user's viewing angle and render the output video data from only those paths, reducing the amount of rendering resources. This embodiment takes a view angle model formed by a sky box as an example; the sky box can be implemented with a cube model or a sphere model.
The creating the view angle model in advance based on a rendering engine includes: initializing the view angle model in advance based on the rendering engine; determining a plurality of overlay positions corresponding to the view angle model and setting corresponding overlay identifiers; and determining a texture filtering algorithm of the view angle model, and generating the corresponding view angle model based on a shader.
In this example, the view angle model is built as a sky box from a cube model. The cubic sky box can be understood as a cube that unfolds into six faces, each face corresponding to one path of video data; each of the six faces is accordingly mapped with the pictures of its video data. For the user, the viewing angle sits at the center of the three-dimensional space (the cubic sky box): the surroundings form a large cube with six planes, up, down, left, right, front, and back, and different pictures are seen as the viewing angle rotates. Video data can be provided according to the planes in which the user's viewing angle lies.
After the view angle model is determined, a rendering engine can be used to implement the sky box and construct the view angle model. The rendering engine may be any of various image rendering engines such as OpenGL, Direct3D, Metal, or Vulkan. OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The interface consists of many function calls for drawing scenes ranging from simple primitives to complex three-dimensional scenes, and is commonly used in computer-aided design (CAD), virtual reality, visualization programs, video game development, and similar fields. Direct3D is a 3D graphics API based on the Component Object Model (COM). Metal is a low-level rendering application programming interface that provides the lowest level required by software and ensures the software can run on different graphics chips; the Metal framework gives applications direct access to the device's graphics processing unit (GPU), letting them render complex scenes quickly and run compute tasks in parallel on the GPU.
Taking an OpenGL implementation of the sky-box view angle model as an example, a cube map model may be created, for example by creating a texture with the cube-map texture target (GL_TEXTURE_CUBE_MAP). Then the video pictures of the cube map are loaded; with video data as the source, each upload represents one frame of the video stream. Six texture targets can be defined according to OpenGL, one for each of the six faces of the cube. In the embodiments of the present application, to reduce wasted resources, the client does not map all six faces of the sky box; instead, only the video data of the 2-3 faces determined by the user's view angle vector are mapped.
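As a rough, hedged sketch of that setup, written against the browser's WebGL 2 API rather than desktop OpenGL (the patent treats OpenGL, Direct3D, Metal, and Vulkan as interchangeable engines; the 1024-pixel face size is an arbitrary assumption):

```typescript
// Sketch: allocate a cube-map texture whose six faces will hold the
// video pictures. Only the 2-3 faces in view are refreshed later.
function createCubeMap(gl: WebGL2RenderingContext): WebGLTexture {
  const tex = gl.createTexture();
  if (tex === null) throw new Error("failed to create texture");
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
  const faces = [
    gl.TEXTURE_CUBE_MAP_POSITIVE_X, gl.TEXTURE_CUBE_MAP_NEGATIVE_X,
    gl.TEXTURE_CUBE_MAP_POSITIVE_Y, gl.TEXTURE_CUBE_MAP_NEGATIVE_Y,
    gl.TEXTURE_CUBE_MAP_POSITIVE_Z, gl.TEXTURE_CUBE_MAP_NEGATIVE_Z,
  ];
  for (const face of faces) {
    // Placeholder storage until a video frame arrives for this face.
    gl.texImage2D(face, 0, gl.RGBA, 1024, 1024, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
  }
  // Texture filtering parameters for the cube texture.
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return tex;
}
```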
After map loading is completed, the filtering parameters of the cube texture are set, and the OpenGL shader programs (Shaders) are implemented. For example, in the Vertex Shader, the w-component of the output coordinate quad is held constant at 1.0 so that the map of the cubic sky box is always visible. A Fragment Shader can also be provided to sample the video data.
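For illustration only, a sky-box shader pair in GLSL ES 3.00 (the browser dialect of OpenGL shading) might look as follows; this is the common textbook formulation, not the patent's source. One widespread way to keep the sky box "always visible" is the .xyww swizzle, which pins the post-divide depth to the far plane:

```typescript
// Illustrative sky-box shaders; not taken from the patent.
const vertexShaderSrc = `#version 300 es
in vec3 aPos;
uniform mat4 uView;        // rotation-only view matrix from the view angle vector
uniform mat4 uProjection;
out vec3 vTexDir;
void main() {
  vTexDir = aPos;          // cube vertex doubles as the sampling direction
  vec4 pos = uProjection * uView * vec4(aPos, 1.0);
  gl_Position = pos.xyww;  // depth becomes 1.0 after the perspective divide
}`;

const fragmentShaderSrc = `#version 300 es
precision mediump float;
in vec3 vTexDir;
uniform samplerCube uSkybox; // faces refreshed each frame from the video streams
out vec4 fragColor;
void main() {
  fragColor = texture(uSkybox, vTexDir);
}`;
```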
In this way, view angle models such as the sky box can be implemented at the client, and the corresponding video data can subsequently be displayed through mapping and rendering based on the view angle model.
Referring to fig. 1, an interactive schematic diagram of an embodiment of a video processing method of the present application is shown.
Step 102, a transmission channel is established between the client and the server.
Two types of transmission channels are established between the client and the server. The first is a control data transmission channel for transmitting control commands; the second is an audio/video data transmission channel for transmitting audio/video streams, and there may be at least one such channel. Thus, 2-3 audio/video stream transmission channels can be established between the client and the server to carry the multiple audio/video streams. For example, the client and the server may establish a connection and set up transmission channels based on Web Real-Time Communication (WebRTC), creating a control Data Channel and audio/video media channels (Audio Media Channel/Video Media Channel). The user's interaction commands, such as the view angle vector determined from the user's adjustment, are transmitted through the data channel, and the audio/video data are transmitted through the media channels. WebRTC allows network peers to establish a Peer-to-Peer connection without an intermediary, enabling the transmission of video streams, audio streams, and/or any other data.
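A minimal client-side sketch of the two channel types (a hedged illustration: signaling is application-specific and omitted, and the channel name and message shape are assumptions, not the patent's protocol):

```typescript
// One RTCPeerConnection carries both the control data channel and the
// incoming audio/video media tracks.
const pc = new RTCPeerConnection();

// Control channel for interaction commands such as view angle vectors.
const control = pc.createDataChannel("control");
control.onopen = () => {
  control.send(JSON.stringify({ type: "viewAngle", vector: [0.3, 0.1, -0.9] }));
};

// Each incoming track is one path of video data; attaching it to a
// <video> element makes its frames usable as texture sources.
pc.ontrack = (event) => {
  const video = document.createElement("video");
  video.srcObject = new MediaStream([event.track]);
  void video.play();
};
```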
Step 104, the server provides the page data to the client.
Step 106, the client displays the video page based on the page data.
After the server side establishes connection with the client side, page data can be provided for the client side, so that the client side renders the page data and displays the corresponding video page. The transmission channel may be established after the page data is provided, which is not limited in the embodiment of the present application.
Step 108, the client determines a view angle vector in response to the view angle adjustment instruction received by the video page.
After the video page is displayed, video data can be shown from a default viewing angle: the server transmits the at least two paths of video data corresponding to the default angle to the client, where they are rendered and played. While watching through the client, the user can adjust the viewing angle to see the picture from the desired angle. The user may adjust the angle in a number of ways: in some examples a view angle adjustment control is provided in the video page, through which the user adjusts the angle; in other examples the user adjusts it through peripherals (external devices) such as a keyboard, mouse, or handle. Accordingly, the video page receives the view angle adjustment instruction generated by the control, peripheral, or the like, and determines the adjusted view angle vector in response.
The determining a view angle vector in response to a view angle adjustment instruction received by the video page includes: acquiring view angle adjustment information in response to the view angle adjustment instruction received by the video page; and converting the view angle adjustment information to determine the corresponding view angle vector. The receiving a view angle adjustment instruction in response to the video page includes at least one of: receiving the view angle adjustment instruction in response to triggering of the view angle adjustment control in the video page; and receiving the corresponding view angle adjustment instruction in response to an adjustment operation of an external device associated with the video page.
The user can adjust the viewing angle through the view angle adjustment control, and the corresponding view angle adjustment instruction is received in response to triggering of that control. The view angle adjustment instruction carries view angle adjustment information, such as an adjustment angle. The view angle adjustment control may be a control on the video page, an invisible control operated through a peripheral such as a mouse, or a sensor such as a gyroscope. For example, when a user watches video on a mobile terminal such as a mobile phone, the viewing angle can be adjusted by changing the phone's angle and position, and a sensor such as the gyroscope acts as the control that responds to the adjustment and generates the view angle adjustment instruction. In other examples, the viewing angle may be adjusted directly with a keyboard, mouse, handle, or the like, and the corresponding instruction is received in response to the adjustment operation of the external device associated with the video page; for a terminal such as a notebook, the user can adjust the viewing angle with the mouse, and the corresponding instruction is received in response to that operation. The view angle adjustment information may be the adjusted viewing angle or viewing angle offset information relative to the angle before adjustment, such as an adjusted angle or an offset angle. In other examples, it may be the amount of peripheral movement, such as the distance or angle the mouse moved; the viewing angle is then determined from this physical quantity, and the adjusted view angle vector is determined from the current view angle vector and the adjusted angle or offset angle.
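As a hedged example of this conversion, a mouse drag can be accumulated into yaw and pitch and turned into a unit view angle vector; the sensitivity and clamp limits below are illustrative assumptions:

```typescript
let yaw = 0;   // radians around the vertical axis
let pitch = 0; // radians, clamped so the view cannot flip over the poles

function onMouseDrag(dx: number, dy: number): [number, number, number] {
  const sensitivity = 0.005; // assumed radians per pixel of mouse travel
  yaw += dx * sensitivity;
  pitch = Math.max(-1.5, Math.min(1.5, pitch + dy * sensitivity));
  // Spherical-to-Cartesian conversion yields the unit view angle vector.
  return [
    Math.cos(pitch) * Math.sin(yaw),
    Math.sin(pitch),
    -Math.cos(pitch) * Math.cos(yaw),
  ];
}
```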
The viewing angle can thus be adjusted at the client and the corresponding view angle vector determined.
Step 110, the client generates a view angle adjustment request based on the view angle vector, and sends the view angle adjustment request.
After the view angle vector is determined, a view angle adjustment request can be generated, the view angle vector is carried in the view angle adjustment request, and then the view angle adjustment request is sent to the server. Wherein, the client can send the view angle adjustment request through the data transmission channel.
Step 112, the server analyzes the view angle vector and determines at least two overlay identifiers.
In this embodiment of the present application, a plurality of image capturing devices such as cameras are disposed at the capturing position, for example point C, and the video data captured by each device correspond to one face of the unfolded view angle model; for the cubic sky-box view angle model described above, six image capturing devices are arranged accordingly. That is, the number of image capturing devices matches the number of faces of the unfolded view angle model, each device can correspond to an overlay identifier, and the video data it captures are mapped to the face named by that identifier for the required processing. The server can set a corresponding overlay identifier for the video data collected by each image acquisition device, or treat the overlay identifier as the value of an attribute of the device, and so on.
The server receives video data from each of the plurality of image acquisition devices, determines an overlay identifier according to each device's acquisition angle, and establishes the correspondence between the video data and the overlay identifiers. Once the image acquisition devices are set up and running, the server receives their video data, determines the corresponding overlay identifier from each device's acquisition angle, and thereby establishes a correspondence through which the required paths of video data can be looked up by overlay identifier.
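A sketch of how the server might keep this correspondence, assuming each camera registers the capture direction it was configured with (the data structures and face names are illustrative, not from the patent):

```typescript
interface CameraStream {
  cameraId: string;
  direction: [number, number, number]; // configured capture direction
}

// Overlay identifier (face name) -> the camera stream mapped to it.
const streamsByFace = new Map<string, CameraStream>();

function registerCamera(cam: CameraStream): void {
  // The dominant axis of the capture direction selects the face.
  const [x, y, z] = cam.direction;
  const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
  const face =
    ax >= ay && ax >= az ? (x > 0 ? "Right" : "Left") :
    ay >= az             ? (y > 0 ? "Top" : "Bottom") :
                           (z > 0 ? "Back" : "Front");
  streamsByFace.set(face, cam);
}
```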
After receiving the view angle adjustment request, the server can take the view angle vector from the request and analyze it to determine the range of video visible within that viewing angle, thereby determining the overlay identifiers of the several paths of video data required.
The analyzing the view angle vector to determine at least two overlay identifiers includes: mapping the view angle vector and determining corresponding overlay parameters; and determining at least two overlay identifiers according to the overlay parameters. Mapping the view angle vector means mapping it, starting from the point C where the image acquisition devices are located, into a vector in world coordinates, and determining the planes that vector involves, i.e. the faces of the view angle model in which the viewing angle lies, as the overlay parameters. The corresponding overlay identifiers can then be determined from the overlay parameters; a user's viewing angle typically corresponds to 2-3 faces, so 2-3 overlay identifiers are obtained.
FIG. 2 shows a schematic diagram of view angle model processing. If the view angle vector is vector 1, corresponding to view angle 1, two overlay identifiers can be determined: Left and Back. If it is vector 2, corresponding to view angle 2, three overlay identifiers can be determined: Right, Back, and Bottom.
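One simple way to realize this selection is sketched below, under the assumption that a face is needed whenever the view angle vector points at it strongly enough; the threshold stands in for a real field-of-view test and is an illustrative value:

```typescript
// Returns the overlay identifiers (cube faces) a view angle vector
// touches; typically 2-3 of the 6.
function facesForViewVector(v: [number, number, number]): string[] {
  const [x, y, z] = v;
  const t = 0.25; // assumed cutoff; a full implementation would
                  // intersect the view frustum with the cube faces
  const faces: string[] = [];
  if (x >  t) faces.push("Right");
  if (x < -t) faces.push("Left");
  if (y >  t) faces.push("Top");
  if (y < -t) faces.push("Bottom");
  if (z >  t) faces.push("Back");
  if (z < -t) faces.push("Front");
  return faces;
}
```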
Step 114, the server obtains at least two paths of video data according to the at least two overlay identifiers.
After the server determines at least two overlay identifiers based on the viewing angle, the path of video data corresponding to each identifier can be determined, thereby obtaining at least two paths of video data.
Step 116, the server sends the at least two paths of video data.
The server side transmits the at least two paths of video data through at least two audio and video stream transmission channels, wherein each path of video data is transmitted through one audio and video stream transmission channel.
Step 118, the client performs mapping processing on the at least two paths of video data according to the view angle model to obtain the corresponding output video data.
A view model, such as a cube sky box model, is implemented at the client, and corresponding rendering processing of the video data is also performed at the client. After receiving at least two paths of video data sent by the server, the at least two paths of video data can be subjected to mapping processing according to the view angle model, so that each path of video data is mapped to a corresponding surface of the view angle model, and then rendering processing is performed to obtain output video data.
In an optional embodiment, the mapping the at least two paths of video data according to the view angle model to obtain the corresponding output video data includes: parsing the at least two paths of video data to obtain at least two paths of parsed video data; and performing map rendering on the at least two paths of parsed video data according to the view angle model to obtain the corresponding output video data. The client parses the at least two paths of video data, for example by video decoding, to obtain the parsed video data, then determines the face for each path according to the view angle model, maps each path onto its face, and performs rendering to obtain the corresponding output video data.
The performing map rendering on the at least two paths of parsed video data according to the view angle model includes: determining the overlay identifier corresponding to each path of video data; determining the overlay position of the corresponding path of parsed video data according to the overlay identifier; and rendering the at least two paths of parsed video data as textures at the corresponding overlay positions in the view angle model to obtain the corresponding output video data. The overlay identifier of each path determines the face, i.e. the overlay position, of that path in the view angle model. The at least two paths of parsed video data are mapped to their overlay positions, processed as the textures of those positions, and the video data are rendered based on the view angle model to obtain the output video data.
After the parsed video data are mapped to the corresponding faces of the view angle model, they can be processed as the textures of those faces, including filtering and similar processing, and the whole view angle model is then rendered to obtain the output video data.
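A per-frame sketch of this upload-and-texture step in WebGL 2 terms (face names mirror the earlier sketches; texImage2D accepts a playing video element directly as the pixel source):

```typescript
// Real WebGL 2 cube-map face targets keyed by overlay identifier.
const FACE_TARGETS: Record<string, number> = {
  Right:  WebGL2RenderingContext.TEXTURE_CUBE_MAP_POSITIVE_X,
  Left:   WebGL2RenderingContext.TEXTURE_CUBE_MAP_NEGATIVE_X,
  Top:    WebGL2RenderingContext.TEXTURE_CUBE_MAP_POSITIVE_Y,
  Bottom: WebGL2RenderingContext.TEXTURE_CUBE_MAP_NEGATIVE_Y,
  Back:   WebGL2RenderingContext.TEXTURE_CUBE_MAP_POSITIVE_Z,
  Front:  WebGL2RenderingContext.TEXTURE_CUBE_MAP_NEGATIVE_Z,
};

// Upload each decoded path onto the face named by its overlay
// identifier; only the 2-3 faces in view are received and updated.
function uploadFaces(gl: WebGL2RenderingContext, cubeTex: WebGLTexture,
                     frames: Map<string, HTMLVideoElement>): void {
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, cubeTex);
  for (const [face, video] of frames) {
    gl.texImage2D(FACE_TARGETS[face], 0, gl.RGBA,
                  gl.RGBA, gl.UNSIGNED_BYTE, video);
  }
}
```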
Step 120, the client plays the output video data on the video page.
The client can play the output video data in the video page, so that video data matching the user's real-time viewing-angle adjustments are fed back, rendered, and output. If the user keeps adjusting the viewing angle, the above processing keeps the video playing normally throughout the adjustment, with no visible gap.
In summary, a video page for playing video data is provided; a view angle vector is determined in response to a view angle adjustment instruction received by the page; a view angle adjustment request is generated based on the vector and sent; and at least two paths of collected video data are then received. The video data are collected by a plurality of image collecting devices arranged at the same collection point with different collection angles, each device collecting one path. At least two paths of video data can therefore be determined based on the viewing angle, letting the user freely adjust the angle of viewing. The at least two paths are then mapped according to the view angle model to obtain the corresponding output video data, which are played; rendering of the at least two paths thus happens at the client, reducing the dependence on server-side hardware resources and improving the user experience.
On the basis of the above embodiments, the embodiments of the present application provide steps of processing on both sides of a client and a server, respectively. Video data is acquired through a plurality of image acquisition devices arranged at the same acquisition point, wherein the acquisition angles of the plurality of image acquisition devices are different, and each image acquisition device correspondingly acquires one path of video data.
Referring to fig. 3, a flowchart of steps on a client side of an embodiment of a video processing method of the present application is shown.
Step 302, a video page is provided, the video page being used to play video data.
Step 304, a view angle vector is determined in response to a view angle adjustment instruction received by the video page.
Step 306, a view angle adjustment request is generated based on the view angle vector and sent.
Step 308, at least two paths of video data are received, the at least two paths of video data being determined based on the view angle adjustment request.
Step 310, the at least two paths of video data are mapped according to the view angle model to obtain the corresponding output video data.
Step 312, the output video data are played on the video page.
Referring to fig. 4, a flowchart of steps on a server side of an embodiment of a video processing method of the present application is shown.
Step 402, a video page is provided, the video page being used to play video data.
Step 404, a view angle adjustment request is received, the request including a view angle vector determined based on a view angle adjustment instruction received by the video page.
Step 406, the view angle vector is analyzed to determine at least two overlay identifiers.
Step 408, at least two paths of video data are obtained according to the at least two overlay identifiers.
Step 410, the at least two paths of video data are sent, so that mapping processing is performed on the at least two paths of video data according to the view angle model at the client, and video data for adjusting the view angle are output.
When a user watches video data at the client, the 2-3 paths of video data appropriate to the viewing angle generated by the user's interaction can be received simultaneously; a view angle model such as a sky box is built through a client processor such as the GPU for 3D rendering, presenting the picture of the viewing angle the user expects.
On the basis of the above embodiments, the embodiments of the present application provide the processing steps on the client side and the server side, respectively, and the channels can be combined for data transmission. Video data are collected by a plurality of image collection devices arranged at the same collection point, where the collection angles differ and each device collects one path of video data. The acquisition angles of the devices adjoin but do not overlap; that is, the acquisition angles of adjacent image acquisition devices are orthogonal.
The position and acquisition angle of each camera are predetermined, and the spatial geometric relationship between cameras is fixed. The cubic sky-box view angle model can use an orthogonal arrangement of 6 cameras aligned respectively with the front, back, left, right, up, and down directions of a virtual cube whose origin is the scene center; this covers the whole scene while greatly reducing the number of cameras and video streams. For example, a 3D rendering program may use 6 or more (inclusive) cameras to obtain 6 or more paths of rendered pictures, the cameras orthogonally covering every part of the scene; for the cubic sky-box view angle model, 6 cameras orthogonally cover the front, back, left, right, up, and down directions of the scene.
Referring to fig. 5, a flowchart of steps on a client side of an alternative embodiment of a video processing method of the present application is shown.
Step 502, connecting with a server, and establishing a data transmission channel and at least one audio/video stream transmission channel.
Step 504, a video page is provided.
The view angle model is initialized in advance based on the rendering engine; a plurality of overlay positions corresponding to the view angle model are determined and corresponding overlay identifiers are set; and the texture filtering algorithm of the view angle model is determined and the corresponding view angle model is generated based on the shader. A view angle model can thus be generated; after video data are received, 3D rendering is performed based on the model and the corresponding 3D video data are played in the video page. When playback starts, multiple paths of video data can be obtained based on a default viewing angle and played after rendering through the view angle model.
Step 506, acquiring view angle adjustment information in response to the view angle adjustment instruction received by the video page.
Wherein the step of receiving a viewing angle adjustment instruction in response to the video page comprises at least one of: receiving a visual angle adjustment instruction in response to the trigger of the visual angle adjustment control in the video page; and responding to the adjustment operation of the external equipment associated with the video page, and receiving a corresponding visual angle adjustment instruction.
By adjusting with the keyboard, mouse, handle, or gyroscope, the user gets smooth view angle transitions in the video scene, with highly responsive interaction, accurate feedback, and a natural experience, as if operating a very flexible camera.
Step 508, converting the view angle adjustment information to determine a corresponding view angle vector.
Step 510, generating a viewing angle adjustment request based on the viewing angle vector, and sending the viewing angle adjustment request through the data transmission channel.
Step 512, receiving at least two paths of video data through the at least two audio/video stream transmission channels.
Step 514, parsing the at least two paths of video data to obtain at least two paths of parsed video data.
Step 516, performing map rendering on the at least two paths of parsed video data according to the view angle model to obtain the corresponding output video data.
The performing map rendering on the at least two paths of parsed video data according to the view angle model to obtain the corresponding output video data includes: determining the overlay identifier corresponding to each path of video data; determining the overlay position of the corresponding path of parsed video data according to the overlay identifier; and rendering the at least two paths of parsed video data as textures at the corresponding overlay positions in the view angle model to obtain the corresponding output video data.
Step 518, playing the output video data on the video page.
Referring to fig. 6, a flowchart of steps on a server side of an alternative embodiment of a video processing method of the present application is shown.
Step 602, connecting with a client, and establishing a data transmission channel and at least one audio/video stream transmission channel.
Step 604, a video page is provided.
Step 606, receiving a viewing angle adjustment request through the data transmission channel.
Step 608, mapping the view angle vector to determine corresponding overlay parameters.
Step 610, determining at least two overlay identifiers according to the overlay parameters.
Step 612, obtaining at least two paths of video data according to the at least two overlay identifiers.
Step 614, sending the at least two paths of video data through the at least two audio/video stream transmission channels.
In this way, the 3D rendering capability of the GPU can be invoked to splice the multiple video streams, and with the multi-path image acquisition devices and acquisition scheduling described above, the composite effect of the multiple streams is seamless, so the user can watch smoothly from any viewing angle.
In the above embodiments, creation and rendering of the view angle model are performed on the client side as an example. In other examples, creation and rendering of the view angle model may also be performed on the server side. Video data are collected by a plurality of image collection devices arranged at the same collection point, where the collection angles of the devices differ and each device collects one path of video data. The acquisition angles of adjacent image acquisition devices are orthogonal.
Referring to fig. 7, a flowchart of steps on a client side of another video processing method embodiment of the present application is shown.
Step 702, a video page is provided, the video page being used to play video data.
The client connects to the server side and establishes a data transmission channel and an audio/video stream transmission channel.
Step 704, a view angle vector is determined in response to a view angle adjustment instruction received by the video page.
The determining a view vector in response to a view adjustment instruction received by the video page includes: responding to the visual angle adjustment instruction received by the video page, and acquiring visual angle adjustment information; and converting the visual angle adjustment information to determine a corresponding visual angle vector. The step of receiving a viewing angle adjustment instruction in response to the video page includes at least one of: receiving a visual angle adjustment instruction in response to the trigger of the visual angle adjustment control in the video page; and responding to the adjustment operation of the external equipment associated with the video page, and receiving a corresponding visual angle adjustment instruction.
Step 706, a view angle adjustment request is generated based on the view angle vector and sent, so that at least two paths of video data are determined according to the view angle vector and mapped according to the view angle model to obtain the view-angle-adjusted video data.
The view angle adjustment request is sent through the data transmission channel.
Step 708, the view-angle-adjusted video data are received.
The video data are received through the audio/video stream transmission channel.
Step 710, the view-angle-adjusted video data are played on the video page.
Referring to fig. 8, a flowchart of steps on a server side of another video processing method embodiment of the present application is shown.
Step 802, a video page is provided, the video page being used to play video data.
The server connects with the client and establishes a data transmission channel and at least one audio/video stream transmission channel.
A view angle model may also be created based on the rendering engine, which includes: initializing the view angle model in advance based on the rendering engine; determining a plurality of overlay positions corresponding to the view angle model and setting corresponding overlay identifiers; and determining the texture filtering algorithm of the view angle model and generating the corresponding view angle model based on the shader.
Step 804, a view angle adjustment request is received, the request including a view angle vector determined based on the view angle adjustment instruction received by the video page.
The view angle adjustment request is received through the data transmission channel.
Step 806, the view angle vector is analyzed to determine at least two overlay identifiers.
The analyzing the view angle vector to determine at least two overlay identifiers includes: mapping the view angle vector and determining corresponding overlay parameters; and determining at least two overlay identifiers according to the overlay parameters.
Step 808, at least two paths of video data are obtained according to the at least two overlay identifiers.
Step 810, the at least two paths of video data are mapped according to the view angle model to obtain the view-angle-adjusted video data.
The mapping the at least two paths of video data according to the view angle model to obtain the corresponding output video data includes: parsing the at least two paths of video data to obtain at least two paths of parsed video data; and performing map rendering on the at least two paths of parsed video data according to the view angle model to obtain the corresponding output video data.
The performing map rendering on the at least two paths of parsed video data according to the view angle model to obtain the corresponding output video data includes: determining the overlay identifier corresponding to each path of video data; determining the overlay position of the corresponding path of parsed video data according to the overlay identifier; and rendering the at least two paths of parsed video data as textures at the corresponding overlay positions in the view angle model to obtain the corresponding output video data.
Step 812, sending the video data with the adjusted viewing angle, so as to output the video data with the adjusted viewing angle at the client.
The video data are sent through the audio/video stream transmission channel.
After the viewing angle is adjusted, at least two paths of video data can be determined based on the adjusted angle; 3D processing such as mapping and rendering is then performed on the server side to obtain the view-angle-adjusted 3D video data, which are transmitted to the client for playing. Rendering can thus use the server's hardware, and the corresponding video data can be multiplexed for users who share the same viewing angle.
On the basis of the above embodiments, the embodiments further provide a video processing method capable of rendering multiple paths of video data based on a viewing angle, so as to provide video data conforming to a viewing angle of a user. The video data are collected through a plurality of image collecting devices arranged at the same collecting point, wherein the collecting angles of the image collecting devices are different, and each image collecting device correspondingly collects one path of video data. The acquisition angles of adjacent image acquisition devices in the plurality of image acquisition devices are orthogonal.
Referring to fig. 9, a flowchart of steps of another video processing method embodiment of the present application is shown.
Step 902, a view angle vector is determined in response to a view angle adjustment instruction received by a video page.
The determining a view vector in response to a view adjustment instruction received by the video page includes: responding to the visual angle adjustment instruction received by the video page, and acquiring visual angle adjustment information; and converting the visual angle adjustment information to determine a corresponding visual angle vector. The step of receiving a viewing angle adjustment instruction in response to the video page includes at least one of: receiving a visual angle adjustment instruction in response to the trigger of the visual angle adjustment control in the video page; and responding to the adjustment operation of the external equipment associated with the video page, and receiving a corresponding visual angle adjustment instruction.
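As a sketch of the conversion from view angle adjustment information to a view angle vector, assuming mouse-drag deltas as the adjustment information; the sensitivity constant and the pitch clamp are assumptions of this sketch:

```typescript
// Accumulate yaw/pitch from drag deltas and convert to a unit view vector.
interface Vec3 { x: number; y: number; z: number; }

let yaw = 0;    // radians, rotation around the vertical axis
let pitch = 0;  // radians, clamped to avoid flipping over the poles

function onDrag(dxPixels: number, dyPixels: number, sensitivity = 0.005): Vec3 {
  yaw += dxPixels * sensitivity;
  pitch = Math.max(-Math.PI / 2 + 0.01,
                   Math.min(Math.PI / 2 - 0.01, pitch - dyPixels * sensitivity));
  // Spherical-to-Cartesian: the result is the view angle vector carried
  // in the view angle adjustment request.
  return {
    x: Math.cos(pitch) * Math.sin(yaw),
    y: Math.sin(pitch),
    z: Math.cos(pitch) * Math.cos(yaw),
  };
}
```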
Step 904: parse the view angle vector to determine at least two overlay identifiers.
Parsing the view angle vector to determine at least two overlay identifiers includes the following steps: mapping the view angle vector to determine corresponding overlay parameters; and determining at least two overlay identifiers according to the overlay parameters.
Step 906: obtain at least two paths of video data according to the at least two overlay identifiers.
Step 908: perform mapping processing on the at least two paths of video data according to the view angle model to obtain view-angle-adjusted video data, so that the view-angle-adjusted video data is output on the video page.
Performing mapping processing on the at least two paths of video data according to the view angle model to obtain the output video data includes the following steps: parsing the at least two paths of video data to obtain at least two paths of parsed video data; and performing map rendering on the parsed video data according to the view angle model to obtain the corresponding output video data.
Performing map rendering on the at least two paths of parsed video data according to the view angle model includes the following steps: determining the overlay identifier corresponding to each path of video data; determining the overlay position of the corresponding parsed video data according to the overlay identifier; and rendering the at least two paths of parsed video data as textures at the corresponding overlay positions in the view angle model to obtain the output video data.
In this way, the 3D rendering capability of the GPU can be invoked to splice the multiple video streams; by configuring multiple image acquisition devices and coordinating acquisition scheduling, the composite of the multiple video streams is seamless, so the user can watch smoothly at any viewing angle.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments and that the acts referred to are not necessarily required by the embodiments of the present application.
On the basis of the above embodiments, video data is collected by a plurality of image acquisition devices arranged at the same acquisition point, where the acquisition angles of the devices differ and each device collects one path of video data. This embodiment also provides a video processing apparatus, applied to the electronic device of the client, the apparatus comprising:
the video page module, configured to provide a video page for playing video data;
the view angle adjustment module, configured to determine a view angle vector in response to the view angle adjustment instruction received by the video page;
the request module, configured to generate a view angle adjustment request based on the view angle vector and send the view angle adjustment request;
the video receiving module, configured to receive at least two paths of video data, the at least two paths of video data being determined based on the view angle adjustment request;
the map rendering module, configured to perform mapping processing on the at least two paths of video data according to the view angle model to obtain the corresponding output video data;
and the playing module, configured to play the output video data on the video page.
In summary, a video page for playing video data is provided; in response to a view angle adjustment instruction received by the video page, a view angle vector is determined, a view angle adjustment request is generated based on the view angle vector and sent, and at least two paths of collected video data are then received. The video data is collected by a plurality of image acquisition devices arranged at the same acquisition point, with differing acquisition angles and one path of video data per device. At least two paths of video data can thus be determined based on the viewing angle, so the user can freely adjust the viewing angle for viewing. The at least two paths of video data are then mapped according to the view angle model to obtain and play the output video data, so that rendering of the at least two paths of video data happens at the client, dependence on server-side hardware resources is reduced, and user experience is improved.
The acquisition angles of adjacent image acquisition devices among the plurality of image acquisition devices are orthogonal.
The view angle adjustment module is configured to acquire view angle adjustment information in response to the view angle adjustment instruction received by the video page, and to convert the view angle adjustment information to determine the corresponding view angle vector.
The view angle adjustment module is configured to receive a view angle adjustment instruction in response to a trigger of the view angle adjustment control in the video page, and to receive a corresponding view angle adjustment instruction in response to an adjustment operation of an external device associated with the video page.
The map rendering module is configured to parse the at least two paths of video data to obtain at least two paths of parsed video data, and to perform map rendering on the parsed video data according to the view angle model to obtain the corresponding output video data.
The map rendering module is configured to determine the overlay identifier corresponding to each path of video data, determine the overlay position of the corresponding parsed video data according to the overlay identifier, and render the at least two paths of parsed video data as textures at the corresponding overlay positions in the view angle model to obtain the output video data.
The apparatus further comprises: a view angle model creation module, configured to create a view angle model in advance based on a rendering engine, the view angle model comprising at least one of: a cube model, a sphere model.
The view angle model creation module is configured to initialize the view angle model in advance based on a rendering engine; determine a plurality of overlay positions of the view angle model and set the corresponding overlay identifiers; and determine the texture filtering algorithm of the view angle model and generate the corresponding view angle model based on a shader.
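A minimal sketch of such view angle model creation, assuming a cube model; the face names, texture-unit assignment, and the linear-filtering choice are assumptions of this sketch:

```typescript
// One texture slot per cube face, each tagged with its overlay identifier.
type FaceId = "front" | "back" | "left" | "right" | "up" | "down";

interface FaceSlot {
  id: FaceId;                        // overlay identifier of this face
  textureUnit: number;               // GPU texture unit bound to the face
  filtering: "linear" | "nearest";   // texture filtering algorithm
}

function createCubeViewModel(): FaceSlot[] {
  const ids: FaceId[] = ["front", "back", "left", "right", "up", "down"];
  // Linear filtering softens the seam where two adjacent face textures meet.
  return ids.map((id, i) => ({ id, textureUnit: i, filtering: "linear" }));
}
```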
The apparatus further comprises: a connection module, configured to connect with the server and establish a data transmission channel and at least one audio/video stream transmission channel;
the request module is configured to send the view angle adjustment request through the data transmission channel;
the video receiving module is configured to receive the at least two paths of video data through at least two audio/video stream transmission channels.
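As an illustration of the channel split, here is a sketch in which the data transmission channel is a WebSocket carrying small control messages, while the video streams travel on separate audio/video transport channels; the endpoint URL and the message shape are assumptions of this sketch:

```typescript
// Control-plane message carrying the view angle vector.
interface ViewAngleRequest {
  type: "view-adjust";
  vector: { x: number; y: number; z: number };
}

const control = new WebSocket("wss://example.com/control"); // assumed endpoint

function sendViewAdjust(vector: { x: number; y: number; z: number }): void {
  const req: ViewAngleRequest = { type: "view-adjust", vector };
  control.send(JSON.stringify(req)); // small, low-latency control path
}
```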
On the basis of the above embodiments, video data is collected by a plurality of image acquisition devices arranged at the same acquisition point, where the acquisition angles of the devices differ and each device collects one path of video data. This embodiment also provides a video processing apparatus, applied to the electronic device of the server, the apparatus comprising:
the video providing module, configured to provide a video page for playing video data;
the request receiving module, configured to receive a view angle adjustment request, the view angle adjustment request including a view angle vector determined based on the view angle adjustment instruction received by the video page;
the view angle parsing module, configured to parse the view angle vector to determine at least two overlay identifiers, and to obtain at least two paths of video data according to the at least two overlay identifiers;
and the video feedback module, configured to send the at least two paths of video data, which are mapped at the client according to the view angle model to output the view-angle-adjusted video data.
The view angle parsing module is configured to map the view angle vector to determine corresponding overlay parameters, and to determine at least two overlay identifiers according to the overlay parameters.
The apparatus further comprises: a client connection module, configured to connect with a client and establish a data transmission channel and at least one audio/video stream transmission channel;
the request receiving module is configured to receive the view angle adjustment request through the data transmission channel;
the video feedback module is configured to send the at least two paths of video data through at least two audio/video stream transmission channels.
The acquisition angles of adjacent image acquisition devices among the plurality of image acquisition devices are orthogonal. The apparatus further comprises: a video acquisition module, configured to receive video data from the plurality of image acquisition devices respectively; determine an overlay identifier according to the acquisition angle of each image acquisition device; and establish the correspondence between each path of video data and its overlay identifier.
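A sketch of this registration, assuming each camera's acquisition angle resolves to exactly one axis-aligned face of the cube model; the names and the stream-handle type are illustrative:

```typescript
// Server-side registry mapping overlay identifiers to video stream handles.
type FaceId = "front" | "back" | "left" | "right" | "up" | "down";

const streamByFace = new Map<FaceId, string>(); // overlay id -> stream handle

function registerCamera(faceId: FaceId, streamHandle: string): void {
  // Adjacent acquisition angles are orthogonal, so each camera maps to one
  // face, and that face is the overlay identifier of its video stream.
  streamByFace.set(faceId, streamHandle);
}

function streamsForFaces(faces: FaceId[]): string[] {
  return faces
    .map((f) => streamByFace.get(f))
    .filter((s): s is string => s !== undefined);
}
```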
In the embodiment of the application, the positions and acquisition angles of the cameras are preset, and the spatial geometric relationship between the cameras is fixed. The cube-skybox view angle model can adopt six orthogonally arranged cameras, aligned respectively with the front, back, left, right, up and down directions of a virtual cube whose origin is the center of the scene; this covers the whole scene while greatly reducing the number of cameras and video streams. For example, in a 3D rendering program, six or more cameras produce six or more paths of rendered pictures, with the cameras orthogonally covering every part of the scene; in the cube-skybox view angle model, six cameras orthogonally cover the front, back, left, right, up and down directions of the scene.
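For reference, the six orthogonal capture directions of such a cube-skybox arrangement can be written out as unit vectors (a sketch; the axis conventions are an assumption):

```typescript
// One camera per face, all placed at the scene origin; each direction is
// orthogonal to its four neighbors and opposite to the remaining one.
const captureDirections: Record<string, [number, number, number]> = {
  front: [0, 0, 1],
  back: [0, 0, -1],
  right: [1, 0, 0],
  left: [-1, 0, 0],
  up: [0, 1, 0],
  down: [0, -1, 0],
};
```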
The 3D rendering capability of the GPU can be invoked to splice the multiple video streams; by configuring multiple image acquisition devices and coordinating acquisition scheduling, the composite of the multiple video streams is seamless, and the user can watch smoothly at any viewing angle.
Moreover, smooth view angle transitions in the video scene can be achieved through keyboard, mouse, gamepad, or gyroscope adjustments; user interaction is real-time, feedback is accurate, and the experience feels natural, as if the user exclusively controlled a highly flexible camera.
The embodiments of the application also provide a non-volatile readable storage medium storing one or more modules (programs); when the one or more modules are applied to a device, the device can be caused to execute the instructions of each method step in the embodiments of the application.
Embodiments of the present application provide one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause an electronic device to perform a method as described in one or more of the above embodiments. In this embodiment of the present application, the electronic device includes a server, a terminal device, and other devices.
Embodiments of the present disclosure may be implemented as an apparatus using any suitable hardware, firmware, software, or any combination thereof in a desired configuration, which may include a server (cluster), a terminal, or the like. Fig. 10 schematically illustrates an example apparatus 1000 that may be used to implement various embodiments described herein.
For one embodiment, fig. 10 illustrates an example apparatus 1000 having one or more processors 1002, a control module (chipset) 1004 coupled to at least one of the processor(s) 1002, a memory 1006 coupled to the control module 1004, a non-volatile memory (NVM)/storage 1008 coupled to the control module 1004, one or more input/output devices 1010 coupled to the control module 1004, and a network interface 1012 coupled to the control module 1004.
The processor 1002 may include one or more single-core or multi-core processors, and may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the apparatus 1000 can be used as a server, a terminal, or the like in the embodiments of the present application.
In some embodiments, the apparatus 1000 can include one or more computer-readable media (e.g., memory 1006 or NVM/storage 1008) having instructions 1014, and one or more processors 1002 configured, in combination with the one or more computer-readable media, to execute the instructions 1014 to implement the modules and perform the actions described in this disclosure.
For one embodiment, the control module 1004 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1002 and/or to any suitable device or component in communication with the control module 1004.
The control module 1004 may include a memory controller module to provide an interface to the memory 1006. The memory controller module may be a hardware module, a software module, and/or a firmware module.
Memory 1006 may be used, for example, to load and store data and/or instructions 1014 for device 1000. For one embodiment, the memory 1006 may include any suitable volatile memory, such as a suitable DRAM. In some embodiments, the memory 1006 may comprise double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, the control module 1004 may include one or more input/output controllers to provide an interface to the NVM/storage 1008 and the input/output device(s) 1010.
For example, NVM/storage 1008 may be used to store data and/or instructions 1014. NVM/storage 1008 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1008 may include storage resources that are physically part of a device on which apparatus 1000 is installed, or may be accessible by the device without necessarily being part of it. For example, NVM/storage 1008 may be accessed over a network via input/output device(s) 1010.
Input/output device(s) 1010 may provide an interface for apparatus 1000 to communicate with any other suitable device; input/output devices 1010 may include communication components, audio components, sensor components, and the like. Network interface 1012 may provide an interface for device 1000 to communicate over one or more networks, and device 1000 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, such as a communication standard like WiFi, 2G, 3G, 4G, 5G, etc., or a combination thereof.
For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic of one or more controllers (e.g., the memory controller module) of the control module 1004. For one embodiment, at least one of the processor(s) 1002 may be packaged together with logic of one or more controllers of the control module 1004 to form a System in Package (SiP). For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die as logic of one or more controllers of the control module 1004. For one embodiment, at least one of the processor(s) 1002 may be integrated on the same die with logic of one or more controllers of the control module 1004 to form a System on Chip (SoC).
In various embodiments, the apparatus 1000 may be, but is not limited to: a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.), among other terminal devices. In various embodiments, the apparatus 1000 may have more or fewer components and/or different architectures. For example, in some embodiments, the apparatus 1000 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and a speaker.
For a detection device, a main control chip can serve as the processor or control module; sensor data, position information, and the like are stored in the memory or NVM/storage; a sensor group can serve as the input/output device; and the communication interface can include the network interface.
The embodiment of the application also provides electronic equipment, which comprises: a processor; and a memory having executable code stored thereon that, when executed, causes the processor to perform a method as described in one or more of the embodiments herein. The memory in the embodiment of the application can store various data, such as various data including target files, file and application related data, and the like, and also can include user behavior data and the like, so as to provide a data base for various processes.
Embodiments also provide one or more machine-readable media having executable code stored thereon that, when executed, cause a processor to perform a method as described in one or more of the embodiments of the present application.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications to these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the present application.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The foregoing has described in detail a video processing method, an electronic device and a storage medium provided by the present application, and specific examples have been applied herein to illustrate the principles and embodiments of the present application, and the above examples are only used to help understand the method and core ideas of the present application; meanwhile, as those skilled in the art will have modifications in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.
