Disclosure of Invention
The invention aims to provide a special effect image generation method for adding dynamic effects to data received from an external device, so as to reduce transmission power consumption and the volume of data that needs to be transmitted.
To achieve the above object, a first aspect of the present invention provides a special effect image generation method, including:
step one, a first device receives special effect image generation data sent by a second device, wherein the special effect image generation data comprises at least one view to be processed, special effect material data and special effect attribute information, and the special effect attribute information comprises position information of the special effect material in the special effect image;
and step two, the first device fuses the special effect material data with a first view and a second view corresponding to the at least one view to be processed according to the special effect attribute information, so as to generate a view with a 3D special effect, wherein the same material corresponding to the special effect material data has a horizontal parallax after being fused with the first view and the second view.
According to the method of the preferred embodiment of the present invention, the special effect image generation data includes a 2D view to be processed, and the special effect attribute information includes the number of frames of the view with 3D special effect that needs to be generated, and the second step further includes:
step 2.1, copying the 2D view to be processed into two identical views, which serve as a first view and a second view;
and step 2.2, synthesizing, at the corresponding positions and according to the position information corresponding to each frame view in the special effect attribute information, the special effect material of the size corresponding to that frame view with the first view and the second view of that frame view, wherein the synthesis comprises a pixel data change, and the pixel data change specifically comprises replacing the pixel data at the corresponding position in each view with the pixels of the special effect material data.
According to the method of the preferred embodiment of the present invention, the special effect image generation data further includes border data, and before the step 2.2, the method further includes:
adding a border to the first view or the second view according to the border data.
According to the method of the preferred embodiment of the present invention, the special effect image generation data includes a 2D video sequence, the 2D video sequence includes a plurality of frames of images to be processed, and the images to be processed are 2D images; the second step further comprises:
step 2.1, sequentially extracting each frame of image to be processed from the 2D video sequence, and processing each frame of image to be processed into a first view and a second view that are identical;
and step 2.2, fusing the special effect material data with the first view and the second view corresponding to each frame of image to be processed according to the special effect attribute information to obtain, for each frame of image to be processed in the 2D video sequence, a corresponding view with a 3D special effect.
According to the method of the preferred embodiment of the present invention, the step 2.2 further comprises:
and synthesizing, at the corresponding position and according to the position information corresponding to each frame view in the special effect attribute information, the special effect material of the size corresponding to that frame view with each view, wherein the synthesis comprises pixel data modification.
According to the method of the preferred embodiment of the present invention, before the step 2.2, the method further includes:
adding a border to the first view or the second view corresponding to each frame of the image to be processed before fusion.
According to the method of the preferred embodiment of the present invention, the special effect image generation data includes a 3D image to be processed, the 3D image to be processed includes a first view and a second view with parallax, the special effect attribute information further includes the number of frames of views with the 3D special effect to be generated, and the second step further includes:
synthesizing the special effect material with the first view and the second view respectively at the corresponding positions according to the position information, wherein the synthesis comprises a change of the pixel data in the corresponding position regions, the positions of the special effect material in the first view and the second view correspond to the parallax between the first view and the second view, and the change of the pixel data specifically comprises replacing the pixel data at the corresponding position in each view with the pixels of the special effect material data.
According to the method of the preferred embodiment of the present invention, before the second step, the method further includes:
adding a border to the first view or the second view of each frame of the 3D image before that view is fused with the special effect material.
According to the method of the preferred embodiment of the present invention, the special effect image generation data includes a 3D video sequence, the 3D video sequence includes a plurality of frames of 3D images to be processed, each frame of 3D image to be processed includes a first view and a second view with parallax, the position information included in the special effect attribute information includes, for the special effect material, a corresponding position in each sub-view of each frame of 3D image to be processed, and the second step further includes:
synthesizing the special effect material with the first view and the second view in each frame of 3D image to be processed respectively at the corresponding positions according to the position information, wherein the synthesis comprises a change of the pixel data in the corresponding position regions, the positions of the special effect material in the first view and the second view correspond to the parallax between the first view and the second view, and the change of the pixel data specifically comprises replacing the pixel data at the corresponding position in each view with the pixels of the special effect material data.
According to the method of the preferred embodiment of the present invention, before the second step, the method further includes:
adding a border to the first view or the second view of each frame of the 3D image before that view is fused with the special effect material.
In a second aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes:
a receiving unit, used for receiving special effect image generation data sent by a second device, wherein the special effect image generation data comprises at least one view to be processed, special effect material data and special effect attribute information, and the special effect attribute information comprises position information of the special effect material in the special effect image;
and a processing unit, used for fusing the special effect material data with the first view and the second view corresponding to the at least one view to be processed according to the special effect attribute information to generate a view with a 3D special effect, wherein the same material corresponding to the special effect material data has a horizontal parallax after being fused with the first view and the second view.
According to the special effect image generation method and the electronic device provided by the embodiments of the invention, the special effect image generated by another device can be restored from the special effect image generation data transmitted by that device, without transmitting the complete image file with the 3D special effect generated by the other device. Transmission power consumption can therefore be greatly reduced and transmission bandwidth saved, while the user obtains a good 3D viewing effect, thereby meeting user requirements.
Detailed Description
In order to clearly understand the above objects, features and advantages of the present invention, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and embodiments. It should be noted that the embodiments of the present application and the features of the embodiments may be combined with each other provided there is no conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below. In the embodiments of the present invention, for convenience of distinction, terms such as "first" and "second" are used to distinguish different technical features, and such terms may be replaced as necessary, and should not be construed as limiting the present invention.
The method for generating a special effect image provided by the embodiments of the present invention may be applied to stereo photography or other image processing fields, for example to a smart phone, a tablet computer or another device having a photographing function. It is preferably applied to a portable electronic device equipped with a light splitting device such as a grating, for example a 3D phone or tablet computer, but is not limited thereto; the method may also be applied to an ordinary 2D phone, in order to obtain a view with a dynamic 3D effect that can be viewed on a smartphone such as the one shown in fig. 1. For example, after a user shoots a 2D video with a smart phone, the user may want to beautify the image and add a dynamic 3D effect, for example floating feathers, snowflakes, bubbles or raindrops; the desired effect can then be obtained through the processing method provided by this embodiment.
More specifically, after one device, for example device A in fig. 1, obtains an image by shooting or downloading, adds a dynamic 3D special effect to it and stores the produced image, it may wish to share the result with another device, for example device B, so that device B can also view the same 3D dynamic special effect: a 2D picture with a 3D special effect, a 2D video with a 3D special effect, a 3D image with a 3D special effect, or a 3D video with a 3D special effect. Device A may simply transmit the complete generated file to device B, so that device B can play the received file directly. Alternatively, after creating the image with the 3D dynamic special effect, device A can choose a storage mode in which it stores only the original image, the special effect material used, and the special effect attribute data such as the position and size-change information of the material, the number of frames, and so on. During transmission, only these data need to be sent instead of the whole file, and device B restores the image from this information after receiving it.
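Purely as an illustration of this storage and transmission mode, the following Python sketch packs the to-be-processed image, the special effect material and the special effect attribute information into a single container. The zip/JSON container layout and the field names (frame_count, disparity_px, positions, border) are assumptions made for the example and are not prescribed by the embodiments.

```python
import json
import zipfile

def pack_effect_data(background_path: str, material_path: str,
                     attributes: dict, out_path: str) -> None:
    """Pack the to-be-processed image, the effect material and the
    effect attribute information into one container for transmission."""
    with zipfile.ZipFile(out_path, "w") as z:
        z.write(background_path, arcname="background")   # 2D/3D picture or video
        z.write(material_path, arcname="material")        # e.g. a snowflake sprite
        z.writestr("attributes.json", json.dumps(attributes))

# Example attribute payload: per-frame material positions in the first view,
# a horizontal disparity for the second view, the number of frames to render,
# and optional border data for 2D/3D-compatible playback.
attributes = {
    "frame_count": 30,
    "disparity_px": 2,
    "positions": [[100, 100], [96, 108], [92, 117]],  # (row, col) per frame
    "border": None,
}
# pack_effect_data("background.jpg", "snowflake.png", attributes, "effect_pkg.zip")
```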
For example, after device A completes the production of a 2D image with a 3D dynamic special effect, only the 2D image to be processed is stored, together with the packaged special effect attribute information and material information. The electronic device receiving these data can process the image directly according to the received information to obtain a view with the same 3D dynamic effect as at the sender; because only the image to be processed and the corresponding dynamic special effect attribute information are transmitted, the amount of transmitted data can be greatly reduced.
For example, after device A finishes producing a 2D video with a 3D dynamic special effect, only the video to be processed is stored, together with the packaged special effect attribute information and material information. The electronic device B receiving these data can process the images directly according to the received information to obtain a view with the same 3D dynamic effect as at the sender; because only the 2D video to be processed and the corresponding dynamic special effect attribute information are transmitted, the amount of transmitted data can be greatly reduced.
Similarly, for example, after device A completes the production of a 3D image with a 3D dynamic special effect, only the 3D image to be processed is stored, together with the packaged special effect attribute information and material information. The electronic device B receiving these data can process the image directly according to the received information to obtain a 3D view with the same 3D dynamic effect as at the sender; because only the 3D image to be processed and the corresponding special effect attribute information are transmitted, the amount of transmitted data can be greatly reduced.
Similarly, for example, after device A finishes making a 3D video with a 3D dynamic special effect, only the video to be processed is stored, together with the packaged dynamic special effect attribute information and material information. The electronic device receiving these data can process the images directly according to the received information to obtain a 3D video with the same 3D dynamic effect as at the sender; because only the 3D video to be processed and the corresponding special effect attribute information are transmitted, the amount of transmitted data can be greatly reduced.
In view of this, a first aspect of the embodiments of the present invention provides a method for generating a special effect image. Fig. 2 is a flowchart of the method provided in this embodiment; as can be seen from fig. 2, the method may include:
201, a first device receives special effect image generation data sent by a second device, wherein the special effect image generation data comprises at least one view to be processed, special effect material data and special effect attribute information, and the special effect attribute information comprises position information of the special effect material in a special effect image;
specifically, the first device may be the electronic device B in the application scene, and the view to be processed received by the first device may be an image containing only one 2D view, a 2D video sequence containing multiple 2D images, a single frame of 3D image containing a left view and a right view, or a 3D video containing multiple 3D images; these image files serve as the background to which the 3D dynamic special effect is added.
For example, in an embodiment in which the background image source is a single 2D picture or a 3D picture, the special effect attribute data need to include the position information of the material as well as the number of frames to be generated; in an embodiment in which the background image source is a 2D video or a 3D video, the special effect attribute data do not need to include the number of frames to be generated, because the video sequence itself already contains a certain number of image frames, which is not described in detail.
202, the first device fuses the special effect material data with the first view and the second view corresponding to the at least one view to be processed according to the special effect attribute information to generate a view with a 3D special effect, wherein the same material corresponding to the special effect material data has a horizontal parallax after being fused with the first view and the second view.
After receiving the data transmitted by the electronic device A, the electronic device B parses them and reads out the to-be-processed image part, the special effect material part and the special effect attribute information part. It then restores the image based on this information, generating and playing the same image with the 3D dynamic special effect as the one created by the electronic device A.
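A matching sketch of this parsing step on device B, again assuming the hypothetical zip/JSON container from the earlier packing example:

```python
import json
import zipfile

def unpack_effect_data(pkg_path: str):
    """Read back the three parts of the special effect image generation data:
    the image to be processed, the effect material, and the attribute info."""
    with zipfile.ZipFile(pkg_path) as z:
        background = z.read("background")             # 2D/3D picture or video bytes
        material = z.read("material")                 # effect material bytes
        attributes = json.loads(z.read("attributes.json"))
    return background, material, attributes

# Depending on the background type (single 2D picture, 2D video sequence,
# 3D image, or 3D video), device B then takes the matching restore branch
# described in the embodiments below.
```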
In an embodiment, if a background image source for adding a 3D dynamic special effect is a single 2D picture, the special effect image generation data includes a 2D view to be processed, the special effect attribute information includes a frame number of views with a 3D special effect that need to be generated, and the special effect image generation data may have a structure as in table 1 below:
| 2D picture to be processed | Material | Special effect attributes |
TABLE 1
The second step further comprises:
step 2.1, copying the 2D view to be processed into two identical views, which serve as a first view and a second view;
for example, the B device copies the 2D pending picture into two identical views, namely a first view and a second view.
And step 2.2, synthesizing, at the corresponding positions and according to the position information corresponding to each frame view in the special effect attribute information, the special effect material of the size corresponding to that frame view with the first view and the second view of that frame view, wherein the synthesis comprises a pixel data change, and the pixel data change specifically comprises replacing the pixel data at the corresponding position in each view with the pixels of the special effect material data.
Fig. 3 shows a 2D view to be processed acquired by the electronic device in the embodiment of the present invention; the special effect material is snowflakes, and the electronic device B may add a dynamic effect of 3D floating snowflakes to the view.
When the special effect material is fused with the view to be processed, this can be implemented in a variety of ways. For example, in an embodiment in which a butterfly is selected as the special effect material, the pixels in a certain position region of the picture can be replaced with the pixel data of the butterfly material to obtain the desired synthesized view. It should be noted that one frame of the 3D special effect view usually contains two views, namely the first view with the special effect material added and the second view with the special effect material added, and the same material usually has a parallax when fused with the first view and the second view respectively. This parallax can be determined from the position information; for example, the row and column coordinates on the first view are (100, 100), i.e. both the row coordinate and the column coordinate are 100, while the row and column coordinates on the second view are (100, 102), i.e. the column coordinate is shifted by two pixels so that the material carries a horizontal parallax.
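The following is a minimal sketch of this pixel-replacement step, assuming the views are RGB NumPy arrays, the material is an opaque rectangular patch, and (row, col) coordinates as in the example above. The function and parameter names are illustrative only; a real implementation would additionally clip to the view boundary and blend transparent material pixels.

```python
import numpy as np

def composite_material(left: np.ndarray, right: np.ndarray,
                       material: np.ndarray, row: int, col: int,
                       disparity: int = 2) -> None:
    """Replace the pixel data of the target region of each view with the
    material pixels; the right view uses a horizontally shifted column so
    that the material shows a parallax between the two views."""
    h, w = material.shape[:2]
    left[row:row + h, col:col + w] = material
    right[row:row + h, col + disparity:col + disparity + w] = material

# Minimal usage: a 200x300 RGB background duplicated into two views, and a
# 10x10 white "snowflake" placed at (row 100, col 100) in the left view and
# at (row 100, col 102) in the right view (disparity of 2 pixels).
view = np.zeros((200, 300, 3), dtype=np.uint8)
left, right = view.copy(), view.copy()
snowflake = np.full((10, 10, 3), 255, dtype=np.uint8)
composite_material(left, right, snowflake, row=100, col=100, disparity=2)
```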
The synthesis position of the special effect material in each view in different frames can be random, can also be determined according to interaction with a user, or can be determined by a certain rule, such as a motion trajectory algorithm, and is not repeated.
Referring to fig. 3, the position of each snowflake in each frame of image is different: the positions of the snowflake material on the two views of each frame are offset from each other, and the positions of the snowflakes differ from frame to frame. When the multi-frame views are played continuously, a dynamic effect can therefore be observed, and in particular when the multi-frame views containing the two views with the special effect material are played on a 3D device, a 3D effect can be observed because of the horizontal parallax of the special effect material; although the background is a non-stereoscopic 2D image, a 3D special effect can still be perceived, for example as shown in fig. 1.
With the above scheme, the 3D effect can be viewed on a 3D device; on a 2D device, however, the 3D effect cannot be viewed, and only two continuous dynamic images with snowflakes, similar to a GIF animation, can be seen. Therefore, in order to enable the user to see a similar 3D effect on a 2D device, before step 2.2, i.e. before the special effect material data are fused with the first view and the second view according to the special effect attribute information to obtain the view with the 3D special effect, the electronic device may further add a border to the first view or the second view, more specifically to one of the two views of each frame, as shown in fig. 4. When a plurality of frames of 2D views with such a border are played continuously, the user can perceive a similar 3D dynamic effect, so that a 2D- and 3D-compatible effect is achieved.
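A minimal sketch of adding such a border to one of the two views follows; the plain rectangular border, its thickness and colour are assumptions made for the example, since the actual border pattern is taken from the border data carried in the special effect image generation data.

```python
import numpy as np

def add_border(view: np.ndarray, thickness: int = 4,
               color=(255, 255, 255)) -> None:
    """Draw a simple rectangular border on one of the two views of a frame,
    so that playback on a 2D device still conveys a depth impression."""
    c = np.array(color, dtype=view.dtype)
    view[:thickness, :] = c    # top edge
    view[-thickness:, :] = c   # bottom edge
    view[:, :thickness] = c    # left edge
    view[:, -thickness:] = c   # right edge

# e.g. add_border(first_view) before fusing the special effect material.
```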
The border information may also be packaged into the special effect image generation data by the second electronic device A and transmitted to the first electronic device B, as shown in the following table, which is not described in detail.
| 2D picture to be processed | Material | Special effect attributes | Border |
TABLE 2
Correspondingly, in another embodiment, the 3D special effect background image source in the special effect image generation data received by the electronic device B is a 2D video sequence, that is, the special effect image generation data include a 2D video sequence and may have the structure shown in table 3; the 2D video sequence includes multiple frames of images to be processed, and the images to be processed are 2D images.
| 2D video to be processed | Material | Special effect attributes |
TABLE 3
The second step further comprises:
step 3.1, sequentially extracting each frame of image to be processed from the 2D video sequence, and processing each frame of image to be processed into a first view and a second view that are identical;
and step 3.2, fusing the special effect material data with the first view and the second view corresponding to each frame of image to be processed according to the special effect attribute information to obtain, for each frame of image to be processed in the 2D video sequence, a corresponding view with a 3D special effect.
Still further, the step 3.2 may further include: synthesizing, at the corresponding position and according to the position information corresponding to each frame view in the special effect attribute information, the special effect material of the size corresponding to that frame view with each view, wherein the synthesis comprises pixel data modification.
In this embodiment, each frame of the image to be processed is processed in the same manner as in the foregoing embodiment, and the description is therefore not repeated. For example, fig. 5 is a schematic diagram of generating a 3D effect from a 2D video sequence, in which snowflakes are the material and the rails and the train in the 2D video form the background.
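A sketch of the per-frame loop for this 2D video case, under the same assumptions as the earlier compositing example (NumPy RGB frames, an opaque material patch, illustrative names); positions is assumed to hold one (row, col) per frame, taken from the special effect attribute information.

```python
import numpy as np

def restore_from_2d_video(frames, material, positions, disparity=2):
    """Duplicate each decoded 2D frame into identical left/right views and
    composite the material at that frame's position, shifting the right
    view horizontally so that the material carries a parallax."""
    h, w = material.shape[:2]
    out = []
    for frame, (row, col) in zip(frames, positions):
        left, right = frame.copy(), frame.copy()
        left[row:row + h, col:col + w] = material
        right[row:row + h, col + disparity:col + disparity + w] = material
        out.append((left, right))
    return out

# frames: list of HxWx3 uint8 arrays decoded from the 2D video sequence.
```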
Similar to the foregoing embodiment, in order to enable the user to view the 3D effect on a 2D device, before step 3.2 the electronic device B may further add a border to the first view or the second view corresponding to each frame of the image to be processed, before fusion. The border information is likewise carried in the special effect image generation data sent by the electronic device A, as shown in table 4, so that when multiple frames of 2D views with the border are played continuously, the user can perceive a similar 3D dynamic effect, thereby achieving a 2D- and 3D-compatible effect. For the view shown in fig. 5, the effect of adding a border can be as shown in fig. 6.
| 2D video to be processed | Material | Special effect attributes | Border |
TABLE 4
In still another embodiment, the 3D special effect background image source in the special effect image generation data received by the electronic device B is a 3D image, that is, the special effect image generation data received by the first device include a 3D image to be processed, the 3D image to be processed includes a first view and a second view with parallax, and the special effect attribute information further includes the number of frames of views with the 3D special effect to be generated, as shown in table 5; the first view and the second view may of course also be stored separately:
| 3D image to be processed (first view + second view) | Material | Special effect attributes |
TABLE 5
The second step further comprises:
synthesizing the special effect material with the first view and the second view respectively at the corresponding positions according to the position information, wherein the synthesis comprises a change of the pixel data in the corresponding position regions, and the positions of the special effect material in the first view and the second view correspond to the parallax between the first view and the second view.
In this embodiment, the way in which each view is fused with the special effect material is similar to the previous embodiment, the difference being that the position information is disparity dependent. Taking fig. 7 as an example, the upper part of fig. 7 is the 3D image received by the electronic device B, which contains two views with parallax, and the received material data are snowflakes; after processing, the 3D playable image with the stereoscopic floating-snow effect shown in the lower part of fig. 7 can be generated, which is not described in detail.
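A short sketch of this case, assuming the attribute information already supplies one position per sub-view (the offset between the two positions being the disparity assigned to the material); the names are illustrative.

```python
import numpy as np

def composite_on_3d_image(left: np.ndarray, right: np.ndarray,
                          material: np.ndarray, pos_left, pos_right) -> None:
    """The 3D image already consists of two views with parallax, so the
    material is written at a per-view position; the horizontal offset
    between pos_left and pos_right is the material's own disparity."""
    h, w = material.shape[:2]
    rl, cl = pos_left
    rr, cr = pos_right
    left[rl:rl + h, cl:cl + w] = material
    right[rr:rr + h, cr:cr + w] = material
```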
Similarly, in order to enable the user to view the 3D effect on a 2D device, before the fusion in step 202 the electronic device B may further add a border to the first view or the second view of each frame of the 3D image, before that view is fused with the special effect material. The border information is likewise carried in the special effect image generation data sent by the electronic device A, see table 6, so that when multiple frames of views with the border are played continuously, the user can also perceive a similar 3D dynamic effect, thereby achieving a 2D- and 3D-compatible effect. The effect is similar to that shown in fig. 4 in the foregoing embodiment and is not repeated.
| 3D image to be processed | Material | Special effect attributes | Border |
TABLE 6
In yet another possible embodiment, the 3D special effect background image source in the special effect image generation data received by the electronic device B is a 3D video, that is, the special effect image generation data received by the first device contain a 3D video sequence, as shown in table 7. The 3D video sequence contains a plurality of frames of 3D images to be processed, each frame of 3D image to be processed includes a first view and a second view with parallax, and the position information contained in the special effect attribute information includes, for the special effect material, a corresponding position in each sub-view of each frame of 3D image to be processed.
| 3D video to be processed | Material | Special effect attributes |
TABLE 7
The second step further comprises:
synthesizing the special effect material with the first view and the second view in each frame of 3D image to be processed respectively at the corresponding positions according to the position information, wherein the synthesis comprises a change of the pixel data in the corresponding position regions, and the positions of the special effect material in the first view and the second view correspond to the parallax between the first view and the second view.
It should be noted that in this embodiment, the first view and the second view in each frame of the 3D video may be stored separately, that is, the 3D video sequence may include two sequences, one sequence stores the first view in each frame, the other sequence stores the second view in each frame, and the two sequences may be transmitted separately or selectively.
In this embodiment, the way in which each view of each frame of 3D image in the 3D video sequence is fused with the special effect material is similar to the previous embodiment, the difference being that the position information is disparity dependent. Taking fig. 8 as an example, the material data received by the electronic device B are snowflakes, the background of the received 3D video is a running train, the train moves forward relative to the rail, and the objects have parallax between the two views of each frame; after processing, the 3D playable video with the stereoscopic floating-snow effect shown in the lower part of fig. 8 can be generated, which is not repeated.
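A sketch of the corresponding loop for the 3D video case, assuming the two sub-view sequences mentioned above are available as separate lists of NumPy frames and that positions carries, per frame, one (row, col) pair for each sub-view; the names are illustrative.

```python
import numpy as np

def restore_from_3d_video(left_seq, right_seq, material, positions):
    """Composite the material into both sub-views of every 3D frame at the
    per-view positions given in the special effect attribute information."""
    h, w = material.shape[:2]
    out = []
    for left, right, ((rl, cl), (rr, cr)) in zip(left_seq, right_seq, positions):
        l, r = left.copy(), right.copy()
        l[rl:rl + h, cl:cl + w] = material
        r[rr:rr + h, cr:cr + w] = material
        out.append((l, r))
    return out
```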
Similarly, in order to enable the user to view the 3D effect on a 2D device, before the fusion in step 202 the electronic device B may further add a border to the first view or the second view of each frame of the 3D image, before that view is fused with the special effect material. The border information is likewise carried in the special effect image generation data sent by the electronic device A, as shown in table 8, so that when multiple frames of views with the border are played continuously, the user can perceive a similar 3D dynamic effect, thereby achieving a 2D- and 3D-compatible effect. The effect is similar to that of fig. 6 and is not described in detail.
| 3D video to be processed | Material | Special effect attributes | Border |
TABLE 8
Accordingly, as shown in fig. 9, an embodiment of the present invention provides an electronic device, which may be device B in fig. 1, where the electronic device includes:
a receivingunit 901, configured to receive special effect image generation data sent by a second device, where the special effect image generation data includes at least one view to be processed, special effect material data, and special effect attribute information, and the special effect attribute information includes position information of the special effect material in a special effect image;
a processing unit 902, configured to fuse, according to the special effect attribute information, the special effect material data with the first view and the second view corresponding to the at least one view to be processed, and generate a view with a 3D special effect, where the same material corresponding to the special effect material data has a horizontal parallax after being fused with the first view and the second view.
This embodiment is an embodiment of an apparatus corresponding to the aforementioned embodiment of the method, and therefore will not be described in detail.
The electronic device provided by the embodiment of the invention can restore, from the special effect image generation data transmitted by another device, the special effect image that the other device can generate, without requiring transmission of the complete image file with the 3D special effect generated by the other device. Transmission power consumption can therefore be greatly reduced and transmission bandwidth saved, so that the user obtains a good 3D viewing effect and user requirements are met.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.