Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one", "a plurality" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that "one or more" is intended to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before the embodiments of the present disclosure are explained in further detail, the terms and terminology involved in the embodiments are explained; these terms and terminology are subject to the following interpretations.
"In response to" is used to represent a condition or state upon which an operation to be performed depends. When the condition or state upon which the operation depends is satisfied, the operation or operations may be performed in real time or with a set delay. Unless otherwise specified, there is no limitation on the order in which a plurality of operations are performed.
With the development of Internet and terminal technologies, more and more application programs can add display effects to pictures or videos. These application programs may be picture applications, video applications, or applications capable of editing pictures or videos. An application program may be preset with multiple types of special effect images; when a special effect is added to a video, the user clicks to select a preset special effect image, which is then added to the video.
For example, a crown special effect image may be preset in the application program; when the user clicks the crown option, an image of a crown may be added to the head of the user in the video. Similarly, a shuttlecock special effect image may be preset in the application program; when the user clicks the shuttlecock option, a shuttlecock-kicking effect may be added at the user's foot in the video.
However, in existing video special effect adding methods, a user can only select from preset special effect images and cannot edit the special effect images, so the flexibility of adding and displaying special effects is poor.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. It should be noted that the same reference numerals in different drawings will be used to refer to the same elements already described.
Fig. 1 is a flowchart of a video special effect adding method in an embodiment of the disclosure, where the embodiment may be applicable to a case of adding a special effect in a video, the method may be performed by a video special effect adding device, and the video special effect adding device may be implemented in a software and/or hardware manner.
As shown in fig. 1, the video special effect adding method provided in the embodiment of the present disclosure mainly includes steps S110 to S120.
S110, acquiring a video to be processed.
In the embodiment of the disclosure, the video to be processed may be video data captured in real time by a camera device of the terminal, or may be a video clip stored locally in the terminal or downloaded from a network. In the embodiments of the present disclosure, the type and source of the video are not particularly limited.
In one application scenario of the present disclosure, in response to receiving a trigger operation by a user on a video interaction client, an interface of the video interaction client is displayed, and a plurality of special effect setting controls are displayed in the interface of the client. In response to an operation on the special effect setting controls, the set special effect parameters are obtained. In response to receiving a video shooting operation, video shooting starts, the shot video is taken as the video to be processed, and special effect materials corresponding to the special effect parameters are added to the video to be processed.
In another application scenario of the present disclosure, in response to receiving an operation by a user on a video editing client, an interface of the video editing client is displayed. In response to a video adding operation, a video is added from local storage or a database to a video editor in the video editing client, and a plurality of special effect setting controls are displayed in the interface of the video editor. In response to an operation on the special effect setting controls, the set special effect parameters are obtained, and special effect materials corresponding to the special effect parameters are added to the added video.
S120, in response to a setting operation by a user on special effect parameters, superimposing and displaying special effect materials corresponding to the special effect parameters in the video to be processed.
The special effect material moves in the video to be processed along a set direction, the set direction is associated with the image depth, and the special effect parameter is used to indicate the display attribute of the special effect material in the video to be processed.
In the embodiment of the disclosure, the special effect material is displayed in the video to be processed along a direction associated with the image depth. Specifically, the special effect material may be displayed in the direction of image depth values from large to small; in other words, as the video plays, the special effect material first appears at the position with the largest depth value and disappears at the position with the smallest depth value. Alternatively, the special effect material may be displayed in the direction of image depth values from small to large; in other words, as the video plays, the special effect material first appears at the position with the smallest depth value and disappears at the position with the largest depth value.
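As a minimal sketch of this depth-ordered display, assuming a per-pixel depth map is available for each frame (for example, from a monocular depth estimation model; the disclosure does not fix the source), the region where the material is drawn can be obtained by sweeping a depth band across the map. The function name, band thickness, and normalized time parameter are illustrative, not part of the disclosure:

```python
import numpy as np

def effect_mask_for_frame(depth_map, t, far_to_near=True):
    """Boolean mask of pixels where the effect material is drawn at
    normalized playback time t in [0, 1].

    With far_to_near=True the band sweeps from the largest depth value
    to the smallest, so the material appears far away and vanishes
    nearby, matching the "large to small" direction described above.
    """
    d_min, d_max = float(depth_map.min()), float(depth_map.max())
    band = 0.1 * (d_max - d_min)              # illustrative band thickness
    if far_to_near:
        center = d_max - t * (d_max - d_min)  # sweep toward the camera
    else:
        center = d_min + t * (d_max - d_min)  # sweep away from the camera
    return np.abs(depth_map - center) <= band
```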
In one embodiment of the present disclosure, in response to a setting operation of a user for a special effect parameter, displaying, in a superimposed manner, special effect materials corresponding to the special effect parameter in the video to be processed, including: obtaining a video frame to be processed based on the video to be processed; determining a characteristic area related to a preset depth range on a video frame to be processed; setting the special effect parameters of the characteristic region; and based on the characteristic region and the special effect parameter, superposing and displaying the special effect material of the characteristic region and the video frame.
In one embodiment of the present disclosure, key frames in a video to be processed are acquired as video frames to be processed.
In one embodiment of the disclosure, frame extraction processing is performed on the video to be processed according to a preset frame extraction mode, so as to obtain at least one video frame to be processed. Further, the preset frame extraction mode may be to extract one video frame after each interval of a preset number of video frames, where the preset number may be set according to the actual situation. In the embodiment of the present disclosure, the preset number is 3, that is, in the video data, one frame is extracted after every interval of 3 frames as a video frame to be processed. For example, the 1st video frame may be extracted; after an interval of 3 frames, the 5th video frame is extracted; after another interval of 3 frames, the 9th video frame is extracted; and frame extraction proceeds on the video in this manner until it is finished.
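A minimal sketch of this fixed-interval frame extraction, using OpenCV; the function name and the default interval are illustrative:

```python
import cv2

def extract_frames(video_path, interval=3):
    """Extract one frame, then skip `interval` frames, repeatedly.

    With interval=3 this keeps the 1st, 5th, 9th, ... frames
    (1-indexed), matching the example above.
    """
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                          # end of video
        if idx % (interval + 1) == 0:      # frames 0, 4, 8, ... (0-indexed)
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```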
In one embodiment of the present disclosure, the preset frame extraction mode may also be to perform frame extraction on the video by using a deep learning network model. Specifically, a target object in the video is identified by using the deep learning network model, and when the object parameters of the target object meet set requirements, the video frame whose object parameters meet the requirements is taken as an extracted video frame to be processed. The object parameter may be a motion state, a color value, a depth value, or the like. Embodiments of the present disclosure are not particularly limited in this respect.
In the embodiment of the disclosure, a feature area related to a preset depth range is determined in the video frame to be processed, where each image frame may correspond to one preset depth range; for example, an image frame may correspond to a target depth range of 20-30.
In the embodiment of the present disclosure, a correspondence between the order of the image frames and the depth ranges may be preset, and the corresponding target depth ranges may be acquired sequentially based on the order of the image frames. For example, the depth data list is, in order: 10-20, 20-30, 30-40, 40-50. If 4 video frames to be processed are extracted, the target depth range corresponding to the first video frame to be processed is 10-20, the second is 20-30, the third is 30-40, and the fourth is 40-50.
Corresponding special effect parameters are set for each feature area, special effect materials corresponding to the special effect parameters are displayed on the feature area to obtain a special effect map, and the special effect map is superimposed on the video frame to be processed. Specifically, when the feature area is a line feature, a color value of a set value is added on the line feature, so that a light special effect is added to the feature area and a light special effect map is obtained; the light special effect map is superimposed on the video frame to be processed, thereby displaying the light special effect in the image frame. When the feature area is an image point feature, a set color value is added on the image points, so that a particle effect is added to the feature area and a particle effect map is obtained; the particle effect map is superimposed on the video frame to be processed, thereby displaying the particle effect in the image frame.
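The following sketch illustrates one way the light special effect map could be built and superimposed, under the assumption that the line features are approximated by edges detected inside the depth-band mask; the edge detector, default color, and function name are illustrative choices, not mandated by the disclosure:

```python
import cv2
import numpy as np

def overlay_light_effect(frame, region_mask, color=(0, 255, 255)):
    """Add a light effect along line features inside the feature area.

    Edges detected inside `region_mask` (a boolean HxW array for the
    preset depth range) stand in for the line features; the set color
    is painted on them to form the effect map, which is then
    superimposed on the frame.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    mask_u8 = region_mask.astype(np.uint8) * 255
    edges = cv2.bitwise_and(edges, edges, mask=mask_u8)
    effect_map = np.zeros_like(frame)
    effect_map[edges > 0] = color           # colorize the line features
    return cv2.add(frame, effect_map)       # superimpose the effect map
```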
In the embodiment of the disclosure, as shown in fig. 2a and fig. 2b, in the video frame shown in fig. 2a, the light special effect is superimposed and displayed in the area of the preset depth range corresponding to that video frame; as the video plays, as shown in fig. 2b, in the next video frame the light special effect is superimposed and displayed in the area of the preset depth range corresponding to the next video frame. As shown in fig. 2a and fig. 2b, during video playback, since the preset depth range corresponding to each video frame differs, the light special effect changes along with the area corresponding to the depth range.
Similarly, as shown in fig. 2c and fig. 2d, in the video frame shown in fig. 2c, the particle special effect is superimposed and displayed in the area of the preset depth range corresponding to that video frame; as the video plays, as shown in fig. 2d, in the next video frame the particle special effect is superimposed and displayed in the area of the preset depth range corresponding to the next video frame. As shown in fig. 2c and fig. 2d, during video playback, since the preset depth range corresponding to each video frame differs, the particle special effect changes along with the area corresponding to the depth range.
In an embodiment of the present disclosure, determining a feature area on a video frame to be processed, which is related to the preset depth range, includes: setting at least one preset depth range sequence based on the time sequence; determining a preset depth range corresponding to the video frame to be processed based on the position of the video frame to be processed in the video sequence and the preset depth range sequence; and extracting a characteristic region related to the preset depth range on the video frame to be processed.
In the embodiment of the disclosure, a preset depth range sequence is determined according to the time sequence, where the preset depth range sequence is, in order: 10-20, 20-30, 30-40, 40-50. The first video frame to be processed in the video sequence corresponds to the preset depth range 10-20, the second to 20-30, the third to 30-40, and the fourth to 40-50.
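A minimal sketch of this per-frame depth range lookup; the range values mirror the example above, `depth_map` is assumed to be a NumPy array of per-pixel depth values, and the cycling behavior is an assumption for frames beyond the sequence:

```python
# Ranges mirror the example above; values are illustrative.
DEPTH_RANGE_SEQUENCE = [(10, 20), (20, 30), (30, 40), (40, 50)]

def depth_range_for(frame_index):
    """Preset (min, max) depth range for the frame at `frame_index`
    (0-based) among the extracted to-be-processed frames, cycling
    through the preset sequence."""
    return DEPTH_RANGE_SEQUENCE[frame_index % len(DEPTH_RANGE_SEQUENCE)]

def feature_region(depth_map, frame_index):
    """Boolean mask of pixels whose depth falls in the frame's range."""
    lo, hi = depth_range_for(frame_index)
    return (depth_map >= lo) & (depth_map < hi)
```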
In one embodiment of the present disclosure, determining a feature region on a video frame to be processed that is associated with the preset depth range includes: superposing and displaying characteristic areas related to a plurality of preset depth ranges on the video frame to be processed; setting the special effect parameters of the characteristic region, including: and setting the special effect parameters based on the characteristic areas related to each preset depth range.
In the embodiment of the disclosure, two or more feature areas may be determined in one video frame to be processed according to two or more preset depth ranges. Here, a video frame to be processed containing feature areas determined by two preset depth ranges is described as an example. The special effect material displayed in the feature area corresponding to the first preset depth range may be set with a first special effect parameter, and the special effect material displayed in the feature area corresponding to the second preset depth range may be set with a second special effect parameter. The first special effect parameter and the second special effect parameter may be the same or different. Further, the first and second special effect parameters may be partly the same and partly different; for example, their color values may differ while the other special effect parameters are the same.
In an embodiment of the present disclosure, two preset depth ranges corresponding to a video frame to be processed may partially overlap with two preset depth ranges of a previous video frame, and two preset depth ranges corresponding to a video frame to be processed may also partially overlap with two preset depth ranges of a next video frame. For example: the two preset depth ranges of the previous video frame are 10-20 and 20-30 respectively; the two preset depth ranges corresponding to the video frame to be processed are 20-30 and 30-40, and the two preset depth ranges of the next video frame are 30-40 and 40-50 respectively.
In the embodiment of the disclosure, when two adjacent video frames share the same preset depth range, the special effect parameters applied to that range in the two frames are different. In two adjacent video frames, the special effect parameter corresponding to the smaller of the two preset depth ranges of each frame is the same across the frames, and the special effect parameter corresponding to the larger range is likewise the same; in other words, a special effect parameter follows the ordinal position of a depth range rather than a fixed depth interval. For example: the area with preset depth range 10-20 in the previous video frame has the same special effect parameter as the area with preset depth range 20-30 in the video frame to be processed, and the area with preset depth range 20-30 in the previous video frame has the same special effect parameter as the area with preset depth range 30-40 in the video frame to be processed.
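This hand-off can be sketched as follows: each parameter is keyed to the ordinal position of a range within the frame, so it visually follows the depth ranges as they shift from frame to frame. All names and values here are illustrative placeholders, not part of the disclosure:

```python
PARAMS = ["param_for_smaller_range", "param_for_larger_range"]

def ranges_for_frame(i):
    """Two depth ranges per frame that shift by one step each frame,
    e.g. frame 0 -> (10-20, 20-30), frame 1 -> (20-30, 30-40), ..."""
    base = 10 * (i + 1)
    return [(base, base + 10), (base + 10, base + 20)]

def params_for_frame(i):
    # Pair each range with the parameter for its ordinal position, so a
    # parameter follows the shifting ranges rather than a fixed interval.
    return list(zip(ranges_for_frame(i), PARAMS))
```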
In one embodiment of the disclosure, in a video to be processed, special effect materials are displayed in the area with image depth values of 10-20 in the first video frame, and in the area with image depth values of 20-30 in the second video frame; the display positions of the special effect materials in the successive video frames are set in sequence, thereby achieving the effect of adding dynamic material to the video.
Furthermore, in areas with the same image depth value, the special effect materials are displayed simultaneously. For example, if one video frame contains two non-adjacent areas with image depth values of 10-20, the special effect materials are displayed in both areas at the same time.
In one embodiment of the present disclosure, the display area of the special effect material in each video frame to be processed is determined by the image depth in the video frame, and the display effect or display attribute of the special effect material in each video frame is determined by special effect parameters set by a user.
In the embodiment of the present disclosure, the display attribute includes a style, a color, a glow level, and the like of the special effect material, and the setting of the special effect parameter is explained in detail in the following embodiment.
In one embodiment of the present disclosure, the special effects material includes line material and/or particle material; the line material is displayed at a line corresponding to a target object included in the video to be processed, the particle material is displayed in an area where the target object is located, and the target object is an object included in an image area corresponding to a target depth value in the video to be processed.
In the embodiment of the disclosure, the special effect material includes line material and/or particle material; only the line material or only the particle material may be selected and added to the video to be processed, or both the line material and the particle material may be selected at the same time and added to the video to be processed as the special effect material.
In the embodiment of the present disclosure, fig. 2a and fig. 2b show schematic views of the line material, and fig. 2c and fig. 2d show schematic views of the particle material. The line material is drawn according to the lines of the target object in the video frame, where an object line may be, for example, a contour line of the target object. The particle material refers to filling the feature area corresponding to the target object in the video frame in the form of particles. As shown in fig. 2c, the interior of the object is filled in the form of particles, forming an image filled with particle material.
In one embodiment of the disclosure, in response to a setting operation for the color of the special effect material, a target color value is obtained, and the special effect material is controlled to be displayed according to the target color value.
In the embodiment of the disclosure, the color values of the special effect materials (line materials and/or particle materials) may be set; the setting mode may be that the user selects from a color palette, or that the user directly inputs the corresponding RGB color values.
In the disclosed embodiment, the target color value may be a single color value, such as: a single value of red. The target color value may also be a gradient color value, such as: gradual change from yellow to green.
It should be noted that, when the target color value is a gradient color value, the special effect material in the region of one image frame where the special effect material is to be added is displayed in gradient colors. For example, in the cloud image shown in fig. 2c, the particle material is displayed in a set gradient color from left to right. The gradient direction of the color may also be set by the user.
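A minimal sketch of a left-to-right gradient over the effect region; the blending rule and function name are assumptions (the disclosure only requires that a gradient color and its direction be settable), and the mask is assumed to be non-empty:

```python
import numpy as np

def gradient_colors(mask, start_rgb, end_rgb):
    """Per-pixel colors blending from start_rgb to end_rgb left-to-right
    across the bounding box of the effect region `mask` (boolean HxW).

    Returns (rows, cols, colors) telling the renderer where to draw and
    with what color; the gradient direction could be parameterized in
    the same way.
    """
    ys, xs = np.nonzero(mask)                # pixels of the effect region
    x0, x1 = xs.min(), xs.max()
    t = (xs - x0) / max(x1 - x0, 1)          # 0 at left edge, 1 at right
    start = np.asarray(start_rgb, dtype=float)
    end = np.asarray(end_rgb, dtype=float)
    colors = (1 - t)[:, None] * start + t[:, None] * end
    return ys, xs, colors.astype(np.uint8)
```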
In one embodiment of the disclosure, in response to a setting operation for the glow level, a target glow level value is obtained, and the special effect material is controlled to be displayed according to the target glow level value.
In the implementation of the present disclosure, setting the glow degree can produce a glow effect on the special effect material, thereby improving the display effect of the material.
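The disclosure does not fix how the glow is rendered; one common realization is a bloom pass that blurs the effect layer and adds the halo back, scaled by the glow level, as in this illustrative sketch (kernel size and scaling are assumptions):

```python
import cv2

def apply_glow(effect_map, glow_level):
    """Blur the effect layer and additively blend the halo back in,
    scaled by `glow_level`; larger levels give a wider, brighter glow."""
    k = max(3, 2 * int(glow_level) + 1)      # odd Gaussian kernel size
    halo = cv2.GaussianBlur(effect_map, (k, k), 0)
    strength = min(glow_level / 10.0, 1.0)   # illustrative scaling
    return cv2.addWeighted(effect_map, 1.0, halo, strength, 0)
```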
In one embodiment of the present disclosure, in response to a setting operation for a display speed, a target speed value is acquired, and a display speed of the special effect material in the video to be processed is controlled based on the target speed value.
In the embodiment of the present disclosure, the display speed is used to represent the time required for one special effect material to be displayed from appearance to disappearance. Which video frames are processed is determined according to the display speed set by the user, so as to obtain the special effect material maps.
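As a small illustrative calculation, the set display time can be converted into the number of consecutive frames that one special effect material occupies; the frame rate and function name are assumptions:

```python
def frames_for_effect(display_time_seconds, fps=30):
    """Number of consecutive frames over which one special effect
    material is shown from appearance to disappearance; a faster
    display speed means a shorter time and thus fewer frames."""
    return max(1, round(display_time_seconds * fps))
```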
In one embodiment of the present disclosure, in response to a setting operation for display brightness, a target brightness value is acquired, and the special effect material is controlled to be displayed according to the target brightness value.
In the embodiment of the disclosure, the brightness setting operation is used for setting the display brightness of the special effect material in the video to be processed.
In one embodiment of the disclosure, in response to a setting operation for a display width, a target width value is acquired, and a display width of the special effect material in the video to be processed is controlled.
In the embodiment of the disclosure, the setting operation of the display width is mainly used to indicate the display width of the special effect material in the video. The display width refers to the width of the special effect material within the same video frame. The larger the target width value, the larger the display width of the special effect material in the video frame; the smaller the target width value, the smaller the display width.
In one embodiment of the present disclosure, in response to a setting operation for a display direction, a target display direction is acquired, and a direction of the special effect material from appearance to disappearance in the video to be processed is controlled in accordance with the target display direction.
In the embodiment of the present disclosure, the setting operation of the display direction is mainly used to indicate the direction of the special effect material from appearance to disappearance within one feature area. The display direction of the special effect material in a feature area may be from left to right, from right to left, from top to bottom, from bottom to top, from the center to the periphery, or from the periphery to the center, and can be set freely.
In one embodiment of the disclosure, when the special effect material is a particle material, a plurality of candidate particle styles are displayed; in response to a selection operation on a candidate particle style, a target particle style is acquired; and the particle material is superimposed and displayed in the video to be processed in the target particle style.
In the embodiment of the present disclosure, when the special effect material is a particle material, a particle style may be selected. The particle style may be understood as the display appearance of each particle, that is, the style in which each small square illustrated in fig. 2c is displayed. For example, the particle style may be a letter, a heart, a musical note, or a square.
In the embodiment of the disclosure, a variety of candidate particle styles may be provided for the user to select freely.
In one embodiment of the present disclosure, when the special effect material is a particle material, a target particle density is acquired in response to a setting operation for a particle density, and the display density of the particle material within a set range is controlled based on the target particle density.
In the embodiment of the present disclosure, the particle density is mainly used to represent the spacing between individual particles: the larger the particle density, the smaller the spacing between particles, and the smaller the particle density, the larger the spacing.
In one embodiment of the present disclosure, when the special effect material is a particle material, a target particle size is acquired in response to a setting operation for a particle size, and a display size of the particle material is controlled based on the target particle size.
In the disclosed embodiment, the particle size is mainly used to represent the size of individual particles, i.e., the size of each small square as shown in fig. 2c.
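A minimal sketch combining the density and size settings: particles are laid out on a regular grid inside the feature region, with the spacing shrinking as density grows. The exact layout rule and the inverse-density heuristic are assumptions, not specified by the disclosure:

```python
import numpy as np

def particle_grid(mask, density, size):
    """Place square particles on a regular grid inside the feature
    region `mask` (boolean HxW) and return (row, col, size) triples.

    Spacing shrinks as `density` (assumed > 0) grows, so particles sit
    closer together at higher densities; `size` is the particle side
    length in pixels.
    """
    spacing = max(int(size), int(round(100.0 / density)))  # heuristic
    positions = []
    h, w = mask.shape
    for y in range(0, h, spacing):
        for x in range(0, w, spacing):
            if mask[y, x]:
                positions.append((y, x, size))
    return positions
```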
In one embodiment of the present disclosure, when the special effect material is a particle material, a plurality of candidate particle display states are displayed; in response to a selection operation on a candidate particle display state, a target particle display state is acquired; and the particle material is superimposed and displayed in the video to be processed in the target particle display state.
The particle display state may be understood as the dynamic effect with which each particle is displayed; for example, the display state may be a jumping state, a blinking state, or the like. In the embodiment of the disclosure, the particle display states are preset, and the user can freely select among them.
Fig. 3 is a schematic structural diagram of a video special effect adding device in the embodiment of the present disclosure, where the embodiment may be applicable to a case of adding a special effect in a video, and the video special effect adding device may be implemented in a software and/or hardware manner.
As shown in fig. 3, the video special effect adding apparatus provided in the embodiment of the present disclosure includes: a video acquisition module 31 and a special effects display module 32.
The video acquisition module 31 is configured to acquire a video to be processed; the special effect display module 32 is configured to respond to a setting operation of a user on special effect parameters, and superimpose and display special effect materials corresponding to the special effect parameters in the video to be processed; the special effect material moves in the video to be processed along a set direction, the set direction is associated with the image depth, and the special effect parameter is used to indicate the display attribute of the special effect material in the video to be processed.
In one embodiment of the present disclosure, the special effects material includes line material and/or particle material; the line material is displayed at a line corresponding to a target object included in the video to be processed, the particle material is displayed in an area where the target object is located, and the target object is an object included in an image area corresponding to a target depth value in the video to be processed.
In one embodiment of the present disclosure, the special effects display module 32 includes: the video frame unit to be processed is used for obtaining a video frame to be processed based on the video to be processed; the characteristic region determining unit is used for determining a characteristic region related to a preset depth range on the video frame to be processed; a parameter setting unit, configured to perform a setting operation on the special effect parameter of the feature area; and the special effect superposition unit is used for superposing and displaying the special effect materials of the characteristic region and the video frame to be processed based on the characteristic region and the special effect parameters.
In one embodiment of the present disclosure, the feature region determining unit is specifically configured to set at least one preset depth range sequence based on the time sequence; determining a preset depth range corresponding to the video frame to be processed based on the position of the video frame to be processed in the video sequence and the preset depth range sequence; and extracting a characteristic region related to the preset depth range on the video frame to be processed.
In one embodiment of the present disclosure, a feature region determining unit is specifically configured to superimpose and display, on the video frame to be processed, feature regions related to a plurality of preset depth ranges; and the parameter setting unit is used for setting the special effect parameters based on the characteristic area related to each preset depth range.
In one embodiment of the present disclosure, the special effects display module 32 includes: the color control unit is used for responding to the setting operation of the color of the special effect material, obtaining a target color value and controlling the special effect material to be displayed according to the target color value; the glow control unit is used for responding to the setting operation of the glow degree, acquiring a target glow degree value and controlling the special effect material to be displayed according to the target glow degree value; the speed control unit is used for responding to the setting operation of the display speed, acquiring a target speed value and controlling the display speed of the special effect material in the video to be processed based on the target speed value; the brightness control unit is used for responding to the setting operation of the display brightness, acquiring a target brightness value and controlling the special effect material to be displayed according to the target brightness value; and the width control unit is used for responding to the setting operation of the display width, acquiring a target width value and controlling the display width of the special effect material in the video to be processed.
In one embodiment of the present disclosure, the special effects display module 32 includes: a material display direction control unit for acquiring a target display direction in response to a setting operation for the display direction; and controlling the direction from appearance to disappearance of the special effect materials in the video to be processed according to the target display direction.
In one embodiment of the present disclosure, the special effects display module 32 includes: a particle style selection unit, configured to respond to a selection operation for a to-be-selected particle style, and superimpose and display the particle material in the to-be-processed video in a particle style corresponding to the selection operation; a particle density setting unit configured to, when the special effect material is a particle material, obtain a target particle density in response to a setting operation for the particle density, and control a display density of the particle material within a set range based on the target particle density; a particle size setting unit configured to acquire a target particle size in response to a setting operation for a particle size, and control a display size of a particle material based on the target particle size; and the particle state setting unit is used for responding to the selection operation of the particle state to be selected and displaying the particle material in the video to be processed in a superposition manner according to the particle state corresponding to the selection operation.
The video special effect adding device provided by the embodiment of the disclosure can execute steps executed in the video special effect adding method provided by the embodiment of the disclosure, and has execution steps and beneficial effects, which are not described herein.
Fig. 4 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now in particular to fig. 4, a schematic diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable terminal devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403, so as to implement the video special effect adding method of the embodiments described in the present disclosure. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided; more or fewer means may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program containing program code for performing the method shown in the flowchart, thereby implementing the video special effect addition method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the client, server, etc. may communicate using any currently known or future developed network protocol, such as HTTP (hypertext transfer protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a video to be processed; in response to a setting operation by a user on special effect parameters, superimpose and display special effect materials corresponding to the special effect parameters in the video to be processed; where the special effect material moves in the video to be processed along a set direction, the set direction is associated with the image depth value, and the special effect parameter is used to indicate the display attribute of the special effect material in the video to be processed.
Alternatively, the electronic device may perform other steps described in the above embodiments when the one or more programs are executed.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The names of the units do not, in some cases, constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.