
Video special effects testing method, device, computer equipment and storage medium

Info

Publication number
CN113596436B
Authority
CN
China
Prior art keywords
video
target
special effect
video frame
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110095756.9A
Other languages
Chinese (zh)
Other versions
CN113596436A (en)
Inventor
陈裕发
龙祖苑
谢宗兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110095756.9A
Publication of CN113596436A
Application granted
Publication of CN113596436B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese


The present application relates to a method, apparatus, computer device, and storage medium for testing video special effects. The method includes: obtaining a reference video frame extracted from a standard video that conforms to the effect corresponding to a target special effect, the standard video being obtained by adding the target special effect to an initial video, and the reference video frame matching the effective range of the target special effect in the standard video; taking the effect corresponding to the target special effect as the expected effect, performing special effect adding processing on the initial video to obtain a special effect video to be tested; extracting a test video frame from the special effect video that matches the effective range; and comparing the test video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect test result of the special effect video according to the first comparison result. The method of the present application can improve the efficiency and accuracy of video special effect testing.

Description

Method, device, computer equipment and storage medium for checking video special effects
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a video special effect testing method, apparatus, computer device, and storage medium.
Background
With the development of image processing technology, users' demand for diverse multimedia content has gradually increased. Users can apply various special effects when editing images or videos to make multimedia content more colorful, so whether a special effect function works correctly in practical applications is a key factor in guaranteeing the user experience.
In the related art, whether a special effect function is normal is usually checked at the level of the code that implements it. This approach is not only inefficient, but is also constrained by the technical level of the code analyst, so the accuracy of the check is difficult to ensure.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, apparatus, computer device, and storage medium for inspecting a video effect that can improve the efficiency and accuracy of inspecting a video effect.
A method of verifying video effects, the method comprising:
acquiring a reference video frame extracted from a standard video which accords with the effect corresponding to the target special effect;
the standard video is obtained by adding the target special effect to the initial video, and the reference video frame is matched with the effective range of the target special effect in the standard video;
taking the effect corresponding to the target special effect as the expected effect, and performing special effect adding processing on the initial video to obtain a special effect video to be tested;
extracting a check video frame matched with the effective range from the special effect video;
and comparing the test video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect test result of the special effect video according to the first comparison result.
A video special effects verification device, the device comprising:
a reference video frame acquisition module, configured to acquire a reference video frame extracted from a standard video that conforms to the effect corresponding to the target special effect, the standard video being obtained by adding the target special effect to an initial video, and the reference video frame matching the effective range of the target special effect in the standard video;
a special effect adding processing module, configured to take the effect corresponding to the target special effect as the expected effect and perform special effect adding processing on the initial video to obtain a special effect video to be tested;
a test video frame extraction module, configured to extract test video frames matching the effective range from the special effect video;
and a comparison module, configured to compare the test video frames with the reference video frame to obtain a first comparison result, and obtain a special effect test result of the special effect video according to the first comparison result.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a reference video frame extracted from a standard video which accords with the effect corresponding to the target special effect;
the standard video is obtained by adding the target special effect to the initial video, and the reference video frame is matched with the effective range of the target special effect in the standard video;
taking the effect corresponding to the target special effect as the expected effect, and performing special effect adding processing on the initial video to obtain a special effect video to be tested;
extracting a check video frame matched with the effective range from the special effect video;
and comparing the test video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect test result of the special effect video according to the first comparison result.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a reference video frame extracted from a standard video which accords with the effect corresponding to the target special effect;
the standard video is obtained by adding the target special effect to the initial video, and the reference video frame is matched with the effective range of the target special effect in the standard video;
taking the effect corresponding to the target special effect as the expected effect, and performing special effect adding processing on the initial video to obtain a special effect video to be tested;
extracting a check video frame matched with the effective range from the special effect video;
and comparing the test video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect test result of the special effect video according to the first comparison result.
According to the above method, apparatus, computer device, and storage medium for testing video special effects, a reference video frame extracted from a standard video that conforms to the effect corresponding to the target special effect is obtained. Since the standard video is obtained by adding the target special effect to the initial video, and the reference video frame matches the effective range of the target special effect, the reference video frame can accurately represent the effect corresponding to the target special effect. The terminal then takes the effect corresponding to the target special effect as the expected effect, performs special effect adding processing on the initial video to obtain the special effect video to be tested, and extracts the test video frame matching the effective range from the special effect video. When the special effect function of the terminal is normal, the extracted test video frame can also accurately represent the effect corresponding to the target special effect. Therefore, the test video frame can be compared with the reference video frame to obtain a first comparison result, and the special effect test result of the special effect video is obtained according to the first comparison result, realizing automatic testing of the special effect adding function. Compared with checking from the function implementation level in the related art, this automatic check is more efficient, is not constrained by the technical level of code analysts, and is more accurate.
Drawings
FIG. 1 is an application environment diagram of a method of verifying video effects in one embodiment;
FIG. 2 is a flow chart of a method of verifying video effects in one embodiment;
FIG. 3 is a flow chart illustrating the steps of generating the initial video frame in one embodiment;
FIG. 4 is a flow chart of a test for video effects in another embodiment;
FIG. 5 is a histogram of a difference map corresponding to a check video frame in one embodiment;
FIG. 6A is a schematic diagram of a base video frame in one embodiment;
FIG. 6B is a schematic diagram of an action video frame in one embodiment;
FIG. 6C is a schematic diagram of a target effect corresponding to one embodiment;
FIG. 7A is a schematic illustration of a special effect upon face recognition in one embodiment;
FIG. 7B is a schematic diagram of a special effect triggered upon detection of a blink event in one embodiment;
FIG. 7C is a schematic diagram of a difference map in one embodiment;
FIG. 8A is a diagram of an action video frame according to another embodiment;
FIG. 8 is a block diagram of a visual effects verification device in one embodiment;
Fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Artificial Intelligence (AI) is the theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines can perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, with both hardware-level and software-level technologies. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is the science of studying how to make machines "see": using cameras and computers instead of human eyes to recognize, track, and measure targets, and further performing graphics processing so that the result is an image more suitable for human eyes to observe or for transmission to instruments for detection. As a scientific discipline, computer vision studies the theories and technologies needed to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric techniques such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all fields of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
With the research and advancement of artificial intelligence technology, AI has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, unmanned aerial vehicles, robots, smart healthcare, and smart customer service. It is believed that with the development of technology, artificial intelligence will be applied in ever more fields and deliver increasing value.
The solution provided by the embodiments of the present application relates to artificial intelligence technologies such as computer vision, and is described in detail through the following embodiments:
The method for testing video special effects provided by the present application can be applied in the application environment shown in FIG. 1. The terminal 102 communicates with the server 104 via a network; there may be multiple terminals 102, e.g., 102a and 102b. A video client may be installed on the terminal 102, and the server 104 is the background server corresponding to the video client, storing an initial video and a standard video, where the standard video is a video that conforms to the effect corresponding to the target special effect and is obtained by adding the target special effect to the initial video. The terminal can acquire the initial video and the standard video from the server, extract a video frame matching the effective range of the target special effect from the standard video as a reference video frame, take the effect corresponding to the target special effect as the expected effect, and perform special effect adding processing on the initial video to obtain a special effect video to be tested. It then extracts a video frame matching the effective range of the target special effect from the special effect video as a test video frame, compares the test video frame with the reference video frame to obtain a first comparison result, and obtains the special effect test result of the special effect video according to the first comparison result.
The terminal 102 may be, but not limited to, various personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices, and the server 104 may be implemented by a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a method for checking a video special effect is provided, and the method is applied to the terminal in fig. 1 for illustration, and includes the following steps:
Step 202, obtaining a reference video frame extracted from a standard video which accords with the effect corresponding to the target special effect, wherein the standard video is obtained by adding the target special effect to the initial video, and the reference video frame is matched with the effective range of the target special effect in the standard video.
The special effect refers to a video special effect, and the video special effect is a specific pattern or animation added in a video frame. For example, glasses are added to the video frame containing the human face, and for example, a snowing animation is added to the video frame containing the human face. In the case where the video effect addition function is normal, the video effect is typically added triggered by specific content in the video. For example, in a video frame containing a face, a video special effect may be triggered by the face. The target special effect refers to a special effect that needs to be checked for whether it can be added normally. The corresponding effect of the target effect refers to the effect on the graphic level after the target effect is normally added. It can be understood that when the video effect is a pattern, the target effect corresponding effect is an image effect presented by the pattern, and when the video effect is an animation, the target effect corresponding effect is an animation effect presented by the animation.
The standard video is a video with the effect added according with the corresponding effect of the target effect. Since the special effect adding effect of the standard video accords with the corresponding effect of the target special effect, the special effect adding function of the terminal can be checked by taking the standard video as a reference. The standard video is obtained by adding target special effects to the initial video. The adding of the target special effect to the initial video can be that a technician manually adds the target special effect to the initial video, or the video application program automatically adds the target special effect after reading the initial video, at this time, in order to ensure the accuracy of the inspection, the automatically added special effect can be confirmed manually, so that the special effect adding effect of the standard video is ensured to be in accordance with the corresponding effect of the target special effect. The initial video here refers to an original video to which no special effects are added, and typically, the initial video contains specific contents capable of triggering target special effects.
In one embodiment, the initial video may be a video gathered from the internet that contains specific content that can trigger the target special effects. For example, if the specific content that can trigger the target special effect is a face, the initial video is a video containing the face.
It can be understood that when there are multiple target special effects, multiple special effects can be added in the initial video to obtain a standard video, or each target special effect can be added in the initial video to obtain a standard video of each target special effect, and the standard video only contains one target special effect.
The reference video frame matches the effective range of the target special effect in the standard video. The effective range of the target special effect can be the effective time range of the target special effect or the serial number range of the effective video frame of the target special effect. For example, the effective range of a certain target effect may be 3.0 seconds to 3.2 seconds, and for another example, the effective range of a certain target effect may be 90 th to 105 th frames in the video. Correspondingly, the matching of the reference video frame and the effective range of the target special effect means that the time of the reference video frame is within the effective time range of the target special effect or the video frame sequence number of the reference video frame is within the sequence number range of the effective video frame of the target special effect. For example, in the above example, when the effective range of the target special effect may be 3.0 seconds to 3.2 seconds, the time of the reference video frame needs to be within the period of 3.0 seconds to 3.2 seconds, and when the effective range of the target special effect is 90 th to 105 th frames, the video frame number of the reference video frame needs to be between 90 th and 105 th frames.
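As an illustration of the two matching rules just described, the following Python sketch checks whether a frame falls inside an effective range given either as a time period or as a frame sequence number range; the function names and the use of Python are assumptions for illustration, not part of the patent:

```python
# Illustrative sketch of the two "effective range" matching rules described above.
# Values come from the examples in the text; names are hypothetical.

def matches_time_range(frame_time_s: float, start_s: float, end_s: float) -> bool:
    # The frame's timestamp must fall inside the effect's effective time period.
    return start_s <= frame_time_s <= end_s

def matches_frame_range(frame_no: int, first_frame: int, last_frame: int) -> bool:
    # The frame's sequence number must fall inside the effective frame range.
    return first_frame <= frame_no <= last_frame

# Examples from the text: effect active from 3.0 s to 3.2 s, or frames 90 to 105.
print(matches_time_range(3.1, 3.0, 3.2))   # True
print(matches_frame_range(97, 90, 105))    # True
```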
Specifically, a video client is set on the terminal, and the video client may be a web page client or an APP (Application) client. The video client has the function of special effect addition. And the terminal checks the function of adding special effects to the video client according to a preset period. When the test is performed, the terminal extracts a reference video frame from the standard video as the basis of the special effect test.
In one embodiment, before the first inspection, the terminal acquires the initial video and the standard video from the server, stores the initial video and the standard video locally, and after each inspection, the terminal can acquire the initial video and the standard video locally, and extract a video frame matched with the effective range of the target special effect from the standard video as a reference video frame. In other embodiments, the terminal may store the reference video frame locally when the reference video frame is first extracted, so that the terminal may directly obtain the reference video frame from the local in the later test, thereby improving efficiency of the special effect test.
It will be appreciated that in other embodiments, in order to save local storage space, the terminal may not save the initial video and the standard video locally, and send a request to the server to obtain the initial video and the standard video during each special effect check, and extract, from the standard video, a video frame matching the effective range of the target special effect as the reference video frame.
And 204, taking the target special effect corresponding effect as a waiting effect, and carrying out special effect adding processing on the initial video to obtain the special effect video to be checked.
Specifically, the terminal performs special effect adding processing on the initial video, and in the process of the special effect adding processing, the special effect video to be checked is obtained by taking the corresponding effect of the target special effect as a waiting effect, the special effect video at the moment is the video after the special effect adding processing, whether the target special effect is unknown is normally added in the special effect video, and the checking is performed according to the reference video extracted from the standard video.
In one embodiment, the terminal may detect the video content of the initial video with the effect corresponding to the target special effect as the expected effect, and perform special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected, so as to obtain the special effect video to be checked.
Step 206, extracting the check video frames matched with the effective range in the special effect video.
Specifically, when performing the special effect test, the terminal takes the reference video frame as the basis. Because the reference video frame is obtained from a standard video that conforms to the effect corresponding to the target special effect and matches the effective range of the target special effect, it can accurately present the effect corresponding to the target special effect. If the terminal's special effect adding function is normal, the effect of the target special effect will likewise be presented within the effective range of the target special effect in the special effect video to be tested. The terminal can therefore extract a video frame matching the effective range of the target special effect from the special effect video to be tested as a check video frame, and judge whether its special effect adding function is normal by judging whether the target special effect has been added normally in the check video frame.
In one embodiment, the terminal may first determine the effective time range of the target special effect, and extract a video frame whose time falls within the effective time range from the special effect video to be checked as a video frame matched with the effective range of the target special effect, obtaining a check video frame. For example, assuming that the effective time range of the target special effect is 3.0 s to 3.2 s, one frame may be taken as a check video frame from the 3.0 s to 3.2 s period of the special effect video to be checked. It can be appreciated that, in implementation, to ensure accuracy of the subsequent comparison, the terminal may acquire the exact time of the reference video frame and take, as the check video frame, the video frame from the special effect video to be checked at that same exact time.
In another embodiment, the terminal may first determine a sequence number range of an effective video frame of the target special effect, extract a video frame with a sequence number range of the video frame in the sequence number range from the special effect video to be checked as a video frame matched with the effective range of the target special effect, and obtain the checked video frame. For example, assuming that the effective range of the target special effect is 90 th to 105 th frames, any one of the 90 th to 105 th frames of the special effect video to be checked may be taken as the check video frame. It can be appreciated that, in the implementation, to ensure the accuracy of the subsequent comparison, the terminal may acquire the exact sequence number of the reference video frame, and acquire, from the special effect video to be inspected, a video frame identical to the exact sequence number of the reference video frame as the inspection video frame.
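As a sketch of this extraction step, the snippet below uses OpenCV to seek to an exact frame sequence number; the file names and the 0-based indexing are illustrative assumptions, not details from the patent:

```python
import cv2

def extract_frame(video_path: str, frame_no: int):
    # Seek to the given (0-based) frame sequence number and decode that frame.
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_no)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"cannot read frame {frame_no} from {video_path}")
    return frame

# Using the same sequence number for both videos keeps the comparison aligned.
reference_frame = extract_frame("standard.mp4", 97)           # hypothetical paths
check_frame = extract_frame("effect_under_test.mp4", 97)
```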
Step 208, comparing the test video frame with the reference video frame to obtain a first comparison result, and obtaining a special effect test result of the special effect video according to the first comparison result.
Specifically, since the standard video and the special effect video to be tested are obtained by adding the same special effect to the same initial video, and the special effect adding effect of the standard video conforms to the effect corresponding to the target special effect, when the terminal's special effect adding function is normal, the check video frame extracted from the effective range of the target special effect in the special effect video to be tested presents an effect similar to that of the reference video frame extracted from the effective range of the target special effect in the standard video. The terminal can therefore compare the check video frame with the reference video frame and obtain the special effect test result of the special effect video to be tested according to the comparison result, where the special effect test result is one of two types: special effect adding by the terminal is abnormal, or special effect adding by the terminal is normal.
In one embodiment, the terminal may compare the similarity between the inspection video frame and the reference video frame when comparing the inspection video frame and the reference video frame. In other embodiments, the terminal may also compare the difference between the inspection video frame and the reference video frame when comparing the inspection video frame and the reference video frame.
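A minimal sketch of such a comparison using mean absolute pixel difference is shown below; the patent does not fix a particular metric or threshold, so the 5.0 threshold is an illustrative assumption:

```python
import cv2
import numpy as np

def frame_difference_score(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    # Mean absolute per-pixel difference; 0.0 means the frames are identical.
    # Assumes both frames have the same resolution and channel count.
    return float(np.mean(cv2.absdiff(frame_a, frame_b)))

def effect_test_passes(check_frame, reference_frame, threshold: float = 5.0) -> bool:
    # Pass when the check frame is close enough to the reference frame.
    return frame_difference_score(check_frame, reference_frame) <= threshold
```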
In one embodiment, the terminal may obtain a result identifier for characterizing the special effect test result based on the first comparison result. For example, '1' may represent the test result that the terminal's special effect adding is normal, and '0' the test result that it is abnormal.
In one embodiment, when the effect test result is that the effect addition is abnormal, the terminal may generate alarm information, send the alarm information to the server, and the server may notify the relevant technician to improve the effect addition function or further perform manual test.
In one embodiment, when the effect test result is that the effect addition is normal, it is indicated that the effect video to be tested also meets the effect corresponding to the target effect, and then the terminal can store the effect video in the test process as the standard video in the next test.
According to the above method for testing video special effects, the reference video frame extracted from the standard video that conforms to the effect corresponding to the target special effect is obtained. Since the standard video is obtained by adding the target special effect to the initial video, and the reference video frame matches the effective range of the target special effect, the reference video frame can accurately show the effect corresponding to the target special effect. The terminal then takes the effect corresponding to the target special effect as the expected effect, performs special effect adding processing on the initial video to obtain the special effect video to be tested, and extracts the check video frame matching the effective range from the special effect video. When the special effect function of the terminal is normal, the extracted check video frame can also accurately show the effect corresponding to the target special effect. Therefore, the check video frame can be compared with the reference video frame to obtain a first comparison result, and the special effect test result of the special effect video is obtained according to the first comparison result, thereby realizing automatic testing of the special effect adding function.
In one embodiment, taking the effect corresponding to the target special effect as the expected effect and performing special effect adding processing on the initial video to obtain the special effect video to be tested includes: taking the effect corresponding to the target special effect as the expected effect, detecting the video content of the initial video, and performing special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected, to obtain the special effect video to be tested.
The trigger content corresponding to the target special effect refers to the video content that triggers the adding of the target special effect. It includes at least one of a target object and a target action corresponding to the target object. The target object may be a living body or an object, such as a natural person, an animal, a vehicle, or a virtual character, or a specific part such as a face or a hand. For example, if the target special effect adds glasses to the face in a video frame containing a face, the face is the target object. The target action corresponding to the target object refers to an action formed by the movement of the subject corresponding to the target object. The target action may be an action performed by an independent living body, such as a facial movement or limb movement of a human body, or it may be a movement of an object, for example, the travel of a vehicle. It will be appreciated that for different target special effects, the target objects may be the same or different.
Specifically, the terminal takes the effect corresponding to the target special effect as the expected effect, detects the video frames of the initial video frame by frame to detect its video content, and performs special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected, to obtain the special effect video to be tested. In the detection process, the terminal can input the video frames of the initial video, frame by frame, into a detection model corresponding to the trigger content of the target special effect to obtain a detection result. The detection model here refers to a machine learning model that can detect video frames. It will be appreciated that the detection model differs for different trigger content: for example, it may be an object detection model when the trigger content is a target object, and a target action detection model when the trigger content is a target action.
In a specific embodiment, taking the target object as a face as an example, a general machine learning model for face recognition can be used as the detection model to perform frame-by-frame face recognition on the video frames of the initial video, or face images with annotated face regions can be used as training samples for supervised machine learning, yielding a detection model for face recognition. During training, the network parameters of the detection model can be adjusted using stochastic gradient descent, AdaGrad (Adaptive Gradient), AdaDelta (an improvement on AdaGrad), RMSProp (an improvement on AdaGrad), Adam (Adaptive Moment Estimation), and similar algorithms.
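As an illustrative stand-in for such a detection model, the sketch below scans a video frame by frame with OpenCV's bundled Haar cascade face detector; the patent does not prescribe this detector, so treat it as an assumption:

```python
import cv2

# OpenCV's bundled Haar cascade stands in for the face-recognition detection model.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frames_containing_face(video_path: str):
    # Yield the (0-based) sequence numbers of frames in which a face is detected.
    cap = cv2.VideoCapture(video_path)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            yield frame_no
        frame_no += 1
    cap.release()
```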
In the above embodiment, the terminal takes the effect corresponding to the target special effect as the expected effect, detects the video content of the initial video, and performs special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected, to obtain the special effect video to be tested. Since the target special effect is triggered by the trigger content when the terminal's special effect adding function is normal, the effective range of the target special effect in the standard video can be determined from the occurrence time of the trigger content.
In one embodiment, the trigger content corresponding to the target special effect comprises a target object, and determining the effective range of the target special effect comprises obtaining the appearance period of the target object and determining that appearance period as the effective range of the target special effect.
Specifically, when the triggering content corresponding to the target special effect includes the target object, the target special effect is triggered when the target object appears, and after the target object disappears, the target special effect also disappears, so that the terminal can acquire the appearance period of the target object, and the appearance period of the target object is determined as the effective range of the target special effect.
In one embodiment, after the terminal acquires the initial video, the terminal may detect the video content of the initial video to determine the appearance period of the target object in the initial video, and determine the appearance period of the target object as the effective range of the target special effect. For example, if the terminal detects the video content of the initial video and determines that the appearance period of the target object in the initial video is 3.0 seconds to 3.2 seconds, 3.0 seconds to 3.2 seconds can be determined as the effective range of the target special effect.
In other embodiments, after the terminal obtains the initial video, the terminal may further detect the video content of the initial video to determine a video frame sequence number of a video frame in which the target object appears in the initial video, and determine a video frame sequence number range formed by the video frame sequence numbers of the video frames in which the target object appears as an effective range of the target special effect. For example, after the terminal detects the video content of the initial video, it determines that the video frame in which the target object appears in the initial video is 90 th to 120 th frames, and then it may determine that the effective range of the target special effect is 90 th to 120 th frames.
In one embodiment, the trigger content corresponding to the target special effect comprises a target action corresponding to the target object, and determining the effective range of the target special effect comprises obtaining the appearance period of the target action and determining a period of preset duration after the appearance period of the target action as the effective range of the target special effect.
Specifically, when the trigger content corresponding to the target special effect includes the target action corresponding to the target object, the target special effect is triggered after the target action is completed, and the target special effect lasts for a period of time of a preset duration, so that the terminal can acquire the occurrence period of the target action, and after the occurrence period of the target action is acquired, a period of preset time after the occurrence period of the target action is determined as the effective range of the target special effect. The duration of a preset time after the occurrence period of the target action is determined according to the preset duration of the target special effect. It will be appreciated that to ensure accuracy in the determination of the effective range, the shorter the time interval between the time of the effective range and the occurrence of the target action, the better.
In one embodiment, after the terminal acquires the initial video, the terminal may detect the video content of the initial video to determine an occurrence period of the target action in the initial video, and determine a preset period of time after the occurrence period of the target action as an effective range of the target special effect. For example, if the terminal detects the video content of the initial video and determines that the occurrence period of the target action in the initial video is 3.0 seconds to 3.2 seconds, 3.3 seconds to 3.4 seconds can be determined as the effective range of the target special effect.
In one embodiment, the target action comprises a target facial action of the target object. Obtaining the appearance period of the target action comprises obtaining the appearance period of the target facial action. Determining a period of preset duration after the appearance period of the target action as the effective range of the target special effect comprises: determining the end unit time corresponding to the target facial action according to the appearance period of the target facial action, obtaining the preset duration of the target special effect, and determining a period that starts at the unit time following the end unit time of the target facial action and whose length matches the preset duration as the effective range of the target special effect.
Here the target object is a human face, and its facial actions include blinking, opening the mouth, shaking the head, nodding, raising the head, and so on. The target facial action of the target object refers to the facial action that can trigger the target special effect. For example, if the target special effect is a heart animation that appears when a blinking action is detected, then for this special effect the target facial action is blinking. The preset duration of the target special effect refers to its preset effective duration. The unit time may be preset, for example 0.1 seconds. The end unit time refers to the last unit time within the appearance period of the target facial action. For example, if the appearance period of the target facial action is 3.0 seconds to 3.2 seconds and the unit time is 0.1 seconds, the end unit time of the target facial action is 3.2 seconds.
Specifically, after acquiring the initial video, the terminal may detect its video content to determine the appearance period of the target facial action, determine the last unit time in that period as the end unit time corresponding to the target facial action, and acquire the preset duration of the target special effect. It then takes the unit time following the end unit time as the start time of the effective range of the target special effect, determines the end time of the effective range as the sum of the start time and the preset duration, and determines the period from the start time to the end time as the effective range of the target special effect.
For example, assume the target facial action is a blinking action. After detecting the video content of the initial video, the terminal determines that the appearance period of the blinking action is 3.0 seconds to 3.2 seconds, i.e., the target object completes the blinking action at 3.2 seconds, so 3.2 seconds is the end time of the blinking action. Assuming the unit time is 0.1 seconds and the preset duration of the target special effect is 0.2 seconds, 3.3 seconds to 3.5 seconds can be determined as the effective range of the target special effect.
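The worked example above can be reproduced with a short sketch; the function name and the use of Python are illustrative assumptions:

```python
def effective_time_range(action_end_s: float, unit_s: float, duration_s: float):
    # The effect starts one unit time after the action ends and lasts the preset duration.
    start = action_end_s + unit_s
    return start, start + duration_s

# Blink ends at 3.2 s, unit time 0.1 s, preset effect duration 0.2 s.
start, end = effective_time_range(3.2, 0.1, 0.2)
print(round(start, 1), round(end, 1))   # 3.3 3.5
```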
In the above embodiment, the terminal obtains the appearance period of the target facial action, determines the end unit time corresponding to the target facial action from that period, acquires the preset duration of the target special effect, and determines a period that starts at the unit time following the end unit time and whose length matches the preset duration as the effective range of the target special effect.
In one embodiment, the trigger content corresponding to the target special effect comprises a target limb action corresponding to the target object, and determining the effective range of the target special effect comprises: obtaining the appearance period of the target limb action and the video frame rate of the initial video; determining the frame number of the end video frame of the target limb action according to the appearance period of the target limb action and the video frame rate of the initial video; determining the frame number of the next video frame from the frame number of the end video frame of the target limb action, to obtain the start frame number of the target special effect; obtaining the preset duration of the target special effect and determining the number of video frames of the target special effect according to the preset duration and the video frame rate of the initial video; and determining the video frame number range of the target special effect according to the start frame number and the number of video frames of the target special effect, the video frame number range being used as the effective range of the target special effect.
The limb actions of the target object refer to limb movements of a person, such as raising a hand, making a fist, or kicking a leg. The target limb action refers to a limb action that can trigger the target special effect; for example, if a target special effect shows twinkling stars when a hand-raising action is detected, then the hand-raising action is the target limb action of that special effect. The video frame rate of the initial video is known and can be acquired directly by the terminal; for example, it may be 30 frames/second. Given the video frame rate of the initial video, the terminal can calculate the video frame sequence number at any time in the initial video according to the following formula (1):

N = t × f        (1)

where N is the video frame sequence number, t is the time in the initial video, and f is the video frame rate of the initial video.
For example, assuming the video frame rate is 30 frames/second, the video frame sequence number at 2.2 seconds in the initial video is 2.2 × 30 = 66.
It can be understood that formula (1) can also be used to calculate the number of video frames in any period of the initial video; in that case, t in formula (1) is the duration of the period. For example, at a video frame rate of 30 frames/second, the number of video frames in a 10-second period of the initial video is 10 × 30 = 300.
Specifically, after obtaining the initial video, the terminal may detect its video content to determine the appearance period of the target limb action, and acquire the video frame rate of the initial video. From the appearance period of the target limb action it determines the end moment of the action; the video frame corresponding to that moment is the end video frame of the target limb action. It then calculates the video frame sequence number of that frame according to formula (1), obtaining the frame number of the end video frame of the target limb action, and adds 1 to obtain the frame number of the next video frame, which is used as the start frame number of the target special effect. The start frame number here is the frame number of the first video frame of the target special effect.
After obtaining the start frame number of the target special effect, the terminal can obtain the preset duration of the target special effect and calculate the number of video frames corresponding to the preset duration by referring to formula (1), obtaining the number of video frames of the target special effect.
Once the start frame number and the number of video frames of the target special effect are obtained, the terminal can determine the end frame number of the target special effect, determine the video frame number range of the target special effect from the start and end frame numbers, and use this range as the effective range of the target special effect. The end frame number of the target special effect refers to the frame number of the last video frame of the target special effect.
For example, suppose the appearance period of the target limb action is 2.0 seconds to 2.2 seconds and the video frame rate of the initial video is 30 frames per second. From the appearance period and the frame rate, the frame number of the end video frame of the target limb action is 2.2 × 30 = 66, so the frame number of the next video frame, 67, is the start frame number of the target special effect. If the preset duration of the target special effect is 0.2 seconds, the number of video frames of the target special effect is 0.2 × 30 = 6 frames, and the video frame number range of the target special effect, determined from the start frame number and the number of video frames, is frames 67 to 72.
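The same computation expressed as a sketch, applying formula (1); the names are illustrative assumptions:

```python
def effect_frame_range(action_end_s: float, fps: float, duration_s: float):
    # Formula (1): video frame sequence number N = t * f.
    end_frame_of_action = round(action_end_s * fps)   # last frame of the limb action
    start_frame = end_frame_of_action + 1             # effect begins on the next frame
    n_effect_frames = round(duration_s * fps)         # frames covered by the effect
    return start_frame, start_frame + n_effect_frames - 1

# Limb action ends at 2.2 s, 30 fps, effect lasts 0.2 s -> frames 67 to 72.
print(effect_frame_range(2.2, 30, 0.2))   # (67, 72)
```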
In the above embodiment, the terminal determines the frame number of the end video frame of the target limb action from the appearance period of the target limb action and the video frame rate of the initial video, takes the next video frame after the target limb action completes as the start video frame of the target special effect to obtain the start frame number, and determines the number of video frames of the target special effect from its preset duration and the video frame rate of the initial video, thereby determining the video frame number range of the target special effect, which is used as its effective range.
In one embodiment, the trigger content corresponding to the target special effect comprises a target action corresponding to the target object, and generating the initial video comprises: acquiring a base video frame containing the target object; acquiring an action video frame corresponding to the target action, in which the target object performs the target action; and generating the initial video from the base video frame and the action video frame.
The base video frame is a video frame containing the target object. In the action video frame, the target object performs the target action; that is, the action video frame is a video frame whose content is the target object performing the target action.
In this embodiment, since the trigger content corresponding to the target special effect includes a target action corresponding to the target object, the video content of the generated initial video needs to include the target action. The terminal may therefore acquire a base video frame and generate multiple identical copies of it, acquire an action video frame corresponding to the target action and generate multiple identical copies of it, combine the consecutive action video frames to form the target action, and combine the target action with the remaining base frames to generate the initial video.
In the above embodiment, the terminal obtains the action video frame corresponding to the target action by obtaining the base video frame containing the target object, and generates the initial video according to the base video frame and the action video frame, thereby realizing automatic generation of the initial video and improving the generation efficiency of the initial video.
In one embodiment, as shown in FIG. 3, generating the initial video from the base video frame and the action video frame includes:
Step 302, obtaining the video frame rate and preset duration of the initial video, and determining the total frame number of the initial video according to the video frame rate and the preset duration.
Specifically, after the terminal obtains the video frame rate and the preset duration of the initial video, the terminal may calculate the number of video frames of the initial video, that is, the total number of frames of the initial video, with reference to the above formula (1).
Step 304, obtaining the appearance period of the target action, and determining the first target frame number of the action video frame according to the appearance period of the target action and the video frame rate.
Specifically, after the terminal obtains the appearance period of the target action, it can determine the times of the start video frame and the end video frame of the target action, determine their frame numbers by referring to formula (1), and thus obtain the number of action video frames required to compose the target action, i.e., the first target frame number.
Step 306, obtaining the difference between the total frame number of the initial video and the first target frame number of the motion video frame, and obtaining the second target frame number of the base video frame.
Specifically, every video frame in the initial video other than the action video frames can be a base video frame, so the terminal can take the difference between the total frame number of the initial video and the first target frame number of the action video frames to obtain the second target frame number of the base video frames.
Step 308, generating video frames of the first target frame number from the action video frame as the video frames within the appearance period of the target action, and generating video frames of the second target frame number from the base video frame as the video frames outside the appearance period of the target action, so as to generate the initial video.
Specifically, the terminal may generate, from the action video frame, video frames equal in number to the first target frame number and identical in content to the action video frame, and use them as the video frames within the appearance period of the target action. It may likewise generate, from the base video frame, video frames equal in number to the second target frame number and identical in content to the base video frame, and use them as the video frames outside the appearance period of the target action. Combining the first target frame number of action video frames with the second target frame number of base video frames yields an initial video in which the target action occurs within its appearance period.
In the above embodiment, by generating the initial video with the action video frame as the video frame in the appearance period of the target action and the base video frame as the video frame outside the appearance period of the target action, an arbitrary target action can be generated at an arbitrary time, with a high degree of freedom.
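A sketch of this generation step using OpenCV's VideoWriter follows; the codec, output path, and frame indexing are assumptions for illustration:

```python
import cv2
import numpy as np

def build_initial_video(base_frame: np.ndarray, action_frame: np.ndarray,
                        out_path: str, fps: float, total_s: float,
                        action_start_s: float, action_end_s: float) -> None:
    # Write action frames inside the action's appearance period and base frames
    # everywhere else, producing a synthetic initial video.
    h, w = base_frame.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    first_action = int(action_start_s * fps)   # first frame of the appearance period
    last_action = int(action_end_s * fps)      # last frame of the appearance period
    for i in range(int(total_s * fps)):        # total frame number, per formula (1)
        writer.write(action_frame if first_action <= i <= last_action else base_frame)
    writer.release()
```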
In one embodiment, as shown in fig. 4, a method for verifying video effects is provided, comprising the steps of:
step 402, obtaining a reference video frame extracted from a standard video which accords with the effect corresponding to the target special effect, wherein the standard video is obtained by adding the target special effect to the initial video, and the reference video frame is matched with the effective range of the target special effect in the standard video.
Step 404, taking the effect corresponding to the target special effect as the expected effect, and performing special effect adding processing on the initial video to obtain the special effect video to be tested.
Step 406, extracting the check video frames matched with the effective range in the special effect video.
In step 408, the check video frame and the reference video frame are compared to obtain a first comparison result.
The first comparison result obtained in steps 402-408 can check whether the effect of the target special effect in the special effect video to be checked is normal, but it cannot guarantee that the adding time of the target special effect is normal: the terminal may have added the target special effect before the effective range of the target special effect, in which case the special effect adding function of the terminal is abnormal. To account for this situation, the terminal further performs the following steps 410-414 to obtain a second comparison result, and checks the adding time of the special effect in the special effect video according to the second comparison result.
Step 410, obtaining a comparison video frame extracted from the standard video, wherein the comparison video frame is matched with a comparison range of a target special effect in the standard video, and the comparison range of the target special effect is before an appearance range of the trigger content in the initial video.
The comparison range of the target special effect can be a comparison time range or a sequence number range of comparison video frames. For example, the comparison range of a certain target special effect may be 2.7 seconds to 2.9 seconds, or the 85th frame to the 90th frame of the video. Correspondingly, a comparison video frame matching the comparison range of the target special effect means that the time of the comparison video frame falls within the comparison time range, or that its video frame sequence number falls within the sequence number range. In the above example, when the comparison range is 2.7 seconds to 2.9 seconds, the time of the comparison video frame must fall within that period; when the comparison range is the 85th to 90th frames, the sequence number of the comparison video frame must lie between the 85th and 90th frames.
The triggering content comprises at least one of a target object and a target action corresponding to the target object. The trigger content may occur within a time range or video frame number range in which the target object exists in one embodiment, and within a time range or video frame number range in which the target action occurs in other embodiments.
The comparison range of the target special effect may be a period of time before the appearance range of the trigger content in the initial video. For example, if the appearance time range of the trigger content is 3.0 seconds to 3.2 seconds, the comparison range of the target special effect may be 2.8 seconds to 2.9 seconds. In another embodiment, the comparison range may be a sequence number range before the video frame sequence number range of the trigger content in the initial video, meaning that the maximum value of the comparison range is smaller than the minimum value of the trigger content's sequence number range. For example, if the video frame sequence number range of the trigger content is the 100th to 120th frames, the comparison range of the target special effect may be the 90th to 95th frames.

Since the comparison video frame matches the comparison range of the target special effect in the standard video, and the comparison range lies before the appearance range of the trigger content in the initial video, the comparison video frame is a video frame preceding the appearance of the target special effect in the standard video and therefore contains no added special effect. Using the comparison video frame as a basis, it can be checked whether the terminal added the target special effect at a time when no special effect should have been added, thereby guaranteeing the accuracy of the adding time of the target special effect.
And step 412, extracting the check video frame matched with the comparison range in the special effect video.
Specifically, when the special effect adding function is normal, the special effect video to be checked, obtained by the terminal's special effect processing of the initial video, exhibits the effect of the target special effect within its effective range but does not exhibit it within its comparison range. The terminal can therefore also extract, from the special effect video to be checked, a video frame matching the comparison range of the target special effect and judge whether the special effect adding function is normal by judging whether the target special effect is absent from that frame.

In one embodiment, the terminal may first determine the comparison time range of the target special effect, and extract from the special effect video to be checked a video frame whose time falls within that range, obtaining a check video frame matching the comparison range. For example, if the comparison time range of the target special effect is 2.6 seconds to 2.7 seconds, one frame may be taken as the check video frame from the 2.6-to-2.7-second period of the special effect video to be checked. It can be appreciated that, in implementation, to ensure the accuracy of subsequent comparison, the terminal may acquire the exact time of the comparison video frame and take from the special effect video to be checked the video frame at the identical time as the check video frame.

In another embodiment, the terminal may first determine the comparison video frame sequence number range of the target special effect, and extract from the special effect video to be checked a video frame whose sequence number falls within that range, obtaining a check video frame matching the comparison range. For example, if the comparison range of the target special effect is the 80th to 85th frames, any one frame among the 80th to 85th frames of the special effect video to be checked may be taken as the check video frame. Likewise, to ensure the accuracy of subsequent comparison, the terminal may acquire the exact sequence number of the comparison video frame and take from the special effect video to be checked the video frame with the identical sequence number as the check video frame.
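A minimal sketch of such frame extraction is given below, assuming OpenCV is available; the file name and frame index are illustrative:

```python
import cv2

def grab_frame(video_path: str, frame_index: int):
    """Read the video frame at an exact sequence number."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)  # seek to the exact frame
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"frame {frame_index} could not be read")
    return frame

# e.g. the comparison range is the 80th to 85th frames: any index in it qualifies
check_frame = grab_frame("effect_video.mp4", 82)
```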
And step 414, comparing the check video frame matched with the comparison range with the comparison video frame to obtain a second comparison result.
And step 416, obtaining a special effect checking result of the special effect video according to the first comparison result and the second comparison result.
Specifically, since the standard video and the special effect video to be checked are obtained by adding the same special effect to the same initial video, and the special effect adding effect of the standard video conforms to the effect corresponding to the target special effect, then, when the special effect adding function of the terminal is normal, the check video frame extracted from the special effect video within the comparison range should, like the comparison video frame extracted from the standard video within the same range, contain no target special effect. The terminal can therefore compare the check video frame with the comparison video frame and obtain the special effect check result of the special effect video to be checked from the resulting comparison.

In one embodiment, when comparing the check video frame with the reference video frame, the terminal may compare their similarity. In other embodiments, the terminal may instead compare the difference between the check video frame and the reference video frame.

Finally, the terminal combines the first comparison result and the second comparison result to determine the special effect check result of the special effect video, thereby verifying that the special effect added by the terminal's special effect adding function conforms to the effect corresponding to the target special effect and that the adding time conforms to the adding time corresponding to the target special effect; that is, both the adding effect and the adding time are accurate.

In the above embodiment, the terminal acquires the comparison video frame extracted from the standard video, where the comparison video frame matches the comparison range of the target special effect in the standard video and the comparison range lies before the appearance range of the trigger content; it further extracts the check video frame matching the comparison range from the special effect video, compares the two to obtain the second comparison result, and finally combines the first and second comparison results to obtain the special effect check result of the special effect video. In this way the effect accuracy and the time accuracy of the special effect addition can be checked at the same time, further improving the accuracy of the special effect check.
In one embodiment, the initial video contains trigger content corresponding to the target special effect, and that trigger content comprises a target action corresponding to the target object. Before the comparison video frame extracted from the standard video is acquired, the method further comprises: acquiring the occurrence period of the target action, and determining a preset period of time before the occurrence period of the target action as the comparison range of the target special effect.

In this embodiment, the target special effect in the standard video takes effect after the occurrence period of the target action, so video frames before that period do not exhibit the special effect. When the special effect adding function is normal, the terminal likewise exhibits no special effect during that time. The terminal may therefore acquire the occurrence period of the target action and determine a preset period before it as the comparison range of the target special effect.

It will be appreciated that, among video frames preceding the occurrence period of the target action, the greater the time interval between a video frame and the occurrence period, the less likely that frame is to have been erroneously given the target special effect; hence, when determining the comparison range, a preset period as close as possible to the occurrence period may be selected. In a specific embodiment, the start time of the occurrence period of the target action may be acquired, the unit time immediately before that start time determined as the end time of the comparison range, and a preset period ending at that end time determined as the comparison range. For example, if the occurrence period of the target action is 3.0 seconds to 3.2 seconds, the end time of the comparison range is 2.9 seconds, and 2.8 to 2.9 seconds can be determined as the comparison range.

In other embodiments, after acquiring the occurrence period of the target action, the terminal may calculate the video frame number of the start video frame of the target action with reference to formula (1) in the above embodiment, determine the sequence number immediately before it as the end video frame number of the comparison range of the target special effect, and determine a sequence number range ending at that number as the comparison range. For example, if the occurrence period of the target action is 3.0 seconds to 3.2 seconds and the video frame rate is 30 frames/second, the video frame number of the start video frame of the target action is 3 × 30 = 90, so the end video frame number of the comparison range is 89, and the 86th to 89th frames can be determined as the comparison range of the target special effect.
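A minimal sketch of this frame-number variant, with an assumed window length of four frames (illustrative, not specified by the patent text):

```python
def comparison_range(action_start_sec: float, fps: int, window: int = 4):
    """Frame-number comparison range: a short window ending one frame
    before the start video frame of the target action."""
    start_frame = int(action_start_sec * fps)  # e.g. 3.0 s * 30 fps = frame 90
    end_frame = start_frame - 1                # last frame before the action, e.g. 89
    return max(0, end_frame - window + 1), end_frame

print(comparison_range(3.0, 30))  # (86, 89)
```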
In the above embodiment, the terminal determines a preset time period before the occurrence period of the target action as the comparison range of the target special effect by acquiring the occurrence period of the target action, so that the comparison range of the target special effect can be accurately and rapidly determined.
In one embodiment, comparing the check video frame with the reference video frame to obtain the first comparison result comprises: calculating the similarity between the check video frame and the reference video frame, and taking the calculated similarity as the comparison result, thereby obtaining the first comparison result. Obtaining the special effect check result of the special effect video according to the first comparison result then comprises: determining the special effect check result according to the magnitude relation between the similarity and a preset similarity threshold.

The similarity characterizes the degree of resemblance between the check video frame and the reference video frame: the greater the similarity, the more alike the two frames are, and the more likely the check video frame exhibits the same effect as the reference video frame.
Specifically, when comparing the check video frame with the reference video frame, the terminal can calculate the similarity between them and take it as the comparison result; it then compares the obtained similarity with a preset similarity threshold and determines the special effect check result of the special effect video according to their magnitude relation. The preset similarity threshold can be set in advance according to experience.

In one embodiment, when the similarity between the check video frame and the reference video frame exceeds the preset similarity threshold, the special effect check result of the special effect video can be determined as special effect addition normal; conversely, when the similarity does not exceed the threshold, the result can be determined as special effect addition abnormal.

In another embodiment, considering that normal special effect addition covers not only the adding effect but also the adding time, the terminal can also acquire the comparison video frame extracted from the standard video, where the comparison video frame matches the comparison range of the target special effect in the standard video and the comparison range lies before the appearance range of the trigger content in the initial video. It further extracts the check video frame matching the comparison range from the special effect video, calculates the similarity between that check video frame and the comparison video frame, and determines the special effect check result of the special effect video according to both this similarity's relation to the preset similarity threshold and the relation between the threshold and the similarity of the check video frame matching the effective range to the reference video frame.

In a specific embodiment, special effect addition is determined to be normal when both the similarity between the check video frame matching the comparison range and the comparison video frame, and the similarity between the check video frame matching the effective range and the reference video frame, exceed the preset similarity threshold; special effect addition is determined to be abnormal when either of the two similarities does not exceed the threshold.
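A minimal sketch of this combined verdict, with an illustrative threshold value:

```python
def effect_check_result(sim_effective: float, sim_comparison: float,
                        threshold: float = 0.9) -> str:
    """Special effect addition is normal only if BOTH similarities
    (effective range and comparison range) clear the threshold."""
    if sim_effective > threshold and sim_comparison > threshold:
        return "special effect addition normal"
    return "special effect addition abnormal"
```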
In one embodiment, calculating the similarity between the check video frame and the reference video frame comprises: obtaining a target initial frame corresponding to the target special effect; updating the pixel values of the check video frame through the pixel differences between the check video frame and the target initial frame, and eliminating the content of the check video frame similar to the target initial frame according to the magnitude relation between the updated pixel values and a preset pixel difference threshold, to obtain a difference map corresponding to the check video frame; updating the pixel values of the reference video frame through the pixel differences between the reference video frame and the target initial frame, and eliminating the content of the reference video frame similar to the target initial frame according to the magnitude relation between the updated pixel values and the preset pixel difference threshold, to obtain a difference map corresponding to the reference video frame; and determining the similarity between the check video frame and the reference video frame based on the similarity between the two difference maps.

The target initial frame corresponding to the target special effect is a video frame whose content, apart from the target special effect, has a similarity to the effect-added video frame exceeding a preset threshold. In a specific embodiment, the target initial frame is identical to the effect-added video frame in everything except the target special effect.

Specifically, the terminal may obtain the target initial frame corresponding to the target special effect from the video frames of the initial video, calculate the pixel differences between the check video frame matching the effective range and the target initial frame, and update the pixel values of the check video frame with the calculated differences. Because each updated pixel value is derived from a pixel difference, it reflects how much that pixel differs from the target initial frame, so it can be compared with the preset pixel difference threshold: when an updated pixel value exceeds the threshold, the pixel differs substantially from the target initial frame and is retained; when it does not, the pixel is similar to that of the target initial frame and is eliminated. In this way the content of the check video frame similar to the target initial frame is removed, yielding the difference map corresponding to the check video frame.
In a specific embodiment, the content of the check video frame similar to the target initial frame may be eliminated according to the following formula (2) to obtain the difference map of the check video frame:

$$D(x,y)=\begin{cases}\bigl(R_{c}(x,y),\,G_{c}(x,y),\,B_{c}(x,y)\bigr), & \text{if } \lvert R_{c}(x,y)-R_{0}(x,y)\rvert>L \text{ or } \lvert G_{c}(x,y)-G_{0}(x,y)\rvert>L \text{ or } \lvert B_{c}(x,y)-B_{0}(x,y)\rvert>L\\(0,0,0), & \text{otherwise}\end{cases}\tag{2}$$

where $D(x,y)$ is the (R, G, B) pixel value at coordinate $(x,y)$ in the difference map; $R_{c}(x,y)$, $G_{c}(x,y)$ and $B_{c}(x,y)$ are the R (Red), G (Green) and B (Blue) pixel values at coordinate $(x,y)$ of the check video frame; $R_{0}(x,y)$, $G_{0}(x,y)$ and $B_{0}(x,y)$ are the corresponding pixel values of the target initial frame; and $L$ is the pixel difference threshold, which may be set empirically, for example $L=20$.
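A minimal sketch of formula (2), assuming frames are H×W×3 uint8 RGB numpy arrays; retaining a pixel when any single channel exceeds L is an assumption consistent with the description above:

```python
import numpy as np

def difference_map(frame: np.ndarray, initial: np.ndarray, L: int = 20) -> np.ndarray:
    """Zero out pixels similar to the target initial frame, keeping only
    the added special effect."""
    diff = np.abs(frame.astype(np.int16) - initial.astype(np.int16))
    keep = (diff > L).any(axis=2)   # retain a pixel if any channel differs by more than L
    out = np.zeros_like(frame)
    out[keep] = frame[keep]
    return out
```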
Further, the terminal can calculate the pixel differences between the reference video frame and the target initial frame and update the pixel values of the reference video frame with them. As before, each updated pixel value reflects the pixel difference between the reference video frame and the target initial frame, so it can be compared with the preset pixel difference threshold: pixels whose updated value exceeds the threshold differ substantially from the target initial frame and are retained, while pixels whose updated value does not are similar to the target initial frame and are eliminated, yielding the difference map corresponding to the reference video frame. It can be understood that only the added special effect imagery remains in the difference map obtained in this way.
It will be appreciated that in a specific embodiment, the cancellation process may be performed on the content of the reference video frame similar to the target initial frame with reference to the above formula (2), so as to obtain a difference map of the reference video frame.
Further, the terminal may determine a similarity between the inspection video frame and the reference video frame based on the similarity between the difference map corresponding to the inspection video frame and the difference map corresponding to the reference video frame.
In the above embodiment, since the content similar to the target initial frame is eliminated from the difference maps and only the imagery with the added effect is retained, the similarity calculation is performed on the difference maps; because redundant information is reduced, the accuracy and efficiency of the similarity calculation can be greatly improved.
In one embodiment, determining the similarity between the check video frame and the reference video frame based on the similarity between their difference maps comprises: performing histogram calculation on the difference map corresponding to the check video frame to obtain a first gray histogram; performing histogram calculation on the difference map corresponding to the reference video frame to obtain a second gray histogram; for each gray value of each color channel, calculating the difference in pixel count between the first gray histogram and the second gray histogram; calculating the similarity between the two histograms based on the calculated pixel count differences; and taking the calculated similarity as the similarity between the check video frame and the reference video frame.
A histogram is a statistical report that represents a data distribution by a series of vertical bars or line segments of different heights; the horizontal axis generally represents the data category and the vertical axis the distribution. The gray histogram in the embodiments of the application represents the gray-level distribution of a digital image, plotting the number of pixels at each gray value, where gray values range from 0 to 255.
Specifically, the terminal calculates the histogram of the difference map corresponding to the check video frame, counting the number of pixels of each color channel at each gray value; the first gray histogram therefore comprises three subgraphs: a histogram for R representing the gray distribution of the red channel, a histogram for G representing the gray distribution of the green channel, and a histogram for B representing the gray distribution of the blue channel. Fig. 5 shows the histogram of a difference map corresponding to a check video frame in one embodiment.

Likewise, the terminal calculates the histogram of the difference map corresponding to the reference video frame, counting the number of pixels of each color channel at each gray value, so the second gray histogram also comprises three subgraphs, one each for R, G and B.

Further, for each gray value of each color channel, the terminal obtains the pixel count in the first gray histogram and the pixel count in the second gray histogram and calculates their difference. Since these pixel count differences reflect the discrepancy between the two histograms, the terminal can calculate the similarity between the first gray histogram and the second gray histogram based on them.
In a specific embodiment, the terminal may calculate the similarity between the first gray histogram and the second gray histogram with reference to the following formula (3):

$$\mathrm{Sim}(G,S)=\frac{1}{N}\sum_{n=1}^{N}\left(1-\frac{\lvert g_{n}-s_{n}\rvert}{\max(g_{n},s_{n})}\right)\tag{3}$$

where $\mathrm{Sim}(G,S)$ denotes the similarity, $g_{n}$ denotes the $n$-th data of the first gray histogram, and $s_{n}$ denotes the $n$-th data of the second gray histogram. Since each gray histogram comprises three subgraphs of 256 data each, each histogram contains $256\times3=768$ data, where each data is the pixel count of the current gray value of the current color channel in the current histogram; the value of $N$ is therefore 768.
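A minimal sketch of formula (3), assuming OpenCV and the difference maps from above; treating two empty bins as identical (contribution 1) is an assumption:

```python
import cv2
import numpy as np

def gray_histogram(diff_map: np.ndarray) -> np.ndarray:
    """Concatenate the 256-bin histograms of the three color channels
    into 256 * 3 = 768 values."""
    hists = [cv2.calcHist([diff_map], [c], None, [256], [0, 256]).ravel()
             for c in range(3)]
    return np.concatenate(hists)

def histogram_similarity(g: np.ndarray, s: np.ndarray) -> float:
    """Average 1 - |g_n - s_n| / max(g_n, s_n) over all 768 bins."""
    m = np.maximum(g, s)
    safe = np.where(m > 0, m, 1.0)                 # avoid dividing by zero
    term = np.where(m > 0, 1.0 - np.abs(g - s) / safe, 1.0)
    return float(term.mean())
```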
Further, the terminal calculates the similarity between the first gray level histogram and the second gray level histogram, and then uses the similarity as the similarity between the check video frame and the reference video frame.
In this embodiment, the gray histograms corresponding to the check video frame and the reference video frame are obtained through histogram calculation, and the similarity between the two histograms is then computed; this makes it possible to determine quickly whether the two difference maps are similar, with high efficiency.
In a specific embodiment, a method for inspecting video effects is provided, including the steps of:
1. The terminal obtains an initial video containing the trigger content of the target special effects and the corresponding standard video, where the target special effects comprise a first target special effect, whose trigger content is the target object, and a second target special effect, whose trigger content is a target action corresponding to the target object. The initial video and the standard video are generated as follows:
1-1, acquiring a basic video frame containing a target object, and acquiring an action video frame corresponding to a target action, wherein the target object executes the target action in the action video frame.
For example, taking the target action as the limb action of "stretching five fingers", fig. 6A shows a schematic diagram of a base video frame and fig. 6B a schematic diagram of an action video frame. Both the base video frame and the action video frame contain a human body, and the human body performs the target action in the action video frame.
1-2, Acquiring the video frame rate and the preset duration of the initial video, and determining the total frame number of the initial video according to the video frame rate and the preset duration.
1-3, Obtaining the appearance period of the target action, and determining the first target frame number of the action video frame according to the appearance period of the target action and the video frame rate.
1-4, Obtaining a difference value between the total frame number of the initial video and the first target frame number of the action video frame, and obtaining the second target frame number of the basic video frame.
1-5, Generating video frames of the first target frame number from the action video frame as the video frames within the occurrence period of the target action, and generating video frames of the second target frame number from the base video frame as the video frames outside the occurrence period, so as to generate the initial video.

1-6, Performing special effect adding processing on the initial video to add the target special effects to its video frames, and manually confirming that the added target special effects conform to the effects corresponding to the target special effects, to obtain the standard video.
2. The terminal acquires the appearance time period of the target object, and determines the appearance time period of the target object as the effective range of the first target special effect in the standard video.
It can be appreciated that in this embodiment, since the initial video is generated according to the base video frame and the action video frame, both of which include the target object, the effective range of the first target special effect is the whole video.
3. The terminal acquires the occurrence period of the target action, and determines a preset period after the occurrence period of the target action as the effective range of the second target special effect in the standard video.
The occurrence period of the target action is known in advance, so the terminal can acquire it directly.
Specifically, after the occurrence period of the target action is obtained, the end unit time corresponding to the target action is determined according to that period; the preset duration of the second target special effect is obtained; and a period that starts at the unit time immediately after the end unit time and lasts for the preset duration is determined as the effective range of the second target special effect.
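A minimal sketch of this rule, assuming a unit time of 0.1 seconds (illustrative; it matches the 3.2 s to 3.3 s example in the application scenario below):

```python
def effective_range(action_end_sec: float, preset_duration: float,
                    unit: float = 0.1):
    """The effect takes effect from the unit time after the action ends
    and lasts for the preset duration."""
    start = round(action_end_sec + unit, 3)  # next unit time after the action ends
    return start, round(start + preset_duration, 3)

print(effective_range(3.2, 1.0))  # (3.3, 4.3)
```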
4. The terminal extracts a video frame matched with the effective range of the second target special effect from the standard video as a reference video frame.
It can be understood that, in this embodiment, since the effective range of the first target special effect is the whole video, the first target special effect is also in effect within the effective range of the second target special effect. A video frame matching the effective range of the second target special effect therefore exhibits the effects of both target special effects at the same time, and can serve as the reference video frame for both.

For example, as shown in fig. 6C, a video frame matching the effective range of the first target special effect but not that of the second is a video frame containing only the first target special effect: when a face is recognized, a "two rows of hats" pattern is added. A video frame matching the effective range of the second target special effect is one in which the "five finger opening" has been recognized and a pumpkin appears in the palm; it contains both the effect of the first target special effect (the two rows of hats) and that of the second (the pumpkin in the palm).
5. The terminal takes the effects corresponding to the target special effects as the expected effects, detects the video content of the initial video, and performs special effect adding processing on the initial video when the trigger content corresponding to a target special effect is detected, obtaining the special effect video to be checked.
6. The terminal extracts a video frame matched with the effective range of the second target special effect from the special effect video to serve as a first check video frame.
7. And the terminal compares the first check video frame with the reference video frame to obtain a first comparison result.
Specifically, the terminal calculates the similarity between the first check video frame and the reference video frame as a first comparison result, and the steps are as follows:
And 7-1, acquiring a target initial frame corresponding to the target special effect.
7-2, Updating the pixel value of the first check video frame by the pixel difference value of the first check video frame and the target initial frame.
And 7-3, according to the magnitude relation between the updated pixel value of the first inspection video frame and the preset pixel difference threshold value, eliminating the content similar to the target initial frame in the first inspection video frame so as to obtain a difference diagram corresponding to the first inspection video frame.
And 7-4, updating the pixel value of the reference video frame through the pixel difference value of the reference video frame and the target initial frame.
7-5, Eliminating the content similar to the target initial frame in the reference video frame according to the magnitude relation between the updated pixel value of the reference video frame and the preset pixel difference threshold value, so as to obtain a difference map corresponding to the reference video frame.
and 7-6, carrying out histogram calculation on the difference value diagram corresponding to the first check video frame to obtain a first gray level histogram.
And 7-7, carrying out histogram calculation on the difference value diagram corresponding to the reference video frame to obtain a second gray level histogram.
7-8, Calculating a pixel number difference between the first gray histogram and the second gray histogram for each gray value of each color channel, and calculating a similarity between the first gray histogram and the second gray histogram based on the calculated pixel number difference.
And 7-9, taking the calculated similarity as the similarity between the first check video frame and the reference video frame.
8. The terminal acquires the start time of the occurrence period of the target action, determines the unit time immediately before the start time as the end time, and determines a preset period ending at that end time as the comparison range of the second target special effect in the standard video.
9. And the terminal extracts the video frame matched with the contrast range of the second target special effect from the standard video to obtain a contrast video frame.
10. And the terminal extracts a video frame matched with the contrast range of the second target special effect from the special effect video to obtain a second check video frame.
11. And the terminal compares the comparison video frame with the second check video frame to obtain a second comparison result.
Specifically, the terminal may calculate the similarity between the reference video frame and the second check video frame as the second comparison result with reference to the steps 7-1 to 7-9 described above.
12. And when at least one of the first comparison result and the second comparison result does not exceed the similarity threshold, determining that the special effect addition of the special effect video is abnormal.
The application also provides an application scene, which applies the method for checking the video special effect. Specifically, the application of the method for checking the video special effects in the application scene is as follows:
In this application scenario, a video editing APP is installed on the terminal, through which several target special effects can be added: one target special effect adds "rabbit ears" when a human face is recognized, as shown in fig. 7A, and another triggers a "love" animation effect when a blink action is detected, as shown in fig. 7B. The terminal performs the following steps at a preset time each day (e.g. nine o'clock) to verify that the APP can add the target special effects normally:
1. And acquiring an initial video and a standard video through the APP. The generation of the initial video and the acquisition of the standard video may refer to the description of the above embodiments, and the present application is not repeated here.
It should be noted that, in this application scenario, the duration of the initial video is 10 seconds and the video frame rate is 30 frames per second. The base video frames used in generating the initial video are shown in fig. 6A and the action video frames in fig. 8A; the blink occurs from 3.0 to 3.2 seconds of the initial video, i.e. the blink action completes at 3.2 seconds, so the "love" animation effect is triggered from 3.3 seconds in the standard video.

2. Taking the effects corresponding to the target special effects as the expected effects, the APP performs special effect adding processing on the initial video to obtain the special effect video to be checked.
3. The frame at 3.3 seconds, i.e. the 99th frame (3.3 × 30 = 99), is extracted from each of the standard video and the special effect video to perform the "comparison after adding the effect".

The two comparisons use the same method; the "comparison after adding the effect" is described below as an example.

4. The 99th frames of the standard video and the special effect video are each differenced against the base video frame: when the difference exceeds a preset threshold the pixel is retained, and when it is below the threshold the pixel value is set to 0, yielding a "difference map" containing only the added effects, as shown in fig. 7C; as can be seen, only the rabbit ears and hearts remain in the map.

5. Histogram calculation is performed on the difference maps corresponding to the 99th frames of the standard video and the special effect video respectively, obtaining their corresponding histograms. For the specific calculation method, refer to the descriptions in the above embodiments, which are not repeated here.

6. The similarity of the two calculated histograms is computed; when the similarity exceeds a preset threshold, the special effect in the special effect video is consistent with that in the standard video, i.e. the adding effect is normal.

7. The frame at 2.9 seconds, i.e. the 87th frame (2.9 × 30 = 87), is extracted from each of the standard video and the special effect video to perform the "comparison before adding the effect". When the similarity obtained from this comparison also exceeds the preset threshold, the adding time is accurate: the APP correctly recognizes the blink action and adds the "love" animation triggered by it at the correct time, indicating that the APP's special effect adding function for the target special effects is normal.
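A minimal sketch tying this scenario together, reusing the helpers sketched above; file names, frame indices and the threshold are illustrative:

```python
def daily_effect_check(threshold: float = 0.9) -> bool:
    """Run the 'after adding' (frame 99) and 'before adding' (frame 87)
    comparisons between the standard video and the special effect video."""
    base = grab_frame("initial_video.mp4", 0)  # a base video frame (target initial frame)
    for idx in (99, 87):                       # 3.3 s and 2.9 s at 30 frames/second
        std = difference_map(grab_frame("standard_video.mp4", idx), base)
        eff = difference_map(grab_frame("effect_video.mp4", idx), base)
        sim = histogram_similarity(gray_histogram(std), gray_histogram(eff))
        if sim <= threshold:
            return False   # adding effect or adding time abnormal
    return True            # the APP adds the target special effects normally
```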
It should be understood that, although the steps in the flowcharts of figs. 2-4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in figs. 2-4 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily executed in sequence; they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, a video special effect verification apparatus 800 is provided, which may employ a software module or a hardware module, or a combination of both, as part of a computer device, and specifically includes:
The reference video frame acquisition module 802 is configured to acquire a reference video frame extracted from a standard video that conforms to the effect corresponding to the target special effect, where the standard video is obtained by adding the target special effect to an initial video and the reference video frame matches the effective range of the target special effect in the standard video;
the special effect adding processing module 804 is configured to perform special effect adding processing on the initial video with the effect corresponding to the target special effect as the expected effect, to obtain the special effect video to be checked;
the test video frame extraction module 806 is configured to extract a test video frame that matches the effective range in the special effect video;
And the comparison module 808 is configured to compare the test video frame with the reference video frame to obtain a first comparison result, and obtain a special effect test result of the special effect video according to the first comparison result.
With the above video special effect checking device, the reference video frame is extracted from a standard video that conforms to the effect corresponding to the target special effect and matches the effective range of that effect, so the reference video frame accurately exhibits the effect corresponding to the target special effect. The device then takes that effect as the expected effect, performs special effect adding processing on the initial video to obtain the special effect video to be checked, and extracts the check video frame matching the effective range from the special effect video; when the terminal's special effect function is normal, the extracted check video frame likewise accurately exhibits the effect corresponding to the target special effect. The check video frame and the reference video frame can therefore be compared to obtain the first comparison result, and the special effect check result of the special effect video obtained from it, thereby realizing automatic checking of the special effect adding function.
In one embodiment, the special effect adding processing module is further configured to detect video content of the initial video with the effect corresponding to the target special effect as a desired effect, and perform special effect adding processing on the initial video when the trigger content corresponding to the target special effect is detected, so as to obtain a special effect video to be checked.
In one embodiment, the triggering content corresponding to the target special effect comprises a target object, and the device further comprises an effective range determining module, which is used for acquiring the appearance time period of the target object and determining the appearance time period of the target object as the effective range of the target special effect.
In one embodiment, the trigger content corresponding to the target special effect includes a target action corresponding to the target object, an effective range determining module further configured to obtain an occurrence period of the target action, and determine a period of time after the occurrence period of the target action as an effective range of the target special effect.
In one embodiment, the target action includes a target five-sense organ action of the target object, an effective range determining module further configured to obtain an appearance period of the target five-sense organ action, determine an end unit time corresponding to the target five-sense organ action according to the appearance period of the target five-sense organ action, obtain a preset duration of the target special effect, and determine a preset time with a next unit time of the end unit time corresponding to the target five-sense organ action as a starting time and a duration matched with the preset duration as an effective range of the target special effect.
In one embodiment, the trigger content corresponding to the target special effect comprises a target limb action corresponding to the target object, an effective range determining module, which is further used for obtaining an appearance period of the target limb action and obtaining a video frame rate of an initial video, determining a frame number of an end video frame of the target limb action according to the appearance period of the target limb action and the video frame rate of the initial video frame, determining a frame number of a next video frame according to the frame number of the end video frame of the target limb action, obtaining a starting frame number of the target special effect, obtaining a preset duration of the target special effect, determining the number of the video frames of the target special effect according to the preset duration and the video frame rate of the initial video frame, determining a video frame number range of the target special effect according to the starting frame number of the target special effect and the video frame number of the target special effect, and taking the video frame number range of the target special effect as the effective range of the target special effect.
In one embodiment, the triggering content corresponding to the target special effect comprises a target action corresponding to the target object, and the device further comprises an initial video generation module, wherein the initial video generation module is used for acquiring a basic video frame containing the target object, acquiring an action video frame corresponding to the target action, executing the target action by the target object in the action video frame, and generating an initial video according to the basic video frame and the action video frame.
In one embodiment, the initial video generating module is further configured to: obtain the video frame rate and preset duration of the initial video and determine the total frame number of the initial video from them; obtain the occurrence period of the target action and determine the first target frame number of the action video frames according to the occurrence period and the video frame rate; obtain the difference between the total frame number of the initial video and the first target frame number to obtain the second target frame number of the base video frames; and generate video frames of the first target frame number from the action video frame as the video frames within the occurrence period of the target action, and video frames of the second target frame number from the base video frame as the video frames outside the occurrence period, so as to generate the initial video.
In one embodiment, the device further comprises a collation module configured to obtain comparison video frames extracted from the standard video, where the comparison video frames match the comparison range of the target special effect in the standard video and the comparison range lies before the appearance range of the trigger content in the initial video, and to extract the check video frames matching the comparison range from the special effect video and compare them with the comparison video frames to obtain a second comparison result; the comparison module is further configured to obtain the special effect check result of the special effect video according to the first comparison result and the second comparison result.
In one embodiment, the initial video comprises trigger content corresponding to a target special effect, the trigger content corresponding to the target special effect comprises target actions corresponding to the target objects, and the device further comprises a comparison time determining module, wherein the comparison time determining module is used for obtaining the occurrence time of the target actions, and a period of preset time before the occurrence time of the target actions is determined to be a comparison range of the target special effect.
In one embodiment, the comparison module is further used for calculating the similarity between the test video frame and the reference video frame, the calculated similarity is used as a comparison result of the test video frame and the reference video frame to obtain a first comparison result, and the obtaining of the special effect test result of the special effect video according to the first comparison result comprises the step of determining the special effect test result of the special effect video according to the magnitude relation between the similarity and a preset similarity threshold value.
In one embodiment, the comparison module is further configured to obtain a target initial frame corresponding to the target special effect, update a pixel value of the inspection video frame according to a pixel difference value of the inspection video frame and the target initial frame, perform elimination processing on content similar to the target initial frame in the inspection video frame according to a size relationship between the updated pixel value of the inspection video frame and a preset pixel difference threshold value to obtain a difference map corresponding to the inspection video frame, update a pixel value of the reference video frame according to a pixel difference value of the reference video frame and the target initial frame, perform elimination processing on content similar to the target initial frame in the reference video frame according to a size relationship between the updated pixel value of the reference video frame and the preset pixel difference threshold value to obtain a difference map corresponding to the reference video frame, and determine a similarity between the inspection video frame and the reference video frame based on a similarity between the difference map corresponding to the inspection video frame and the difference map corresponding to the reference video frame.
In one embodiment, the comparison module is further configured to perform a histogram calculation on a difference map corresponding to the inspection video frame to obtain a first gray level histogram, perform a histogram calculation on a difference map corresponding to the reference video frame to obtain a second gray level histogram, calculate a pixel number difference between the first gray level histogram and the second gray level histogram for each gray level of each color channel, calculate a similarity between the first gray level histogram and the second gray level histogram based on the calculated pixel number difference, and use the calculated similarity as a similarity between the inspection video frame and the reference video frame.
For specific limitations of the video special effect checking apparatus, reference may be made to the above limitation of the video special effect checking method, and no further description is given here. The modules in the video special effect checking device can be realized in whole or in part by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program, when executed by a processor, implements a method of verifying video effects. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 9 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments represent only several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the present application, and all of these fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
