CN110855904B - Video processing method, electronic device and storage medium - Google Patents

Video processing method, electronic device and storage medium

Info

Publication number
CN110855904B
Authority
CN
China
Prior art keywords
video
target
segments
spliced
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911172687.6A
Other languages
Chinese (zh)
Other versions
CN110855904A (en)
Inventor
吴恒刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongheng Ruichen Technology Co ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911172687.6A
Publication of CN110855904A
Priority to PCT/CN2020/131076 (WO2021104242A1)
Application granted
Publication of CN110855904B
Legal status: Active (current)
Anticipated expiration

Abstract

Translated from Chinese


The present application discloses a video processing method, an electronic device and a storage medium. The video processing method includes: identifying material segments in the video that meet predetermined conditions; determining material labels corresponding to the material segments; and splicing the material segments according to the material labels and a preset digital template to generate a target video. In the video processing method of the embodiments of the present application, by setting predetermined conditions to identify the material segments of the video, the more exciting or interesting parts of the video can be identified as material segments, so that the target video generated by splicing the material segments has a better effect.


Description

Video processing method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video processing method, an electronic device, and a storage medium.
Background
The related art can scan a plurality of videos collected by a camera and splice them into a spliced video. However, not every part of a video is exciting or interesting, so flat or uninteresting parts or segments of the video may be encoded into the spliced video, making it difficult for the spliced video to reach the desired level.
Disclosure of Invention
The application provides a video processing method, an electronic device and a storage medium.
The embodiment of the application provides a video processing method, which comprises the following steps:
identifying material segments meeting preset conditions in the video;
determining a material label corresponding to the material segment; and
splicing the material segments according to the material labels and a preset digital template to generate a target video.
The electronic device of the embodiment of the application comprises a memory and a processor, wherein the processor is connected with the memory and is used for executing the video processing method.
A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the video processing method described above.
In the video processing method, the electronic device and the storage medium of the embodiments of the application, the material segments of the video are identified by setting predetermined conditions, so that the more exciting or interesting parts of the video can be identified as material segments, and the target video generated by splicing the material segments therefore has a better effect.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application;
FIG. 2 is a block diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a scene schematic diagram of a video processing method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of a video processing method according to another embodiment of the present application;
FIG. 5 is a schematic view of a scene of a video processing method according to another embodiment of the present application;
FIG. 6 is a schematic flow chart of a video processing method according to another embodiment of the present application;
FIG. 7 is a schematic view of a scene of a video processing method according to another embodiment of the present application;
FIG. 8 is a schematic flow chart of a video processing method according to yet another embodiment of the present application;
FIG. 9 is a schematic view of a video processing method according to still another embodiment of the present application;
FIGS. 10-12 are flow diagrams of video processing methods according to further embodiments of the present application;
FIG. 13 is a schematic view of a video processing method according to still another embodiment of the present application;
FIGS. 14-18 are schematic flow diagrams of video processing methods according to further embodiments of the present application;
FIG. 19 is a schematic view of a scene of a video processing method according to another embodiment of the present application;
FIG. 20 is a schematic flow chart of a video processing method according to yet another embodiment of the present application;
FIG. 21 is a schematic view of a video processing method according to another embodiment of the present application;
FIG. 22 is a schematic flow chart of a video processing method according to yet another embodiment of the present application;
FIGS. 23-32 are schematic diagrams of video processing methods according to further embodiments of the present application;
FIG. 33 is another block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
Referring to fig. 1, a video processing method is provided in an embodiment of the present application. The video processing method comprises the following steps:
step S24: identifying material segments meeting preset conditions in the video;
step S26: determining a material label corresponding to the material segment; and
step S28: splicing the material segments according to the material labels and a preset digital template to generate the target video.
According to the video processing method, predetermined conditions are set to identify the material segments of the video, so that the more exciting or interesting parts of the video can be identified as material segments; in this way, the target video generated by splicing the material segments has a better effect.
Specifically, referring to fig. 2, the video processing method can be used for processing a video captured by the camera 110 of the electronic device 100. The electronic apparatus 100 may be any of various types of computer system devices that are mobile or portable and perform wireless communication. Further, the electronic apparatus 100 may be a mobile phone, a portable game device, a laptop computer, a Personal Digital Assistant (PDA), a tablet computer (PAD), a portable internet device, a wearable device, a vehicle-mounted terminal, a navigator, a music player, a data storage device, and the like.
Note that, for convenience of description, the video processing method of the electronic device 100 according to the embodiment of the present application is explained by taking the electronic device 100 as a mobile phone as an example. This is not intended to limit the specific form of the electronic device 100.
In step S24, the predetermined condition may be stored in the electronic device 100 in advance, or may be set by the user, and the specific source of the predetermined condition is not limited herein.
In addition, one video may be identified, or a plurality of videos may be identified. The number of material segments identified from each video may be one or more. For example 2, 4, 5, 11 or other values. The number of videos to be identified is not limited, and the number of material segments identified from each video is not limited.
In addition, a segment of material may refer to a portion of a video that is more exciting or interesting.
In step S26, one material segment may correspond to one material tag or a plurality of material tags. For example corresponding to 2, 3, 4 or other numbers of material tags. The specific correspondence relationship between the material segments and the material tags is not limited herein.
Further, the material label may include at least one of a label of a person, a label of a sport, a label of a scene, a label of an animal, and a label of a plant. Therefore, various material labels are provided, the material segments are classified through the various material labels, and the classification of the material segments is more accurate.
Further, the tag of the character class may include at least one of a man, a woman, an old person, and a child.
Still further, the sports-like tag may include at least one of playing basketball, playing football, swimming, and running.
Still further, the scene-like tag may include at least one of a study, a soccer field, a basketball court, and a runway.
Therefore, the content of the material label can be further enriched, and the accuracy of material segment classification is further ensured. The specific content of the material label is not limited herein.
In one example, the material tag of the material segment a is "man". In another example, the material tags of the material segment b are "man" and "basketball". In yet another example, the material tags of the material segment c are "man" and "basketball court".
Further, "determining a material tag corresponding to the material segment" may be associating the start and end time of the material segment in the video with the corresponding tag. In other words, after step S26, the video processing method includes: determining the starting and stopping time of a material segment corresponding to the label in the video; the tag is associated with a start-stop time.
Therefore, the material segments do not need to be edited from the video, and only the starting and stopping moments of the material segments in the video need to be marked, so that the material segments and the labels can be simply and conveniently associated, and the processing efficiency is improved.
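As an illustration only (this data structure and its field names are not part of the original disclosure), a material segment can be represented as a reference into the source video plus its tags, so nothing has to be re-encoded until the user chooses to generate the target video:

```kotlin
// Hypothetical representation: a material segment is stored as a reference into
// the source video (start/stop times) plus its tags, rather than as a cut clip.
data class MaterialSegment(
    val videoPath: String,      // source video the segment belongs to
    val startMs: Long,          // start time inside the video, in milliseconds
    val endMs: Long,            // end time inside the video, in milliseconds
    val tags: List<String>      // e.g. ["playing basketball"]
)

// Example: a segment from 00:00:06 to 00:00:08 tagged "playing basketball".
val example = MaterialSegment("/DCIM/video_001.mp4", 6_000, 8_000, listOf("playing basketball"))
```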
In addition, step S26 may include: identifying the content of the material segment to obtain an identification result; and determining a corresponding material label according to the identification result. Specifically, the content of the material segment can be identified by deep learning. In addition, the corresponding material label can be determined from the correlation of the label, the multi-level information of the video and the attention model in time according to the identification result. Therefore, the corresponding material label can be simply and conveniently determined according to the material segment.
In step S28, the digital template may refer to a series of method policies for processing the video material, which policies may be embodied in the program code. The digital template is used to help the user quickly assemble the assets and generate a target video. The digital template may include template labels, filters, animated stickers, audio rhythm points, and the like. The specific content of the digital template is not limited herein.
In this embodiment, the template tags of the digital templates form a template tag set, and the template tags of different digital templates are different, so that different target videos can be obtained by splicing the material segments according to different digital templates.
In the present embodiment, step S24 includes:
identifying material segments meeting the predetermined conditions in the video while the video is being shot.
Thus, data output by the camera 110 does not need to be encoded into video and decoded, power consumption can be reduced, and processing efficiency can be improved. It is understood that in the related art, after data output from the camera is encoded and stored as a video, the video is decoded and identified, so as to identify a highlight or determine a label of the video. Thus, the power consumption is large, and the processing efficiency is low.
In the present embodiment, when a video is captured, multiple threads are used, and data output by the camera 110 is encoded into the video for storage, and a material segment is identified according to the data output by the camera 110, a material tag corresponding to the material segment is determined, and the material segment is spliced according to the material tag and a preset digital template to generate a target video.
That is, when the video shooting is completed, identifying the material segments, determining the material labels corresponding to the material segments, and even generating the target video are completed. Thus, the determination speed of the material segments is high, the power consumption of the electronic device 100 is low, and the efficiency of editing and splicing is high. The user experience can be improved.
The camera 110 may start capturing video according to the user's instruction. Specifically, the user may input a control instruction to the electronic apparatus 100 by means of a gesture, a key, or an icon to open the camera application and start capturing a video. The electronic device 100 may include a display 20, and the icons may be displayed via the display 20.
When the operating system of the electronic device 100 is the Android system, the data output by the camera 110 may be encoded into video for storage through MediaCodec or the open-source FFmpeg library. MediaCodec is Android's own tool class for encoding and decoding video. FFmpeg is a set of open-source computer programs that can be used to record and convert digital audio and video and turn them into streams; it provides a complete solution for recording, converting, and streaming audio and video.
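For illustration, a minimal sketch of configuring a MediaCodec H.264 encoder for camera frames is shown below; the bit rate, frame rate and other values are arbitrary example settings, and a real pipeline would also feed input buffers and drain the encoded output into a muxer.

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Hedged sketch: create and configure an H.264 (AVC) encoder for YUV camera frames.
fun createEncoder(width: Int, height: Int): MediaCodec {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible)
        setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000)   // 4 Mbps, arbitrary example value
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)   // one key frame per second
    }
    return MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC).apply {
        configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    }
}
```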
In this embodiment, the format of the data output by the camera 110 may be the YUV format. Further, the format of the data output by the camera 110 may be the YUV420 format. The YUV format is a color coding format, including NV21 and NV12. Data output by the camera 110 in YUV format can exploit the human eye's lower sensitivity to chrominance by reducing the U and V color channels, thereby reducing the video image size. Thus, the quality of the data output by the camera 110 is ensured, while its size can be reduced, improving the transmission speed and the processing speed.
Of course, the format of the data output by the camera 110 may also be the RGB format. The specific format of the data output by the camera 110 is not limited herein.
Further, when the operating system of the electronic device 100 is the Android system, the data output by the camera 110 may be obtained via a callback through the ImageReader class in the Camera2 Application Programming Interface (API). In this way, the data output by the camera 110 can be acquired simply and conveniently. In addition, the data output by the camera 110 may be image frames.
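A hedged sketch of obtaining per-frame YUV data through the Camera2 ImageReader callback might look like the following; the reader's surface would still need to be added as a target of the capture session, and only the Y plane is copied here for brevity.

```kotlin
import android.graphics.ImageFormat
import android.media.ImageReader
import android.os.Handler

// Hedged sketch: an ImageReader that delivers each camera frame to a callback,
// which can then feed both the recognition path and the encoding path.
fun createFrameReader(width: Int, height: Int, handler: Handler,
                      onFrame: (ByteArray) -> Unit): ImageReader {
    val reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, /*maxImages=*/ 3)
    reader.setOnImageAvailableListener({ r ->
        val image = r.acquireLatestImage() ?: return@setOnImageAvailableListener
        // Copy only the Y (luma) plane for brevity; a real pipeline would also read U/V.
        val yPlane = image.planes[0].buffer
        val bytes = ByteArray(yPlane.remaining()).also { yPlane.get(it) }
        image.close()
        onFrame(bytes)   // hand the frame to material-segment recognition / encoding
    }, handler)
    return reader
}
```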
Of course, in other embodiments, the video processed by the video processing method is a video stored in the electronic device 100 after being captured. In other words, step S24 includes: identifying material segments meeting the predetermined conditions in the video after video shooting is completed.
In this way, locally stored videos are identified, the material segments are richer, and the effect of the target video is better. Specifically, the user may view videos stored on the electronic device 100 through an album application of the electronic device 100.
Referring to fig. 3, in one example, the user clicks the video recording icon of the electronic device 100, and the electronic device 100 starts shooting a person playing basketball to obtain a video. During shooting, the electronic device 100 identifies the material segment L1 in the video and determines that the material label corresponding to the material segment is "basketball".
The digital templates of the electronic device 100 include a motion template Z, and the template label of the motion template Z is a motion-like label. Through the foregoing method, the electronic device 100 also identifies the material segment L2 and the material segment L3. The material label of the material segment L2 is "swim", and the material label of the material segment L3 is "run".
"Basketball", "swimming" and "running" are all sports-like tags corresponding to the template label of the motion template Z, so the material segment L1, the material segment L2 and the material segment L3 can be spliced to automatically generate the target video L, i.e., the wonderful story. Note that the target video L at this time exists only in a preview form.
That is, the material segments L1-L3 are simply played back in order according to the material tags and their start-stop times; they have not yet been cut out from their respective corresponding videos, spliced into the target video, and stored.
After the video shooting is completed, the electronic device 100 may display the icons of "video" and "wonderful story", and the user clicks the icon of "wonderful story", so that the electronic device 100 plays the material segments L1-L3. If the user is satisfied, the user can click the "generate" icon; at this time, the electronic device 100 clips the material segments L1-L3 from their respective corresponding videos, splices them, and stores the result as the target video.
Referring to fig. 4, in some embodiments, step S24 includes:
step S242: taking segments of the video whose definition is greater than a preset definition threshold as material segments.
In this way, the material segments are ensured to be relatively clear segments, so that when the target video is subsequently generated from the material segments, the definition of the target video is relatively high and its quality is improved. In this embodiment, a "segment of the video with definition greater than the preset definition threshold" may refer to a segment in which every frame has a definition greater than the definition threshold.
Specifically, the sharpness may be determined according to the resolution. Of course, the sharpness may also be determined according to a preset evaluation function. For example: brenner gradient function, Tenengrad gradient function, Laplacian gradient function, and the like. The specific manner of determining sharpness is not limited herein.
The sharpness threshold may be pre-stored in the electronic device 100 or may be adjusted and set by the user. The specific source of the sharpness threshold is not limited herein.
Referring to fig. 5, during video shooting, the data output by the camera 110 are a to-be-processed image I0, a to-be-processed image I1, a to-be-processed image I2, a to-be-processed image I3, a to-be-processed image I4, a to-be-processed image I5, a to-be-processed image I6, and a to-be-processed image I7. The to-be-processed images I0-I7 are identified to determine their definition; the definition of the to-be-processed images I4-I6 is determined to be greater than the preset definition threshold, so the segment corresponding to the to-be-processed images I4-I6 is taken as the material segment L1.
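For illustration, a simple frame-sharpness score based on the Brenner gradient mentioned above could be computed from the luma plane as follows; the normalisation and the comparison against a threshold are assumptions, not taken from the original disclosure.

```kotlin
// Hedged sketch: Brenner gradient sharpness of one frame, computed on the Y (luma) plane.
// Higher values mean sharper frames; a frame counts toward a material segment when the
// score exceeds the preset sharpness threshold.
fun brennerSharpness(y: ByteArray, width: Int, height: Int): Double {
    var sum = 0.0
    for (row in 0 until height) {
        for (col in 0 until width - 2) {
            val a = y[row * width + col].toInt() and 0xFF
            val b = y[row * width + col + 2].toInt() and 0xFF
            val d = (b - a).toDouble()
            sum += d * d
        }
    }
    return sum / (width * height)   // normalised so the threshold is resolution-independent
}
```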
Referring to fig. 6, in some embodiments, step S24 includes:
step S244: determining a target frame image in a video;
step S246: taking segments of the video in which the number proportion of target frame images is greater than a preset proportion threshold as material segments.
In this way, the proportion of target frame images that a material segment must contain is constrained, which ensures the quality of the material segments. A target frame image refers to an image satisfying a set condition. In addition, segments in which every single frame satisfies the set condition are usually rare in a video, so if only such segments were taken as material segments, the number of material segments would easily be small, and such a segment might not even be found in the video. A suitable proportion threshold therefore ensures the proportion of qualifying target frame images in a material segment while avoiding the difficulty of finding material segments.
Specifically, the target frame image may be an image whose sharpness is greater than a sharpness threshold; the target frame image may be an image including a human face; the target frame image may be an image including a pet. The specific form of the setting condition is not limited herein.
The preset proportion threshold may be stored in the electronic device 100 in advance, or may be adjusted by the user as needed. The source of the proportion threshold is not limited herein. The proportion threshold is, for example, 50%, 60%, 65%, 70%, or another value; the specific value of the proportion threshold is not limited herein.
Referring to fig. 7, in an example, at the time of video shooting, the data output by the camera 110 are a to-be-processed image I0, a to-be-processed image I1, a to-be-processed image I2, a to-be-processed image I3, a to-be-processed image I4, a to-be-processed image I5, a to-be-processed image I6, and a to-be-processed image I7. The to-be-processed images I0-I7 are identified; the proportion of images with a recognizable human face among the to-be-processed images I3-I6 is determined to be 100%, which is greater than the proportion threshold of 80%, so the segment corresponding to the to-be-processed images I3-I6 is taken as the material segment L1.
In another example, the proportion threshold is 80%. The set condition may be that a human face is included, that is, the target frame image is an image including a human face. A segment of the video has 100 frames; in the middle 10 frames no face can be identified, while in the other 90 frames, including the first frame and the last frame, a face can be identified. At this time, the proportion of target frame images is 90%, which is greater than the proportion threshold. The segment may be treated as a material segment.
It is understood that, due to the motion of the subject or the shaking of the electronic apparatus 100, even if the subject is photographed the whole time, some frames may be too blurred to recognize the face. The video processing method of this embodiment not only ensures the quality of the material segments through the proportion threshold, but also takes this practical situation into account, so that the determination of the material segments is more reasonable.
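A minimal sketch of the proportion test described above, assuming each frame has already been reduced to a boolean "satisfies the set condition" flag:

```kotlin
// Hedged sketch of step S246: a run of frames becomes a material segment when the
// fraction of frames satisfying the set condition (e.g. "contains a face") exceeds
// the preset proportion threshold, even if a few frames fail.
fun isMaterialSegment(frames: List<Boolean>, ratioThreshold: Double = 0.8): Boolean {
    if (frames.isEmpty()) return false
    val hits = frames.count { it }            // frames that satisfy the condition
    return hits.toDouble() / frames.size > ratioThreshold
}

// Example from the text: 100 frames, 90 of which contain a recognisable face.
// isMaterialSegment(List(90) { true } + List(10) { false })  ->  true (90% > 80%)
```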
Referring to fig. 8, in some embodiments, a video processing method includes:
step S22: determining a label of a video segment in a video, wherein the video segment comprises a material segment;
step S26 includes:
step S262: taking the label of the video segment corresponding to the material segment as the material label.
Referring to fig. 9, in one example, the user clicks the video recording icon of the electronic device 100, and the electronic device 100 starts to capture a video. During video shooting, the data output by the camera 110 are a to-be-processed image I0, a to-be-processed image I1, a to-be-processed image I2, a to-be-processed image I3, a to-be-processed image I4, a to-be-processed image I5, a to-be-processed image I6, and a to-be-processed image I7. The to-be-processed images I0-I7 are identified; it is determined that the photographed person is not playing basketball in the video corresponding to the to-be-processed image I0, that the photographed person is playing basketball in the videos corresponding to the to-be-processed images I1-I6, and that the photographed person is not playing basketball in the video corresponding to the to-be-processed image I7. Therefore, the label of the video segment L01 corresponding to the to-be-processed images I1-I6 is determined as: playing basketball.
Since the time of the to-be-processed image I1 in the video is 00:00:02 and the time of the to-be-processed image I6 in the video is 00:00:08, the start-stop times of the video segment L01 corresponding to the label "playing basketball" can be determined to be 00:00:02 and 00:00:08, respectively. Then, 00:00:02 and 00:00:08 are associated with the "playing basketball" tag.
Since the video segment L01 includes the material segment L1, the material label of the material segment L1 is: playing basketball.
Referring to fig. 10, in some embodiments, step S22 includes:
step S222: determining a video segment in a video;
step S224: acquiring a preset label set, wherein the label set comprises a plurality of labels to be selected;
step S226: taking the to-be-selected tag in the tag set that matches the video segment as the tag of the video segment.
In this way, the label of a video segment in the video is determined using the preset tag set, so that the determination of the video segment's label has a clear basis, which avoids disorder in the labels of the video segments and hence disorder in the material labels. When the material segments are spliced according to the material labels and the digital template to generate the target video, the corresponding material segments can then be found quickly and accurately through the labels, improving the quality of the target video.
Specifically, in step S222, one video segment may be determined in one video, and a plurality of video segments may also be determined in one video. In the case where a plurality of video segments are determined in one video, two adjacent video segments may be continuous or discontinuous. In other words, for two adjacent video segments, the end point of the previous video segment may be the start point of the next video segment, or there may be a portion of video between the end of the previous video segment and the start of the next video segment.
In step S224, the tags in the tag set may be preset by the manufacturer, or may be added, modified, or deleted by the user. The specific form of the tag is as described above, and is not described herein again to avoid redundancy.
In step S226, for each video segment, a matching value of each tag in the tag set with the video segment can be determined, and the tag with the largest matching value is taken as the tag of the video segment. In this way, the tagging of video segments can be made as accurate as possible.
Of course, for each video segment, each tag in the tag set may also be sequentially matched with the video segment, and when a tag having a matching value with the video segment greater than a preset matching threshold is matched, the tag is used as the tag of the video segment. Therefore, each label in the label set does not need to be matched with the video segment, the accuracy of the label of the video segment can be ensured through the matching threshold, and the matching time can be saved.
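The two matching strategies can be sketched as follows; `VideoSegment` and the `match` scoring function are placeholders for whatever recognition model is actually used, and are not part of the original disclosure.

```kotlin
// Placeholder type standing in for a decoded video segment; the scoring model
// itself (deep-learning based in the text) is passed in as a function.
class VideoSegment(val startMs: Long, val endMs: Long)

// Strategy 1: score every candidate tag and keep the best match.
fun pickBestTag(segment: VideoSegment, tagSet: List<String>,
                match: (VideoSegment, String) -> Double): String? =
    tagSet.maxByOrNull { match(segment, it) }

// Strategy 2: walk the tag set in order and stop at the first tag whose match
// value exceeds the threshold, trading a little accuracy for matching time.
fun pickFirstGoodTag(segment: VideoSegment, tagSet: List<String>,
                     match: (VideoSegment, String) -> Double,
                     threshold: Double = 0.7): String? =
    tagSet.firstOrNull { match(segment, it) > threshold }
```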
Referring to fig. 11, in some embodiments, step S24 includes:
step S248: when the label of the video segment is a motion label, determining the target action of the video segment according to the label of the video segment;
step S249: taking the segment of the video segment that includes the target action as a material segment.
In this way, when the tag corresponding to the video is a motion-like tag, the segment including the target action is used as the material segment, so that the material segment has stronger impact and appeal. Therefore, when the target video is subsequently generated from the material segments, the target video can be more exciting and of higher quality.
Specifically, the labels of the sports class are, for example: playing basketball, playing football, swimming, shooting, etc. The specific form of the sports-type tag is not limited herein. The electronic device 100 may store a correspondence between the motion class tags and the target actions in advance. Thus, in step S248, the target action may be determined according to the preset correspondence and the tag.
In step S249, the phrase "segment including the target action" means that the target action can be recognized by processing each frame of the segment.
In one example, the tag is playing basketball and the target action is shooting; in another example, the tag is playing basketball and the target action is a dunk; in yet another example, the tag is playing football and the target action is shooting at goal; in yet another example, the tag is swimming and the target action is a body roll.
In the example of fig. 9, for the video segment L01 labeled "playing basketball," it can be determined that the target action is a shot; therefore, the segment including the shot, i.e., the segment corresponding to the to-be-processed images I4-I6, is taken as the material segment L1.
Since the time of the to-be-processed image I4 in the video is 00:00:06 and the time of the to-be-processed image I6 in the video is 00:00:08, the start and end times of the material segment L1 can be determined to be 00:00:06 and 00:00:08, respectively. The times 00:00:06 and 00:00:08 may then be associated with the "playing basketball" label.
Referring to fig. 12, in some embodiments, the digital template includes a template tag, and step S28 includes:
step S282: taking a material label corresponding to the template label as a target label;
step S284: determining fragments to be spliced from the material fragments corresponding to the target label;
step S286: generating the target video according to the segments to be spliced.
Therefore, the method and the device realize splicing of the material segments according to the material labels and the digital template to generate the target video, so that the content of the target video is more wonderful and unified, and the quality of the target video is improved. It can be understood that, since the segment to be spliced is determined from the material segment corresponding to the target tag, and the target tag corresponds to the template tag, the segment to be spliced corresponds to the template tag, so that the style and content of the target video tend to be uniform based on the template tag. In addition, the material segments meet the preset conditions, and the identified material segments are more wonderful or more interesting parts in the video, so that the target video generated by splicing the material segments has better effect.
In step S282, the "material tag corresponding to the template tag" may refer to the same material tag as the template tag. For example, the template tag is "basketball," the material tags include "basketball," "pet," and "travel," wherein "basketball" is the target tag.
The "material label corresponding to the template label" may also refer to a material label lower than the template label. For example, the template label is "sports", and the material label includes "basketball," "pet," "travel," "swimming," "running," wherein "basketball," "swimming," and "running" are the target labels.
The "material label corresponding to the template label" may also refer to a material label related to the template label. For example, the template label is "basketball," and the material labels include "basketball," "pet," "travel," "swimming," "running," wherein "basketball," "swimming," and "running" are target labels.
The specific correspondence between the template tag and the material tag is not limited herein.
In step S284, "determining the segments to be spliced from the material segments corresponding to the target tag" may refer to selecting one or more segments from the material segments corresponding to the target tag as the segments to be spliced.
Referring to fig. 13, in one example, the template tag is "motion", and the determined target tags include: "play basketball", "swim" and "run". The material segment corresponding to "play basketball" is the material segment L1; the material segment corresponding to "swim" is the material segment L2; and the material segments corresponding to "run" are the material segment L3 and the material segment L4.
The time difference between the shooting time of the material segment L4 and the generation time of the target video is 100 days, while the time differences between the shooting times of the material segments L1-L3 and the generation time of the target video are less than 90 days. The material segments L1-L3 are therefore determined as the segments to be spliced from among the material segments L1-L4, and a preview of the target video L, i.e., the wonderful story, is then generated from the segments to be spliced L1-L3.
After the video shooting is completed, the electronic device 100 may display the icons of "video" and "wonderful story", and the user clicks the icon of "wonderful story", so that the electronic device 100 plays the material segments L1-L3. If the user is satisfied, the user can click the "generate" icon; at this time, the electronic device 100 clips the material segments L1-L3 from their respective corresponding videos, splices them, and stores the result as the target video.
Referring to fig. 14, in some embodiments, the number of material tags is multiple, and step S282 includes:
step S2822: calculating the matching degree of the template label and each material label;
step S2824: determining the target label according to the matching degree.
Therefore, the target label is determined through the matching degree, and the matching degree of the template label and the material label can be ensured, so that the uniformity of the segment to be spliced is ensured, and the quality of the target video is improved. Specifically, in step S2822, the matching degree of the template tag and each material tag may be calculated through a preset matching degree calculation model. Therefore, each matching degree is calculated by using the same matching degree calculation model, so that the determination of the matching degree has a uniform mode and scale, and the accuracy of the matching degree is favorably ensured.
Additionally, the degree of match may be in the form of a percentage, score, or other form. The specific form of the matching degree is not limited herein.
In step S2824, the material label with the highest matching degree may be used as the target label. Therefore, at least one target label can be ensured to be determined necessarily according to the template label, and therefore the target video is ensured to be generated necessarily. The problem that the target video cannot be generated due to the fact that the material label identical to the template label cannot be found is avoided.
In step S2824, a material tag having a matching degree greater than a preset matching degree threshold may be used as the target tag. Therefore, the matching accuracy is guaranteed, meanwhile, the richness of the target label can be improved, the target video has the uniformity, the content is rich, and the quality of the target video is improved.
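A hedged sketch of step S282 combining the two selection rules above (highest matching degree, or all tags above a matching-degree threshold); the similarity model itself is passed in as a function and is not specified by the original text.

```kotlin
// Hedged sketch: score every material tag against the template tag with one shared
// similarity model, then keep either the single best tag (so a target video can
// always be generated) or every tag above the matching-degree threshold.
fun selectTargetTags(templateTag: String, materialTags: List<String>,
                     similarity: (String, String) -> Double,
                     matchThreshold: Double? = null): List<String> {
    val scored = materialTags.associateWith { similarity(templateTag, it) }
    return if (matchThreshold == null) {
        listOfNotNull(scored.entries.maxByOrNull { it.value }?.key)   // highest matching degree only
    } else {
        scored.filter { it.value > matchThreshold }.keys.toList()     // all sufficiently close tags
    }
}
```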
Referring to fig. 15, in some embodiments, step S284 includes:
step S2841: taking all the material segments corresponding to the target label as the segments to be spliced.
Therefore, all material segments corresponding to the template labels can be presented in the target video, so that a user can comprehensively observe the target video, the comprehensiveness of the target video is improved, and the improvement of user experience is facilitated.
For example, the template label is "sports", and the material labels include "basketball," "pet," "travel," "swimming," and "running," wherein "basketball," "swimming," and "running" are the target labels. All the material segments corresponding to "basketball," "swimming," and "running" are taken as segments to be spliced.
Referring to fig. 16, in some embodiments, step S284 includes:
step S2842: selecting the segments to be spliced from the material segments corresponding to the target label according to preset conditions.
Therefore, further screening can be performed on the material segments corresponding to the target tags, the segments to be spliced are determined from the two dimensions of the tags and the preset conditions, the uniformity of the segments to be spliced can be further improved, and the uniformity of the target video is further improved. In addition, under the condition that the number of material segments corresponding to the target label is large, the number of segments to be spliced can be reduced through further screening, the quality of the target video can be improved, and the aesthetic fatigue of a user caused by overlong duration of the target video can be avoided.
Referring to fig. 17, in some embodiments, the predetermined condition includes a predetermined time threshold, and step S2842 includes:
step S2843: determining the shooting time of the material segment corresponding to the target label;
step S2844: determining a time difference between the shooting time and the generation time of the target video;
step S2845: taking the material segments whose time difference is smaller than the time threshold as the segments to be spliced.
In this way, material segments whose time difference is smaller than the time threshold are used as the segments to be spliced, so that the segments to be spliced in the target video are newer and old material segments are avoided; the segments in the target video then have a small time span and high style similarity, which helps improve the quality of the target video.
In addition, even if a highlight segment shot long ago is of high quality, splicing it into the target video together with relatively recently shot segments may reduce the quality of the target video as a whole.
In step S2843, the shooting time of the video corresponding to the material section may be taken as the shooting time of the material section. Thus, the determination of the shooting time of the material section can be simply and conveniently realized.
In addition, the time at which the material segment was last modified may also be taken as the shooting time of the material segment. In this way, the temporal update caused by the user's modification of a material segment can be taken into account. It will be appreciated that after a user modifies a material segment, the modified material segment is substantially an updated one. Moreover, a user usually only modifies the material segments that matter more to them, so the modification time indirectly reflects the user's intention, which helps improve the user experience.
For further explanation of this part, reference is made to the explanation of fig. 13 above. To avoid redundancy, it is not repeated here.
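As an illustration, the time-threshold filter of steps S2843-S2845 might be written as follows, assuming java.time is available and using the 90-day window from the example of fig. 13 as the default; the types and field names are hypothetical.

```kotlin
import java.time.Duration
import java.time.Instant

// Hedged sketch: keep only material segments shot (or last modified) within
// maxAgeDays of the target video's generation time.
data class CandidateSegment(val path: String, val shotAt: Instant)

fun selectRecentSegments(candidates: List<CandidateSegment>,
                         generatedAt: Instant,
                         maxAgeDays: Long = 90): List<CandidateSegment> =
    candidates.filter { Duration.between(it.shotAt, generatedAt).toDays() < maxAgeDays }
```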
Referring to fig. 18, in some embodiments, the predetermined condition includes a predetermined character feature, and step S2842 includes:
step S2846: determining character characteristics in the material segments corresponding to the target tags;
step S2847: taking the material segments whose character features match the preset character features as the segments to be spliced.
Therefore, the material segments matched with the character characteristics are used as the segments to be spliced, so that the content in the target video is more uniform, the key content of the target video is more prominent, and the impact and the interestingness of the target video are improved.
Specifically, the preset character characteristics include a preset age range, a preset face proportion threshold, a preset gender, a preset expression, a preset skin color and the like.
Correspondingly, in step S2846, the character characteristics include, but are not limited to, age, face proportion, gender, expression, skin color, and the like. The specific contents of the character features are not limited herein.
Referring to fig. 19, in one example, the template tag is "motion", and the determined target tags include: "play basketball", "swim" and "run". The material segment corresponding to "play basketball" is the material segment L1; the material segment corresponding to "swim" is the material segment L2; and the material segments corresponding to "run" are the material segment L3 and the material segment L4.
In the material segment L4, the gender of the person is male, while in the material segments L1-L3 the gender of the person is female. The preset character characteristic is: female. The material segments whose character features match the preset character features are therefore the material segments L1-L3.
Thus, the material segments L1-L3 are determined as the segments to be spliced from among the material segments L1-L4, and a preview of the target video L, i.e., the wonderful story, is then generated from the segments to be spliced L1-L3.
After the video shooting is completed, the electronic device 100 may display the icons of "video" and "wonderful story", and the user clicks the icon of "wonderful story", so that the electronic device 100 plays the material segments L1-L3. If the user is satisfied, the user can click the "generate" icon; at this time, the electronic device 100 clips the material segments L1-L3 from their respective corresponding videos, splices them, and stores the result as the target video.
Referring to fig. 20, in some embodiments, step S286 includes:
step S2862: cutting out the segments to be spliced from the video corresponding to the segments to be spliced;
step S2864: splicing the cut-out segments to be spliced to obtain the target video.
Therefore, before the target video is generated, the material segments do not need to be cut from the video, the processing efficiency can be improved, the power consumption is saved, and the storage space is saved.
Referring to fig. 19 and 21, after the video shooting is completed, the electronic device 100 may display icons of "video" and "wonderful story", and the user clicks the icon of "wonderful story", so that the electronic device 100 plays the material segments L1-L3. At this point, only the preview is played, and no target video has been generated.
If the user is satisfied, the user can click the "generate" icon; at this time, the electronic device 100 clips the material segments L1-L3 from their respective corresponding videos, splices them, and stores the result as the target video L.
In step S2864, the cut-out segments to be spliced may be spliced in order of their shooting times to obtain the target video. In this way, the change of the segments to be spliced over time can be reflected, which improves the quality of the target video. For example, progress in the person's basketball skills over time can be shown.
In addition, the cut segments to be spliced can be spliced according to the length of the segments to be spliced so as to obtain the target video. Therefore, the change of the segment to be spliced in time length can be reflected, and the impact of the target video is improved.
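The two ordering strategies for step S2864 can be sketched as follows (the actual cutting and muxing of the clips is omitted; the types and field names are illustrative only):

```kotlin
// Hedged sketch: order the cut segments before concatenation, either by shooting
// time or by clip duration, as the two strategies above describe.
data class CutSegment(val path: String, val shotAtMs: Long, val durationMs: Long)

fun orderByShootingTime(segments: List<CutSegment>): List<CutSegment> =
    segments.sortedBy { it.shotAtMs }      // shows change over time, e.g. improving skills

fun orderByDuration(segments: List<CutSegment>): List<CutSegment> =
    segments.sortedBy { it.durationMs }    // builds up from shorter to longer clips
```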
Referring to fig. 22, in some embodiments, a video processing method includes:
step S27: determining the starting time and the ending time of the material segments in the corresponding video;
step S286 includes:
step S2863: cutting out the segments to be spliced from the videos corresponding to the segments to be spliced according to the starting time and the ending time.
Therefore, the cutting of the segments to be spliced can be simply and conveniently realized.
In step S27, the start time and the end time of the material segment in the video can be determined according to the data corresponding to the material segment. For example, in the example of fig. 5, the material segment is the segment corresponding to the to-be-processed images I4-I6; the start time of the material segment in the video can be determined from the to-be-processed image I4, and its end time can be determined from the to-be-processed image I6.
Further, after determining the starting time and the ending time of the material segments in the corresponding videos, the starting time and the ending time can be stored for marking; the start time and the end time may also be marked on the time axis of the video. The specific manner in which the start time and the end time are marked is not limited herein.
In the example of fig. 5, the material tags and material segments are stored in the form of a table. In other examples, the material tags and material segments may be associated in other forms as well. The specific manner in which the material segments are marked is not limited herein.
Additionally, in some embodiments, the digital template includes audio, and step S286 includes:
acquiring audio, wherein the audio has a plurality of rhythm points;
acquiring a plurality of to-be-spliced segments and taking the plurality of to-be-spliced segments as a plurality of video materials;
the audio and the plurality of video materials are processed according to the rhythm points to form a target video, so that the video materials in the target video are switched at least one rhythm point of the audio.
Therefore, when the target video is generated, the audio can be matched with the target video, so that the target video is richer and has higher quality. A video processing method for matching a target video with audio will be described in detail below.
Note that the "video material" is the "segment to be spliced" above. In other words, after step S284, that is, after determining the segment to be spliced from the material segments corresponding to the target tag, the target video may be generated by the following video processing method.
Referring to fig. 23 and 24, in some embodiments, a video processing method includes:
step S12: acquiring audio, wherein the audio has a plurality of rhythm points;
step S14: acquiring a plurality of video materials;
step S16: the audio and the plurality of video materials are processed according to the rhythm points to form a target video, so that the video materials in the target video are switched at least one rhythm point of the audio.
In some embodiments, the processor 101 of the electronic device 100 is configured to: obtain audio, the audio having a plurality of rhythm points; acquire a plurality of video materials; and process the audio and the plurality of video materials according to the rhythm points to form a target video, so that the video materials in the target video are switched at at least one rhythm point of the audio.
According to the video processing method, the audio and the video materials are processed according to the rhythm point of the audio to form the target video, so that the video materials in the target video are switched at least one rhythm point of the audio, the video materials can be matched with the rhythm point of the audio under the condition that a user does not need to manually adjust, the expressive force and the impact force of the target video are simply and conveniently improved, and the effect of the target video is better.
It is understood that the related art may clip and splice a plurality of video materials selected by a user according to a preset template to form one video. However, due to the inexperience or aesthetics of users, the editing and splicing of video material is inefficient and less effective. In addition, the preset template can comprise audio, and the user needs to manually adjust the video material to match the video with the audio, so that the operation is complicated.
For example, the related art indicates the rhythm points on the interface, and indicates the length of the video to be spliced, which needs to be filled between every two adjacent rhythm points. The user needs to select the video materials to be filled between every two adjacent rhythm points one by one and manually cut the video materials, so that the finally spliced target video can be matched with the audio. Under the condition that the rhythm points are dense, the operation of the user is complicated. In addition, under the condition that a certain video material is short and is not enough to be placed between two adjacent rhythm points, the video material cannot be used.
In the embodiment, after the user selects a plurality of video materials, the video materials in the target video can be switched at least one rhythm point of the audio without manually cutting and adjusting the video materials, so that the video materials are matched with the rhythm points of the audio. Therefore, the music and the picture of the target video are changed uniformly, the expressive force and the impact force are more prominent, and the effect is better.
In step S12, the audio may be in the target video as background music. The user may select audio in an audio library or by selecting a video template. The video processing method may include: acquiring a video template, wherein the video template comprises audio; processing audio and a plurality of video material according to a tempo point to form a target video, comprising: audio and a plurality of video assets are processed according to the tempo points and the video templates.
In particular, a video template may refer to a series of method policies for processing video material, which policies may be embodied in program code. The video template is used to help the user quickly assemble the assets and generate a target video. The video template may include filters, animated stickers, audio rhythm points, etc. The specific content of the video template is not limited herein. It can be understood that different target videos can be obtained by applying different video templates to the same video material.
The rhythm points are key time points derived from the audio fluctuations or the beats of the audio. Specifically, the rhythm points may be marked in the audio in advance, and the rhythm points of the audio may be acquired when the audio is acquired. Of course, the rhythm points may also not be marked in the audio in advance; after the audio is acquired, the audio may be processed to obtain its rhythm points. The obtained rhythm points can be stored locally in the electronic device 100, or can be uploaded to a server along with the audio, and other users can download the audio with the rhythm point marks from the server.
In step S14, acquiring a plurality of video materials may mean that the user manually selects video materials from a larger set, so that the processor 101 acquires them. For example, the electronic device 100 displays 9 video materials with the tag "basketball", the user selects 3 of them, and the processor 101 processes the audio and the 3 video materials according to the rhythm points to form the target video.
Of course, the plurality of video materials may also be acquired automatically by the processor 101. For example, the electronic device 100 has 10 video materials labeled "basketball"; the processor 101 selects 5 of them and processes the audio and the 5 video materials according to the rhythm points to form the target video.
The specific form in which the plurality of video materials are acquired is not limited herein. For convenience of explanation, the following description will be given taking an example in which a user manually selects a video material from a plurality of video materials.
In step S16, the audio and the plurality of video materials are processed according to the rhythm point to form the target video, that is, the plurality of video materials are subjected to video editing. Besides cutting and splicing a plurality of video materials and adding audio, the video editing can also add filters, letters, animation stickers and the like to the plurality of video materials to finally generate a target video.
In addition, "the video material in the target video is switched at least one rhythm point of the audio" means that the target video is switched from a picture of one video material to a picture of the next video material at the rhythm point of the audio.
In the example of fig. 24, the user has selected 3 video materials: video material V1, video material V2 and video material V3. The user clicks "generate", and the 3 video materials are processed. The acquired audio M includes 3 rhythm points: rhythm point P1, rhythm point P2 and rhythm point P3. The processor 101 processes the audio M and the video materials V1, V2 and V3 according to the rhythm points to form a target video VM.
Specifically, the processor 101 clips the video material V1 to obtain a video to be spliced V11 and a discarded segment V12. The start time of the video to be spliced V11 in the audio M is t0, and its end time in the audio M is t1. The time t1 coincides with the rhythm point P1. The discarded segment V12 does not participate in the composition of the target video VM.
No part of the video material V2 is clipped by the processor 101, and the whole of the video material V2 is used as a video to be spliced. The start time of the video to be spliced V2 in the audio M is t1, and its end time in the audio M is t2. The time t1 coincides with the rhythm point P1.
The processor 101 clips the video material V3 to obtain a video to be spliced V31 and a discarded segment V32. The start time of the video to be spliced V31 in the audio M is t2, and its end time in the audio M is t4. The time t4 coincides with the rhythm point P3. The discarded segment V32 does not participate in the composition of the target video VM.
That is, in the target video VM, the video to be spliced V11 derived from the video material V1 and the video to be spliced V2 derived from the video material V2 are switched at the rhythm point P1 of the audio M.
Therefore, while the target video VM is playing, at the rhythm point P1 of the audio, just as the user hears the rhythm change, the user sees the target video VM switch from the video to be spliced V11 to the video to be spliced V2. The user is thus stimulated by the target video VM both visually and aurally, the target video VM has a stronger impact, the music and the pictures are better matched, and the user experience is better.
In addition, it can be understood that, in the target video, the switching of the video materials before and after a rhythm point can be performed at 1 rhythm point of the audio; the switching of video materials before and after a rhythm point may also be performed at a plurality of rhythm points of the audio, e.g. at 2, 3, 4, 6 or another number of rhythm points, respectively.
In the example of fig. 24, the target video VM switches video materials before and after a rhythm point at 1 rhythm point of the audio, that is, the video material V1 and the video material V2 are switched at the rhythm point P1.
Referring to fig. 25, the target video VM switches video materials before and after a rhythm point at 2 rhythm points of the audio, that is, at the rhythm point P1 and the rhythm point P2, respectively. Specifically, at the rhythm point P1, the video material V1 and the video material V2 before and after the rhythm point P1 are switched. At the rhythm point P2, the video material V2 and the video material V3 before and after the rhythm point P2 are switched.
Specifically, the processor 101 crops the video material V1 to obtain a video to be spliced V11 and a discarded segment V12, where the start time of the video to be spliced V11 in the audio M is t0, and the end time of the video to be spliced V11 in the audio M is t1. The time t1 coincides with the rhythm point P1. The discarded segment V12 does not participate in the composition of the target video VM.
The part of the video material V2 cropped by the processor 101 is empty, and the video material V2 is used in its entirety as the video to be spliced V2. The start time of the video to be spliced V2 in the audio M is t1, and the end time of the video to be spliced V2 in the audio M is t2. The time t1 coincides with the rhythm point P1. The time t2 coincides with the rhythm point P2.
The processor 101 crops the video material V3 to obtain a video to be spliced V31 and a discarded segment V32, where the start time of the video to be spliced V31 in the audio M is t2, and the end time of the video to be spliced V31 in the audio M is t3. The time t2 coincides with the rhythm point P2. The time t3 coincides with the rhythm point P3. The discarded segment V32 does not participate in the composition of the target video VM.
Referring to fig. 26, in some embodiments, step S16 includes:
step S162: cutting a plurality of video materials according to the rhythm points to obtain a plurality of videos to be spliced;
step S164: splicing the plurality of videos to be spliced to obtain the target video, wherein the splicing position of at least one video to be spliced coincides with a rhythm point.
Correspondingly, the processor 101 is configured to crop the plurality of video materials according to the rhythm points to obtain the plurality of videos to be spliced; and to splice the plurality of videos to be spliced to obtain the target video, wherein the splicing position of at least one video to be spliced coincides with a rhythm point.
In this way, processing the audio and the plurality of video materials according to the rhythm points to form the target video is realized, so that the video materials in the target video are switched at at least one rhythm point of the audio. It will be appreciated that the lengths of the plurality of video materials are often uncertain. Therefore, if the plurality of video materials are spliced directly, it cannot be ensured that the splicing position of at least one video to be spliced coincides with a rhythm point.
According to the video processing method, the plurality of video materials are cropped according to the rhythm points to obtain the plurality of videos to be spliced, and the plurality of videos to be spliced are spliced to obtain the target video. This avoids the situation in which, because the lengths of the video materials are uncertain, the splicing positions cannot coincide with the rhythm points, so that the video materials in the target video can be switched at at least one rhythm point of the audio.
Specifically, in step S162, the plurality of video materials and the plurality of videos to be spliced correspond to each other one to one. In other words, a video material is cut according to the rhythm point, and the video to be spliced corresponding to the video material can be obtained. Note that cropping video material includes two cases: in the first case, a video material is cut into a video to be spliced and a waste fragment, the video to be spliced participates in the synthesis of the target video, and the waste fragment does not participate in the synthesis of the target video; in the second case, the part of the video material to be cut is empty, and the video material is completely used as the video to be spliced. The specific form of cropping the video material is not limited herein.
In the example of fig. 24, cropping of the video material V1 and the video material V3 is the first case described above, and cropping of the video material V2 is the second case described above.
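The two cropping cases above can be expressed as a small helper; treating a material as a plain duration, and the function and variable names, are assumptions made for illustration only.

```python
def crop_material(material_duration, interval_duration):
    """Return (kept, discarded) durations for one video material.

    Case 1: the material is longer than the interval to its target rhythm
    point, so it is split into a video to be spliced and a discarded segment.
    Case 2: nothing is cut away; the whole material becomes the video to be spliced.
    """
    if material_duration > interval_duration:
        return interval_duration, material_duration - interval_duration
    return material_duration, 0.0

print(crop_material(5.0, 3.0))  # (3.0, 2.0)  e.g. V1 -> V11 plus discarded segment V12
print(crop_material(2.5, 4.0))  # (2.5, 0.0)  e.g. V2 is used in full
```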
In step S164, splicing the plurality of videos to be spliced to obtain the target video may be performed by first cropping all the video materials to obtain all the videos to be spliced and then splicing all the videos to be spliced, as shown in fig. 24. Alternatively, a part of the video materials may be cropped to obtain their corresponding videos to be spliced and these may be spliced first, and then another part of the video materials may be cropped and the resulting videos to be spliced appended, until the target video is obtained. The specific form of cropping and splicing is not limited herein.
Referring to fig. 27, in an example, the processor 101 first crops the video material V1 to obtain the video to be spliced V11 and the discarded segment V12, and splices the video to be spliced V11 with the video to be spliced V0. The discarded segment V12 does not participate in the composition of the target video VM.
The processor 101 then crops the video material V2. Specifically, the part of the video material V2 that is cropped is empty, and the video material V2 is used in its entirety as a video to be spliced. The video to be spliced V2 is spliced onto the video obtained by splicing the video to be spliced V0 and the video to be spliced V11, so as to obtain the target video VM. In the target video, the splicing position of the video to be spliced V11 and the video to be spliced V2 coincides with the rhythm point P1.
Referring to fig. 28, in some embodiments, step S162 includes:
step S1621: determining the starting time of the current video material in the audio;
step S1622: determining the material duration of the current video material;
step S1623: and cutting the current video material according to the starting time, the material duration and the rhythm point to obtain the current video to be spliced.
Correspondingly, the processor 101 is configured to determine the start time of the current video material in the audio; to determine the material duration of the current video material; and to crop the current video material according to the start time, the material duration and the rhythm points to obtain the current video to be spliced.
In this way, the plurality of video materials can be cropped according to the rhythm points to obtain the plurality of videos to be spliced. As described above, a rhythm point is a key time point derived from the fluctuation of the audio or the tempo of the audio. "Splicing" means that the end time of one video to be spliced is the start time of the next video to be spliced. Therefore, the current video material is cropped according to the start time, the material duration and the rhythm points, so that the cropping of the video material is appropriate and the end time of the video to be spliced coincides with a rhythm point of the audio.
Specifically, the step S1621 includes: under the condition that the current video material is the first video material, taking the starting time of the audio as the starting time of the current video material in the audio; and under the condition that the current video material is not the first video material, taking the ending time of the previous video material of the current video material as the starting time of the current video material in the audio. In this manner, the determination of the starting time of the current video material in the audio is achieved.
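A minimal sketch of this rule follows; the function name, its arguments, and the idea of tracking the previous material's end time in a variable are assumptions for illustration.

```python
def start_time_in_audio(is_first_material, audio_start, previous_end):
    """Start time of the current video material on the audio timeline.

    First material: the start time of the audio. Otherwise: the end time of
    the previous video material in the audio.
    """
    return audio_start if is_first_material else previous_end

print(start_time_in_audio(True, 0.0, None))   # 0.0 -> t0 for the first material
print(start_time_in_audio(False, 0.0, 3.0))   # 3.0 -> t1 for the second material
```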
In step S1622, a material duration of the current video material may be determined by reading attribute data of the current video material. The attribute data may include the duration, resolution, frame rate, format, etc. of the current video material.
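As one possible way of reading such attribute data, the sketch below queries the container duration with FFmpeg's ffprobe tool; the assumption that ffprobe is available, and the example file name, are not part of the original description, and any metadata reader exposing duration, resolution or frame rate would serve equally well.

```python
import subprocess

def material_duration_seconds(path: str) -> float:
    """Read the duration attribute of a video file, in seconds."""
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout.strip())

# Example call with an assumed file name:
# print(material_duration_seconds("V1.mp4"))
```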
Referring to fig. 29, in some embodiments, step S1623 includes:
step S1624: determining a target rhythm point of the current video material according to the starting moment and the rhythm point;
step S1627: determining the interval duration from the starting moment to the target rhythm point;
step S1628: and cutting the current video material according to the material duration and the interval duration.
Correspondingly, the processor 101 is configured to determine the target rhythm point of the current video material according to the start time and the rhythm points; to determine the interval duration from the start time to the target rhythm point; and to crop the current video material according to the material duration and the interval duration.
In this way, the current video material is cropped according to the start time, the material duration and the rhythm points, so that the current video to be spliced is obtained. Note that the target rhythm point is the rhythm point at which the switching of two adjacent videos to be spliced needs to occur. In other words, at the target rhythm point, one video to be spliced ends and the next video to be spliced starts.
It can be understood that the current video material is cropped according to the material duration and the interval duration from the start time to the target rhythm point, so that the end time of the cropped video to be spliced can coincide with the target rhythm point, and the splicing position between this video to be spliced and the next video to be spliced can coincide with the target rhythm point. Therefore, the switching of videos to be spliced at the target rhythm point can be realized simply and conveniently.
Specifically, in step S1627, the start time may be subtracted from the time of the target rhythm point to determine the interval duration from the start time to the target rhythm point. As mentioned before, a rhythm point is a key time point. Therefore, the interval duration can be obtained by directly subtracting the start time from the time of the target rhythm point. In this way, the determination of the interval duration can be realized simply and conveniently, which saves processing time and improves processing efficiency.
In the example of fig. 24, the start time of the video material V1 in the audio M is t0, the target rhythm point is the rhythm point P1, and the interval duration from the start time t0 to the target rhythm point P1 is t1-t0. The material duration of the video material V1 is greater than the interval duration. Therefore, the video material V1 is cropped to obtain the video to be spliced V11 and the discarded segment V12, so that the duration of the video to be spliced V11 is equal to the interval duration from the start time t0 to the target rhythm point P1. Thus the end time of the video to be spliced V11 in the audio M, which is t1, coincides with the target rhythm point P1.
Referring to fig. 30, in some embodiments, step S1624 includes:
step S1625: under the condition that the starting time is coincident with the rhythm point, taking a first rhythm point after the starting time as a target rhythm point;
step S1626: and under the condition that the starting time is not coincident with each rhythm point, taking a second rhythm point after the starting time as a target rhythm point.
Correspondingly, the processor 101 is configured to, in a case where the start time coincides with a rhythm point, take the first rhythm point after the start time as the target rhythm point; and, in a case where the start time does not coincide with any rhythm point, take the second rhythm point after the start time as the target rhythm point.
In this way, the target rhythm point of the current video material is determined according to the start time and the rhythm points. It can be understood that, in the case where the start time of the current video material coincides with a rhythm point, the switching between the current video material and the previous video to be spliced occurs at that rhythm point. The first rhythm point after the start time is taken as the target rhythm point of the current video material, and the current video material is cropped so that the end time of the current video to be spliced coincides with the target rhythm point. Therefore, the current video to be spliced and the next video to be spliced can also be switched at a target rhythm point, which further improves the impact of the target video.
In the case where the start time does not coincide with any rhythm point, it can be inferred that the previous video to be spliced did not end at a rhythm point. That is, the material duration of the previous video material is less than the interval duration from the start time of the previous video material in the audio to its target rhythm point, and the previous video material is not long enough to reach that target rhythm point. In this case, the second rhythm point after the start time can be taken as the target rhythm point, so that the previous video material is still used in the target video while the current video material realizes the switching of video materials at the second rhythm point after the start time, instead of the previous video material being discarded merely because its duration is insufficient. Therefore, the requirement on the duration of the video materials can be reduced and every video material is utilized, which improves the user experience.
In the example of fig. 24, the start time t1 of the video material V2 coincides with the rhythm point P1, and therefore the first rhythm point after the start time t1, that is, the rhythm point P2, is the target rhythm point of the video material V2.
The start time t2 of the video material V3 does not coincide with the rhythm point P1, the rhythm point P2 or the rhythm point P3, so the second rhythm point after the start time t2, that is, the rhythm point P3, is the target rhythm point of the video material V3.
Thus, even if the material duration of the video material V2 is less than the interval duration t3-t1, the video material V2 can participate in the target video VM without being rendered unusable. In addition, even though the video material V2 ends before the rhythm point P2 and cannot match the rhythm point P2, the video material V3 causes the video to be spliced V31 to coincide with the rhythm point P3 at its end time in the audio M. It can be understood that the next video to be spliced can be spliced after the video to be spliced V31, so that the switching of videos to be spliced matches the rhythm points again.
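The selection rule of steps S1625 and S1626, together with the interval computation of step S1627, can be sketched as follows. The helper names, the tolerance used to decide whether two times "coincide", and the numeric rhythm-point times are all assumptions for illustration.

```python
def target_rhythm_point(start_time, rhythm_points, eps=1e-6):
    """Pick the target rhythm point for the current video material.

    If the start time coincides with a rhythm point, take the first rhythm
    point after the start time; otherwise take the second one after it.
    """
    coincides = any(abs(start_time - p) <= eps for p in rhythm_points)
    upcoming = sorted(p for p in rhythm_points if p > start_time + eps)
    index = 0 if coincides else 1
    return upcoming[index] if index < len(upcoming) else None

def interval_duration(start_time, target_point):
    # Step S1627: subtract the start time from the time of the target rhythm point.
    return target_point - start_time

points = [3.0, 7.0, 10.0]                 # assumed times of P1, P2, P3
print(target_rhythm_point(3.0, points))   # 7.0  -> V2 starts on P1, so its target is P2
print(target_rhythm_point(5.5, points))   # 10.0 -> V3 starts off-beat, so its target is P3
print(interval_duration(5.5, 10.0))       # 4.5
```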
Referring to fig. 31, in some embodiments, step S1628 includes:
step S1629: under the condition that the material duration is greater than the interval duration, cropping the current video material so that the end time of the current video to be spliced in the audio coincides with the target rhythm point.
Correspondingly, the processor 101 is configured to, in a case where the material duration is greater than the interval duration, crop the current video material so that the end time of the current video to be spliced in the audio coincides with the target rhythm point.
In this way, the current video material is cropped according to the material duration and the interval duration. It can be understood that, in the case where the material duration is greater than the interval duration, it can be inferred that the current video material is long enough to reach the target rhythm point; if the material is not cropped, the target rhythm point will be missed, so that the switching between the current video to be spliced and the next video to be spliced cannot occur at the target rhythm point. Therefore, the redundant part can be cropped off, so that the end time of the current video to be spliced in the audio coincides with the target rhythm point, and the switching between the current video to be spliced and the next video to be spliced can be performed at the target rhythm point simply and conveniently.
In the example of fig. 24, the material duration of the video material V1 is greater than the interval duration t1-t0, and therefore the video material V1 can be cropped into the video to be spliced V11 and the discarded segment V12. In this way, the end time of the video to be spliced V11 in the audio M, which is t1, coincides with the target rhythm point P1, so that the video to be spliced V11 and the video to be spliced V2 switch at the target rhythm point P1.
Referring to fig. 32, in some embodiments, step S1628 includes:
step S162a: under the condition that the material duration is less than or equal to the interval duration, taking the ending time of the current video material in the audio as the starting time of the next video material in the audio.
Correspondingly, the processor 101 is configured to, in a case where the material duration is less than or equal to the interval duration, take the ending time of the current video material in the audio as the starting time of the next video material in the audio.
In this way, the current video material is cropped according to the material duration and the interval duration. It will be appreciated that, where the material duration is less than or equal to the interval duration, the current video material is not long enough to reach its target rhythm point. In this case, the end time of the current video material in the audio is taken as the start time of the next video material in the audio, so that the current video material is still used in the target video while the next video material realizes the switching of videos to be spliced at the second rhythm point after that start time, instead of the current video material being discarded merely because its duration is insufficient. Therefore, the requirement on the duration of the video materials can be reduced and every video material is utilized, which improves the user experience.
In the example of fig. 24, the material duration of the video material V2 is less than the interval duration t3-t1, and the end time t2 of the video material V2 in the audio M is taken as the start time of the next video material, that is, the video material V3, in the audio M. In this way, even though the material duration of the video material V2 is less than the interval duration, the video material V2 can participate in the target video VM without being rendered unusable. In addition, even though the video material V2 ends before the rhythm point P2 and cannot match the rhythm point P2, the video material V3 causes the video to be spliced V31 to coincide with the rhythm point P3 at its end time in the audio M. It can be understood that the next video to be spliced can be spliced after the video to be spliced V31, so that the switching of videos to be spliced matches the rhythm points again.
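Putting steps S1621 through S162a together, one possible end-to-end loop is sketched below. Representing clips as (name, start, end) tuples, the numeric durations and rhythm-point times, and the choice to treat the audio start as if it coincided with a rhythm point (so that the first material targets the first rhythm point, as in the fig. 24 example) are all assumptions; the disclosure does not spell out how the first material's target is chosen.

```python
def build_timeline(materials, rhythm_points, eps=1e-6):
    """materials: list of (name, duration); returns a list of (name, start, end)
    laid out on the audio timeline so that clip switches land on rhythm points
    whenever the material is long enough to reach its target rhythm point."""
    timeline, start = [], 0.0
    for name, duration in materials:
        # Steps S1624-S1626: choose the target rhythm point. The audio start is
        # treated as on-beat here (an assumption) so the first material targets P1.
        coincides = start == 0.0 or any(abs(start - p) <= eps for p in rhythm_points)
        upcoming = [p for p in sorted(rhythm_points) if p > start + eps]
        idx = 0 if coincides else 1
        target = upcoming[idx] if idx < len(upcoming) else None

        # Step S1627: interval from the start time to the target rhythm point.
        interval = (target - start) if target is not None else float("inf")

        if duration > interval:
            # Step S1629: crop so the clip ends exactly on the target rhythm point.
            end = start + interval
        else:
            # Step S162a: too short to reach the target point; use it in full and
            # let the next material start where this one ends.
            end = start + duration
        timeline.append((name, start, end))
        start = end
    return timeline

materials = [("V1", 5.0), ("V2", 2.5), ("V3", 8.0)]      # assumed durations
print(build_timeline(materials, [3.0, 7.0, 10.0]))
# [('V1', 0.0, 3.0), ('V2', 3.0, 5.5), ('V3', 5.5, 10.0)] -> matches the fig. 24 layout
```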
Referring to fig. 33, an electronic device 100 is provided in an embodiment of the present disclosure. The electronic device 100 comprises a memory 103 and a processor 101, wherein the processor 101 is connected to the memory 103, and the processor 101 is configured to execute the video processing method according to any of the above embodiments.
For example, performing: step S24: identifying material segments meeting preset conditions in the video; step S26: determining a material label corresponding to the material segment; and step S28: and splicing the material segments according to the material labels and a preset digital template to generate the target video.
The electronic device 100 of the embodiment of the present application identifies the material segments of the video by setting the predetermined condition, and can identify the more exciting or more interesting parts of the video as the material segments, so that the target video generated by splicing the material segments has a better effect.
The embodiment of the application also provides a computer-readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors 101, cause the processors 101 to perform the video processing method of any of the embodiments described above.
For example, performing: step S24: identifying material segments meeting preset conditions in the video; step S26: determining a material label corresponding to the material segment; and step S28: and splicing the material segments according to the material labels and a preset digital template to generate the target video.
According to the computer-readable storage medium, the material segments of the video are identified by setting the predetermined condition, and the more exciting or more interesting parts of the video can be identified as the material segments, so that the target video generated by splicing the material segments has a better effect.
FIG. 33 is a block diagram of the electronic device 100 according to an embodiment. The electronic device 100 includes a processor 101, a memory 102 (e.g., a non-volatile storage medium), an internal memory 103, a display device 104, and an input device 105 connected by a system bus 110. The memory 102 of the electronic device 100 stores an operating system and computer-readable instructions, among other things. The computer-readable instructions can be executed by the processor 101 to implement the video processing method of any one of the above embodiments.
The processor 101 may be used to provide computing and control capabilities, supporting the operation of the entire electronic device 100. The internal memory 103 of the electronic device 100 provides an environment for the execution of the computer-readable instructions in the memory 102. The input device 105 may be a key, a trackball, or a touch pad provided on the housing of the electronic device 100, or may be an external keyboard, touch pad, or mouse.
It will be appreciated by those skilled in the art that the configurations shown in the figures are merely schematic representations of portions of configurations relevant to the present disclosure, and do not constitute limitations on the electronic devices to which the present disclosure may be applied, and that a particular electronic device may include more or fewer components than shown in the figures, or may combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, and the program may be stored in a non-volatile computer-readable storage medium; when executed, it may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (17)

1. A video processing method, characterized in that the video processing method comprises:
identifying material segments meeting preset conditions in the video;
determining a material label corresponding to the material segment; and
splicing the material segments according to the material labels and a preset digital template to generate a target video;
the method for identifying the material segments meeting the preset conditions in the video comprises the following steps:
determining a target frame image in the video;
and taking, as the material segments, segments in which the proportion of the number of target frame images in the video is greater than a preset proportion threshold, wherein the first frame and the last frame of each material segment are target frame images.
2. The video processing method of claim 1, wherein identifying material segments in the video that satisfy a predetermined condition comprises:
identifying the material segments in the video that meet a predetermined condition at the time of the video capture.
3. The video processing method according to claim 1, wherein the target frame image is an image with a sharpness greater than a preset sharpness threshold in the video.
4. The video processing method according to claim 1, wherein the video processing method comprises:
determining a tag of a video segment in the video, wherein the video segment comprises the material segment;
determining a material label corresponding to the material segment, including:
and taking the label of the video segment corresponding to the material segment as the material label.
5. The video processing method of claim 4, wherein determining the label of a video segment in the video comprises:
determining the video segment in the video;
acquiring a preset label set, wherein the label set comprises a plurality of labels to be selected;
and taking the tag to be selected matched with the video segment in the tag set as the tag of the video segment.
6. The video processing method of claim 4, wherein identifying material segments in the video that satisfy a predetermined condition comprises:
when the label of the video segment is a motion label, determining a target action of the video segment according to the label of the video segment;
and taking the segment comprising the target action in the video segment as the material segment.
7. The video processing method according to claim 1, wherein the digital template comprises a template label, and splicing the material segments according to the material labels and a preset digital template to generate the target video comprises:
taking the material label corresponding to the template label as a target label;
determining fragments to be spliced from the material fragments corresponding to the target labels;
and generating the target video according to the segment to be spliced.
8. The video processing method according to claim 7, wherein the number of the material labels is plural, and taking the material label corresponding to the template label as the target label comprises:
calculating the matching degree of the template label and each material label;
and determining the target label according to the matching degree.
9. The video processing method according to claim 7, wherein determining a segment to be spliced from the material segments corresponding to the target label comprises:
and taking all the material segments corresponding to the target label as the segments to be spliced.
10. The video processing method according to claim 7, wherein determining a segment to be spliced from the material segments corresponding to the target label comprises:
and selecting the segments to be spliced from the material segments corresponding to the target label according to preset conditions.
11. The video processing method according to claim 10, wherein the preset conditions include a preset time threshold, and selecting the segments to be spliced from the material segments corresponding to the target label according to the preset conditions comprises:
determining shooting time of the material segment corresponding to the target label;
determining a time difference between the shooting time and the generation time of the target video;
and taking the material segments with the time difference smaller than the time threshold value as the segments to be spliced.
12. The video processing method of claim 10, wherein the preset conditions include a preset person characteristic, and selecting the segments to be spliced from the material segments corresponding to the target label according to the preset conditions comprises:
determining person characteristics in the material segments corresponding to the target label;
and taking the material segments whose person characteristics match the preset person characteristic as the segments to be spliced.
13. The video processing method according to claim 7, wherein generating the target video according to the segment to be spliced comprises:
cutting out the segments to be spliced from the video corresponding to the segments to be spliced;
splicing the cut segments to be spliced to obtain the target video.
14. The video processing method according to claim 13, wherein the video processing method comprises:
determining the starting time and the ending time of the material segments in the corresponding video;
wherein cutting out the segments to be spliced from the video corresponding to the segments to be spliced comprises:
cutting out the segments to be spliced from the video corresponding to the segments to be spliced according to the starting time and the ending time.
15. The video processing method according to claim 7, wherein the digital template comprises audio, and the generating the target video according to the segment to be spliced comprises:
acquiring the audio, wherein the audio has a plurality of rhythm points;
acquiring a plurality of to-be-spliced segments and taking the plurality of to-be-spliced segments as a plurality of video materials;
processing the audio and the plurality of video materials according to the rhythm points to form the target video, so that the video materials in the target video are switched at at least one rhythm point of the audio.
16. An electronic device, comprising a memory and a processor, the processor being coupled to the memory, the processor being configured to perform the video processing method of any of claims 1-15.
17. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the video processing method of any of claims 1-15.