CN110381371A - Video editing method and electronic device - Google Patents

Video editing method and electronic device

Info

Publication number
CN110381371A
Authority
CN
China
Prior art keywords
video
input
target
style
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910696203.1A
Other languages
Chinese (zh)
Other versions
CN110381371B (en)
Inventor
龚烜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weiwo Software Technology Co ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910696203.1A
Publication of CN110381371A
Application granted
Publication of CN110381371B
Status: Active
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a video editing method and an electronic device. The method includes: displaying a content tag set and a style tag set of a first video to be edited, where the content tag set includes at least one content tag, each content tag corresponds to at least one video segment in the first video, the style tag set includes at least one style tag, each style tag corresponds to at least one material combination, and each material combination includes at least one editing material; receiving a user's first input on the content tag set; in response to the first input, extracting from the first video the target video segment corresponding to the target content tag selected by the first input; receiving the user's second input on the style tag set; in response to the second input, obtaining the target material combination corresponding to the target style tag selected by the second input; and synthesizing the target video segment with the editing materials in the target material combination to generate a second video.

Description

Translated from Chinese
Video editing method and electronic device

Technical Field

Embodiments of the present invention relate to the technical field of image processing, and in particular, to a video editing method and an electronic device.

Background

Video editing technology processes video segments in a video through editing operations to generate video works with different expressiveness; it is commonly applied in scenarios such as short-video production and video highlights.

In the prior art, video editing mainly relies on manual editing. When editing a video, the user needs to spend a great deal of time aligning video speed and length on the track, screening transition effects, matching the audio rhythm, and so on. The operation is cumbersome and video editing efficiency is low.

Summary of the Invention

Embodiments of the present invention provide a video editing method and an electronic device to solve the technical problem of low video editing efficiency in the prior art.

To solve the above technical problem, the embodiments of the present invention are implemented as follows:

In a first aspect, an embodiment of the present invention provides a video editing method, the method including:

displaying a content tag set and a style tag set of a first video to be edited, where the content tag set includes at least one content tag, each content tag corresponds to at least one video segment in the first video, the style tag set includes at least one style tag, each style tag corresponds to at least one material combination, and each material combination includes at least one editing material;

receiving a user's first input on the content tag set;

in response to the first input, extracting from the first video the target video segment corresponding to the target content tag selected by the first input;

receiving the user's second input on the style tag set;

in response to the second input, obtaining the target material combination corresponding to the target style tag selected by the second input;

synthesizing the target video segment with the editing materials in the target material combination to generate a second video.

Optionally, as an embodiment, before displaying the content tag set and style tag set of the first video to be edited, the method further includes:

classifying each video frame in the first video to obtain at least one video segment, where the video frames within each video segment belong to the same category;

extracting a subtitle segment of each video segment;

extracting at least one keyword of each subtitle segment according to the frequency of occurrence of each word in the subtitle segment;

determining the at least one keyword of each subtitle segment as the content tag of the corresponding video segment.

Optionally, as an embodiment, before displaying the content tag set and style tag set of the first video to be edited, the method further includes:

obtaining a preset number of video editing samples, where the video editing samples are videos that have undergone video editing processing, and each video editing sample contains at least one editing material;

extracting at least one material feature of each editing material in each video editing sample, where the material feature is used to identify the editing material;

obtaining a style tag of each video editing sample;

combining the extracted material features and mapping them to the corresponding style tags to obtain the material combination corresponding to each style tag.

Optionally, as an embodiment, combining the extracted material features and mapping them to the corresponding style tags to obtain the material combination corresponding to each style tag includes:

for the N classes of material features obtained under each style tag Pi, counting the number of times each material feature in each class is used;

determining the top M most-used material features in each class of material features;

combining the top M material features of each of the N classes to obtain M^N material feature sets;

calculating a material relevance for each material feature set, and mapping the material feature sets whose material relevance ranks in the top S to the style tag Pi, to obtain the material combination corresponding to the style tag Pi;

where the style tag Pi is the i-th style tag among the obtained style tags, each material feature set includes N material features of different types, the material relevance is the correlation among all the material features in the material feature set, and N, M, and S are all integers greater than 1.

Optionally, as an embodiment, after synthesizing the target video segment with the editing materials in the target material combination to generate the second video, the method further includes:

receiving a third input from the user on the second video;

in response to the third input, adding the style tag entered by the third input to the second video;

determining the second video as one of the preset number of video editing samples.

Optionally, as an embodiment, the editing material includes at least one of the following: audio material, filters, transition effects, special effects, subtitle styles, and shot switching frequency.

Optionally, as an embodiment, synthesizing the target video segment with the editing materials in the target material combination to generate the second video includes:

when the target material combination includes audio material, extracting the sound wave fluctuation frequency of the audio material;

generating and displaying at least one video shot switching frequency option according to the sound wave fluctuation frequency;

receiving a fourth input from the user on a target video shot switching frequency option among the at least one video shot switching frequency option;

in response to the fourth input, synthesizing the target video segment with the editing materials in the target material combination according to the target video shot switching frequency option to generate the second video.

In a second aspect, an embodiment of the present invention provides an electronic device, including:

a display unit, configured to display a content tag set and a style tag set of a first video to be edited, where the content tag set includes at least one content tag, each content tag corresponds to at least one video segment in the first video, the style tag set includes at least one style tag, each style tag corresponds to at least one material combination, and each material combination includes at least one editing material;

a first receiving unit, configured to receive a user's first input on the content tag set;

an extraction unit, configured to, in response to the first input, extract from the first video the target video segment corresponding to the target content tag selected by the first input;

a second receiving unit, configured to receive the user's second input on the style tag set;

a first obtaining unit, configured to, in response to the second input, obtain the target material combination corresponding to the target style tag selected by the second input;

a synthesizing unit, configured to synthesize the target video segment with the editing materials in the target material combination to generate a second video.

Optionally, as an embodiment, the electronic device further includes:

a classification unit, configured to classify each video frame in the first video to obtain at least one video segment, where the video frames within each video segment belong to the same category;

a first extraction unit, configured to extract a subtitle segment of each video segment;

a second extraction unit, configured to extract at least one keyword of each subtitle segment according to the frequency of occurrence of each word in the subtitle segment;

a first determining unit, configured to determine the at least one keyword of each subtitle segment as the content tag of the corresponding video segment.

Optionally, as an embodiment, the electronic device further includes:

a second obtaining unit, configured to obtain a preset number of video editing samples, where the video editing samples are videos that have undergone video editing processing, and each video editing sample contains at least one editing material;

a third extraction unit, configured to extract at least one material feature of each editing material in each video editing sample, where the material feature is used to identify the editing material;

a third obtaining unit, configured to obtain a style tag of each video editing sample;

a mapping unit, configured to combine the extracted material features and map them to the corresponding style tags to obtain the material combination corresponding to each style tag.

Optionally, as an embodiment, the mapping unit includes:

a statistics subunit, configured to, for the N classes of material features obtained under each style tag Pi, count the number of times each material feature in each class is used;

a determining subunit, configured to determine the top M most-used material features in each class of material features;

a combining subunit, configured to combine the top M material features of each of the N classes to obtain M^N material feature sets;

a calculation subunit, configured to calculate a material relevance for each material feature set, and map the material feature sets whose material relevance ranks in the top S to the style tag Pi, to obtain the material combination corresponding to the style tag Pi;

where the style tag Pi is the i-th style tag among the obtained style tags, each material feature set includes N material features of different types, the material relevance is the correlation among all the material features in the material feature set, and N, M, and S are all integers greater than 1.

Optionally, as an embodiment, the electronic device further includes:

a third receiving unit, configured to receive a third input from the user on the second video;

an adding unit, configured to, in response to the third input, add the style tag entered by the third input to the second video;

a second determining unit, configured to determine the second video as one of the preset number of video editing samples.

Optionally, as an embodiment, the editing material includes at least one of the following: audio material, filters, transition effects, special effects, subtitle styles, and shot switching frequency.

Optionally, as an embodiment, the synthesizing unit includes:

an extraction subunit, configured to extract the sound wave fluctuation frequency of the audio material when the target material combination includes audio material;

a display subunit, configured to generate and display at least one video shot switching frequency option according to the sound wave fluctuation frequency;

a receiving subunit, configured to receive a fourth input from the user on a target video shot switching frequency option among the at least one video shot switching frequency option;

a synthesizing subunit, configured to, in response to the fourth input, synthesize the target video segment with the editing materials in the target material combination according to the target video shot switching frequency option to generate the second video.

In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the above video editing method.

In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the above video editing method.

In the embodiments of the present invention, when editing a video, the target video segments that actually need to be edited can be screened out from the first video to be edited through content tags, the target material combination required for the editing can be obtained through a style tag, and the screened target video segments and the editing materials in the obtained target material combination are synthesized to produce the edited work. The user no longer needs to perform cumbersome searching, track alignment, combination, and other processing, which simplifies the editing operation and improves video editing efficiency.

Brief Description of the Drawings

FIG. 1 is a flowchart of a video editing method provided by an embodiment of the present invention;

FIG. 2 is a flowchart of a method for generating a content tag set provided by an embodiment of the present invention;

FIG. 3 is a flowchart of a method for generating a style tag set provided by an embodiment of the present invention;

FIG. 4 is an example diagram of a video editing interface provided by an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention;

FIG. 6 is a schematic diagram of the hardware structure of an electronic device implementing various embodiments of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

Video editing is the non-linear editing and synthesis of video sources through editing software: various analog materials and sound-effect materials undergo A/D conversion, the converted data are archived, and post-production synthesis software such as Premiere or VideoStudio is then used to edit and synthesize the video, audio, and special-effect images. Non-linear editing technology can add special visual effects, sound effects, animation effects, and matching subtitles to the video picture, giving film and television works more texture and impact. Commonly used non-linear editing techniques mainly include picture editing, sound editing, montage, and special-effect addition.

In the prior art, video editing mainly relies on manual editing. When editing a video, the user needs to spend a great deal of time aligning video speed and length on the track, screening transition effects, matching the audio rhythm, and so on. The operation is cumbersome and video editing efficiency is low.

To solve the above technical problem, embodiments of the present invention provide a video editing method and an electronic device. The video editing method provided by the embodiments of the present invention is first introduced below.

It should be noted that the video editing method provided by the embodiments of the present invention is applicable to electronic devices. In practical applications, the electronic device may include mobile terminals such as smartphones, tablet computers, and personal digital assistants, and may also include computer devices such as laptop computers and desktop computers.

FIG. 1 is a flowchart of a video editing method provided by an embodiment of the present invention. As shown in FIG. 1, the method may include the following steps: step 101, step 102, step 103, step 104, step 105, and step 106, where:

In step 101, a content tag set and a style tag set of a first video to be edited are displayed, where the content tag set includes at least one content tag, each content tag corresponds to at least one video segment in the first video, the style tag set includes at least one style tag, each style tag corresponds to at least one material combination, and each material combination includes at least one editing material.

In the embodiment of the present invention, the content tag set is used to locate video segments in the first video, where each content tag in the content tag set is used to indicate at least one of the following in a video segment: the main character, several key characters, plot information or video content, and at least one key scene.

In the embodiment of the present invention, the style tag set is used to recommend editing-material combination schemes to the user, where the editing material may include at least one of the following: audio material, filters, transition effects, special effects, subtitle styles, and shot switching frequency.

For ease of understanding, the content tag set and the style tag set are described with specific examples. In one example, the first video to be edited is episode 30 of 《倚天屠龙记》 (The Heaven Sword and Dragon Saber), and the content tag set includes the content tags 'Zhang Wuji', 'Zhao Min', 'Zhou Zhiruo', and so on, where the content tag 'Zhang Wuji' corresponds to the video segments of the episode that contain Zhang Wuji, the content tag 'Zhao Min' corresponds to the video segments that contain Zhao Min, and the content tag 'Zhou Zhiruo' corresponds to the video segments that contain Zhou Zhiruo.

In the embodiment of the present invention, a content tag is used to locate the specific position, within the first video to be edited, of the video segment corresponding to that content tag. For example, by selecting the content tag 'Zhang Wuji', the video segments containing Zhang Wuji in the first video to be edited can be located; that is, the desired video segments can be obtained directly through the content tag.

In another example, the style tag set includes the style tags 'wuxia', 'Western', 'TVB ensemble', and so on, where the style tag 'wuxia' corresponds to one material combination, the style tag 'Western' corresponds to one material combination, and the style tag 'TVB ensemble' corresponds to one material combination.

In the embodiment of the present invention, the material combination corresponding to a style tag can be obtained through that style tag; for example, the material combination corresponding to the style tag 'wuxia' is obtained through the style tag 'wuxia'.

In the embodiment of the present invention, the electronic device may obtain the content tag set of the first video to be edited from a server, that is, the content tag set may be generated by the server; or the electronic device may generate the corresponding content tag set from the first video to be edited, that is, the content tag set may be generated by the electronic device.

When the content tag set is generated by the electronic device, as shown in FIG. 2, which is a flowchart of a method for generating a content tag set provided by an embodiment of the present invention, the method may include the following steps: step 201, step 202, step 203, and step 204, where:

In step 201, each video frame in the first video is classified to obtain at least one video segment, where the video frames within each video segment belong to the same category.

In the embodiment of the present invention, the first video to be edited imported by the user can be decomposed into a sequence of video frames, and the sequence of video frames is processed by a clustering algorithm to classify each video frame in the first video. Correspondingly, step 201 may specifically include the following steps (not shown in the figure): step 2011, step 2012, and step 2013, where:

In step 2011, the first video is decomposed into video frames;

In step 2012, image features of each video frame are extracted;

In the embodiment of the present invention, the image features may include color features and/or texture features, where the color feature is an HSV color feature (H for hue, S for saturation, V for value, i.e. brightness), and the texture feature may be obtained by processing the video frame with an LBP (local binary pattern) texture feature operator.

In step 2013, the extracted image features are clustered by a preset clustering algorithm to obtain the category of each video frame, and video frames of the same category are determined as one video segment.

In the embodiment of the present invention, the preset clustering algorithm may be the K-means clustering algorithm, whose principle is: a) random initialization: randomly select K of the n data objects as the initial cluster centers; b) update the assignment based on the current centers: assign each point to the class whose center it is closest to; c) update the class centers based on the current assignment by computing the mean of the points in each class; repeat b) and c) until the overall error falls within the set parameter range.
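As a rough illustration of steps 2011 to 2013, a minimal sketch is given below. It is not part of the patent text: OpenCV and scikit-learn are assumed as the feature-extraction and clustering libraries, and the frame-sampling step, histogram bin counts, and number of clusters K are illustrative choices only (the LBP texture feature mentioned above is omitted for brevity).

```python
# Sketch of steps 2011-2013: decompose a video into frames, extract HSV
# colour-histogram features, and cluster the frames with K-means so that
# consecutive frames carrying the same cluster label form a video segment.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_hsv_features(video_path, sample_every=10):
    """Return one HSV-histogram feature vector per sampled frame."""
    cap = cv2.VideoCapture(video_path)
    feats, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                                [0, 180, 0, 256, 0, 256])
            feats.append(cv2.normalize(hist, hist).flatten())
        idx += 1
    cap.release()
    return np.array(feats)

def segment_by_kmeans(features, k=5):
    """Cluster frame features, then merge runs of identical labels into segments."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i - 1, int(labels[start])))  # (first, last, cluster)
            start = i
    return segments
```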

In step 202, a subtitle segment is extracted for each video segment.

In step 203, at least one keyword of each subtitle segment is extracted according to the frequency of occurrence of each word in the subtitle segment.

In the embodiment of the present invention, each subtitle segment may be processed with a Word2vec word-vector model to obtain at least one keyword of each subtitle segment. Specifically, the Word2vec model is used to train word vectors for each category, and the keywords of each category are obtained by computing the probability of each word vector.
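A minimal sketch of this keyword step is shown below, purely as an illustration and not as the patent's exact procedure: the text only says keywords are selected by computing a probability for each word vector against a threshold, so the centroid-similarity scoring used here is an assumed stand-in, and jieba and gensim are assumed choices of tokenizer and Word2vec implementation.

```python
# Sketch of step 203: train word vectors over the subtitle segments and, for
# each segment, keep the words whose vectors are closest to the segment's mean
# vector as its keywords (an assumed scoring; the patent's probability
# computation is not specified in detail).
import jieba
import numpy as np
from gensim.models import Word2Vec

def extract_keywords(subtitle_segments, top_k=4):
    """subtitle_segments: list of raw subtitle strings, one per video segment."""
    tokenized = [jieba.lcut(text) for text in subtitle_segments]
    model = Word2Vec(tokenized, vector_size=100, window=5, min_count=1, workers=2)
    keywords = []
    for words in tokenized:
        vecs = np.array([model.wv[w] for w in words if w in model.wv])
        centroid = vecs.mean(axis=0)
        scores = {
            w: float(np.dot(model.wv[w], centroid)
                     / (np.linalg.norm(model.wv[w]) * np.linalg.norm(centroid)))
            for w in set(words) if w in model.wv
        }
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        keywords.append([w for w, _ in ranked[:top_k]])
    return keywords
```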

In step 204, the at least one keyword of each subtitle segment is determined as the content tag of the corresponding video segment.

In one example, the video segment from 4 min 28 s to 6 min 30 s of episode 30 of 《倚天屠龙记》 is clustered into one category. Through word-vector training and with a probability threshold set, four words meeting the threshold are obtained: (Zhang Wuji, -27.9013707845), (Zhao Min, -28.1072913493), (Zhou Zhiruo, -30.482187911), (Guangming Peak, -36.3372344659). These four words are used as the keywords of the corresponding video segment.

It should be noted that the server may also generate the content tag set using the processing operations in steps 201 to 204 above.

It can be seen that, in the embodiment of the present invention, the first video and the subtitle segments of its video segments can be processed with the K-means clustering algorithm and the Word2vec word-vector model to generate content tags, and then a content tag set, so that the user can quickly locate, through the content tags, the video segments in the first video that actually need to be edited.

In the embodiment of the present invention, the electronic device may obtain the style tag set from a server, that is, the style tag set may be generated by the server; or the electronic device may generate the style tag set itself.

In the embodiment of the present invention, a large number of edited video works can be collected, the material features of each work extracted and its style tag obtained, and an algorithm used to automatically learn the matching between editing materials and style tags, generating material-combination recommendation schemes and forming a style tag set to recommend to the user. The user can thus obtain, through a style tag, a material combination that matches his or her needs, and no longer needs to optimize the editing effect through constant fine-tuning.

When the style tag set is generated by the electronic device, as shown in FIG. 3, which is a flowchart of a method for generating a style tag set provided by an embodiment of the present invention, the method may include the following steps: step 301, step 302, step 303, and step 304, where:

In step 301, a preset number of video editing samples are obtained, where the video editing samples are videos that have undergone video editing processing, and each video editing sample contains at least one editing material.

In the embodiment of the present invention, a large number of video editing samples can be obtained, where the video editing samples are edited video works.

In step 302, at least one material feature of each editing material in each video editing sample is extracted, where the material feature is used to identify the editing material.

In the embodiment of the present invention, when the editing material is audio material, the material feature may be the audio name; when the editing material is a filter, the material feature is the filter type; when the editing material is a transition effect, the material feature is the transition effect type; when the editing material is a special effect, the material feature is the special effect type; when the editing material is a subtitle style, the material feature is the subtitle style type; and when the editing material is the shot switching frequency, the material feature is a specific numerical value.

In step 303, the style tag of each video editing sample is obtained.

In the embodiment of the present invention, the style tag is used to identify the style of a video editing sample, and style tags may come from tag collection over massive video data and from manual annotation.

In step 304, the extracted material features are combined and mapped to the corresponding style tags to obtain the material combination corresponding to each style tag.

In the embodiment of the present invention, editing materials of the massive video editing samples, such as audio sources, filters, transition effects, special effects, subtitle styles, and shot switching frequencies, can be combined through multi-label mapping rules and mapped to multiple style tags, forming corresponding material-combination recommendation schemes; the user can then obtain the recommended material combination schemes by filtering on style tags.

Correspondingly, the above step 304 may specifically include the following steps (not shown in the figure): step 3031, step 3032, step 3033, and step 3034, where:

In step 3031, for the N classes of material features obtained under each style tag Pi, the number of times each material feature in each class is used is counted.

For ease of understanding, take the style tag 'wuxia' as an example. Under the style tag 'wuxia' there are six classes (that is, N = 6) of editing materials, namely audio material, filters, transition effects, special effects, subtitle styles, and shot switching frequency, corresponding to six classes of material features. The usage counts are collected per class: the usage counts of all audio tracks under the audio material, of all filters under the filter material, of all transition effects under the transition-effect material, of all special effects under the special-effect material, of all subtitle styles under the subtitle-style material, and of all shot switching frequencies under the shot-switching-frequency material.

Taking the usage counts of all audio tracks under the audio material as an example, Table 1 below shows the usage of each audio material under the style tag 'wuxia':

Rank | Audio name | Usage count
1 | 《难念的经》 | 5948
2 | 《铁血丹心》 | 4655
3 | 《刀剑如梦》 | 4342
4 | 《归去来》 | 3062
5 | 《任逍遥》 | 2856
6 | 《倾城一笑》 | 2565
7 | 《惊鸿一面》 | 2130
8 | 《万神纪》 | 2003
9 | 《天将明》 | 1986
10 | 《天地不容》 | 1975

Table 1

Similarly, the usage counts of the other types of materials under the style tag 'wuxia' can also be obtained.

In step 3032, the top M most-used material features in each class of material features are determined.

Still taking the audio material under the style tag 'wuxia' as an example, if M = 3, the three most-used audio tracks are, in order, 《难念的经》, 《铁血丹心》, and 《刀剑如梦》. Similarly, the top three filters, the top three transition effects, the top three special effects, the top three subtitle styles, and the top three shot switching frequencies can be determined.

In step 3033, the top M material features of each of the N classes are combined to obtain M^N material feature sets, where each material feature set includes N material features of different types.

For example, the top three audio tracks {audio 1, audio 2, audio 3}, the top three filters {filter 1, filter 2, filter 3}, the top three transition effects {transition 1, transition 2, transition 3}, the top three special effects {effect 1, effect 2, effect 3}, the top three subtitle styles {subtitle style 1, subtitle style 2, subtitle style 3}, and the top three shot switching frequencies {frequency 1, frequency 2, frequency 3} are combined to obtain 3^6 = 729 material feature sets, namely {audio 1, filter 1, transition 1, effect 1, subtitle style 1, frequency 1}, ..., {audio 3, filter 3, transition 3, effect 3, subtitle style 3, frequency 3}.
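The counting, top-M selection, and cross-class combination of steps 3031 to 3033 can be sketched as follows. This is an illustrative sketch only: the per-sample data layout (a dict mapping feature class to feature value) and the function names are assumptions, not the patent's implementation.

```python
# Sketch of steps 3031-3033: count how often each material feature is used
# under one style tag, keep the top M features per class, and enumerate all
# M**N cross-class combinations as candidate material feature sets.
from collections import Counter
from itertools import product

def top_m_per_class(samples, m=3):
    """samples: e.g. [{'audio': '难念的经', 'filter': 'warm', ...}, ...] for one style tag."""
    counters = {}
    for sample in samples:
        for feature_class, feature in sample.items():
            counters.setdefault(feature_class, Counter())[feature] += 1
    return {cls: [f for f, _ in cnt.most_common(m)] for cls, cnt in counters.items()}

def candidate_feature_sets(top_features):
    """Cartesian product of the per-class top-M lists: M**N candidate sets."""
    classes = sorted(top_features)  # fix an order over the N feature classes
    for combo in product(*(top_features[c] for c in classes)):
        yield dict(zip(classes, combo))
```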

In step 3034, the material relevance of each material feature set is calculated, and the material feature sets whose material relevance ranks in the top S are mapped to the style tag Pi, obtaining the material combination corresponding to the style tag Pi; where the style tag Pi is the i-th style tag among the obtained style tags, each material feature set includes N material features of different types, the material relevance is the correlation among all the material features in the material feature set, and N, M, and S are all integers greater than 1.

In the embodiment of the present invention, when calculating the material relevance, a weight needs to be assigned to each material feature. Still taking the style tag 'wuxia' as an example, the audio weight is 0.3, the filter weight is 0.2, the transition weight is 0.2, the special-effect weight is 0.1, the subtitle-effect weight is 0.1, and the shot-switching-frequency weight is 0.1.

The material relevance is calculated according to the formula: (audio weight × usage count + filter weight × usage count) × correlation coefficient of the two + (filter weight × usage count + transition weight × usage count) × correlation coefficient of the two + ..., where the correlation coefficient of two materials (for example, the audio material and the filter material) is calculated as follows: for the top three audio tracks {audio 1, audio 2, audio 3} and the top three filters {filter 1, filter 2, filter 3}, construct the audio usage-count vector (usage count of audio 1, usage count of audio 2, usage count of audio 3) and the filter usage-count vector (usage count of filter 1, usage count of filter 2, usage count of filter 3), normalize the two vectors, and compute the Pearson coefficient of the two normalized vectors.
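A minimal sketch of this relevance score is given below. The weight table and the chaining over adjacent feature-class pairs follow the 'wuxia' example above, but everything else (the data layout, how the selected features' usage counts enter the sum, and the function names) is an illustrative assumption rather than the patent's exact formula.

```python
# Sketch of step 3034: score one candidate material feature set by summing,
# over adjacent feature-class pairs, (weighted usage counts) * (Pearson
# correlation of the two classes' top-M usage-count vectors).
import numpy as np

WEIGHTS = {'audio': 0.3, 'filter': 0.2, 'transition': 0.2,
           'effect': 0.1, 'subtitle': 0.1, 'shot_frequency': 0.1}
CLASS_ORDER = ['audio', 'filter', 'transition', 'effect', 'subtitle', 'shot_frequency']

def pearson(u, v):
    """Pearson coefficient of two usage-count vectors after standardisation."""
    u = (np.asarray(u, dtype=float) - np.mean(u)) / (np.std(u) + 1e-12)  # guard zero spread
    v = (np.asarray(v, dtype=float) - np.mean(v)) / (np.std(v) + 1e-12)
    return float(np.corrcoef(u, v)[0, 1])

def material_relevance(selected_counts, class_counts):
    """selected_counts: class -> usage count of the feature chosen in this set.
    class_counts: class -> list of usage counts of that class's top-M features."""
    score = 0.0
    for a, b in zip(CLASS_ORDER, CLASS_ORDER[1:]):  # adjacent class pairs, as in the text
        r = pearson(class_counts[a], class_counts[b])
        score += (WEIGHTS[a] * selected_counts[a] + WEIGHTS[b] * selected_counts[b]) * r
    return score
```

The S candidate sets with the highest scores would then be mapped to the style tag Pi as its recommended material combinations.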

It can be seen that, in the embodiment of the present invention, the high-frequency material features under each style tag can be combined to generate material combination schemes.

It should be noted that the server may also generate the style tag set using the processing operations in steps 301 to 304 above.

It can be seen that, in the embodiment of the present invention, a large number of edited video works can be collected, the material features and style tags of each work extracted, and the matching between editing materials and style tags learned automatically by an algorithm to generate a style tag set, which is recommended to the user, so that the user can obtain the material combination recommended under a given style through its style tag and no longer needs to optimize the editing effect through constant fine-tuning.

In step 102, a first input from the user on the content tag set is received.

In the embodiment of the present invention, the first input is used to select a target content tag from the content tag set, where the first input may be a tap operation.

In step 103, in response to the first input, the target video segment corresponding to the target content tag selected by the first input is extracted from the first video.

In the embodiment of the present invention, the user can, through the target content tag, directly locate and extract from the first video the target video segment corresponding to that tag, without having to search for the target video segment manually.

In step 104, a second input from the user on the style tag set is received.

In the embodiment of the present invention, the second input is used to select a target style tag from the style tag set, where the second input may be a tap operation.

In step 105, in response to the second input, the target material combination corresponding to the target style tag selected by the second input is obtained.

In the embodiment of the present invention, the user can obtain the material combination scheme recommended by the system through the style tag. Since the recommended scheme is generated from big data, the recommendation is reasonably sound, and the user does not need to search for and match materials one by one.

In step 106, the target video segment and the editing materials in the target material combination are synthesized to generate the second video.

In the embodiment of the present invention, for each user, the features and style of the user's newly completed edited works can also be learned and fed back into the style tag generation process to continuously optimize the style tag set. In this case, after step 106, the following steps may be added:

receiving a third input from the user on the second video, where the third input is used to add a style tag to the second video;

in response to the third input, adding the style tag entered by the third input to the second video;

determining the second video as one of the preset number of video editing samples.

It can be seen that, in the embodiment of the present invention, the style tags the user adds to new works feed back into the results of the mapping-rule computation to optimize the recommendation scheme that maps feature combinations to tags. The scheme can learn and adjust with current trends and personal preferences, outputting video editing recommendations personalized to each user.

In the embodiment of the present invention, if the material combination includes audio material, an editing scheme can be recommended according to the fluctuation of the audio material's waveform. In this case, step 106 may specifically include the following steps:

generating and displaying at least one video shot switching frequency option according to the sound wave fluctuation frequency;

receiving a fourth input from the user on a target video shot switching frequency option among the at least one video shot switching frequency option;

in response to the fourth input, synthesizing the target video segment with the editing materials in the target material combination according to the target video shot switching frequency option to generate the second video.

It can be seen that, in the embodiment of the present invention, generating recommended editing schemes for the user to choose from according to the sound wave fluctuation frequency of the audio material can lower the difficulty of producing high-quality edited videos and reduce the time cost of video editing.
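As an illustration only of how shot-switching-frequency options might be derived from the audio, a sketch is given below. The patent does not specify how the sound wave fluctuation frequency is measured; here the beat tempo estimated with librosa is used as an assumed stand-in, and the beats-per-shot multipliers and playback rates are arbitrary illustrative values.

```python
# Sketch: estimate the audio's tempo and turn it into a few candidate
# shot-switching options (shot length in seconds plus a playback rate).
import librosa

def shot_switch_options(audio_path):
    """Return candidate shot-switching options derived from the audio's tempo."""
    y, sr = librosa.load(audio_path, sr=None)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)  # estimated beats per minute
    beat_period = 60.0 / float(tempo)               # seconds per beat
    options = []
    for beats_per_shot, rate in ((4, 1.0), (8, 0.8), (2, 1.2)):
        options.append({'shot_length_s': round(beat_period * beats_per_shot, 2),
                        'video_rate': rate})
    return options
```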

In one example, as shown in FIG. 4, which is an example diagram of the video editing interface, the video editing interface 40 includes: an area 41 where the content tag set is located, an area 42 where the style tag set is located, an area 43 where the video recognition module is located, a video preview interface 44, and an audio/video track area 45. The first video can be dragged into area 43 to generate the corresponding content tag set. An audio material selected from the material combination corresponding to the target style tag is placed into the audio track, and according to the sound wave fluctuation frequency of that audio material, video shot switching frequency options can be given and displayed in video track 1. Specifically, a video shot switching frequency option can be given automatically, for example a video rate of 0.8x and a video length of 3 seconds; if the user accepts this option, the edited work is generated according to it, and the preview of the edited work can finally be viewed in the video preview interface 44.

It can be seen from the above embodiment that, in this embodiment, when editing a video, the target video segments that actually need to be edited can be screened out from the first video to be edited through content tags, the target material combination required for the editing can be obtained through a style tag, and the screened target video segments and the editing materials in the obtained target material combination are synthesized to produce the edited work, so that the user no longer needs to perform cumbersome searching, track alignment, combination, and other processing, which simplifies the editing operation and improves video editing efficiency.

FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in FIG. 5, the electronic device 500 may include: a display unit 501, a first receiving unit 502, an extraction unit 503, a second receiving unit 504, a first obtaining unit 505, and a synthesizing unit 506, where:

the display unit 501 is configured to display a content tag set and a style tag set of a first video to be edited, where the content tag set includes at least one content tag, each content tag corresponds to at least one video segment in the first video, the style tag set includes at least one style tag, each style tag corresponds to at least one material combination, and each material combination includes at least one editing material;

the first receiving unit 502 is configured to receive a user's first input on the content tag set;

the extraction unit 503 is configured to, in response to the first input, extract from the first video the target video segment corresponding to the target content tag selected by the first input;

the second receiving unit 504 is configured to receive the user's second input on the style tag set;

the first obtaining unit 505 is configured to, in response to the second input, obtain the target material combination corresponding to the target style tag selected by the second input;

the synthesizing unit 506 is configured to synthesize the target video segment with the editing materials in the target material combination to generate a second video.

It can be seen from the above embodiment that, in this embodiment, when editing a video, the target video segments that actually need to be edited can be screened out from the first video to be edited through content tags, the target material combination required for the editing can be obtained through a style tag, and the screened target video segments and the editing materials in the obtained target material combination are synthesized to produce the edited work, so that the user no longer needs to perform cumbersome searching, track alignment, combination, and other processing, which simplifies the editing operation and improves video editing efficiency.

可选地,作为一个实施例,所述电子设备500,还可以包括:Optionally, as an embodiment, the electronic device 500 may further include:

分类单元,用于对所述第一视频中每个视频帧进行分类,得到至少一个视频片段,每个视频片段内视频帧的类别相同;A classification unit, configured to classify each video frame in the first video, to obtain at least one video segment, and the category of the video frame in each video segment is the same;

第一提取单元,用于提取每个视频片段的字幕片段;The first extraction unit, for extracting the subtitle segment of each video segment;

第二提取单元,用于根据字幕片段中每个词的出现频率,提取每个字幕片段的至少一个关键词;The second extraction unit is used to extract at least one keyword of each subtitle segment according to the frequency of occurrence of each word in the subtitle segment;

第一确定单元,用于将每个字幕片段的所述至少一个关键词确定为对应视频片段的内容标签。The first determining unit is configured to determine the at least one keyword of each subtitle segment as the content tag of the corresponding video segment.

可选地,作为一个实施例,所述电子设备500,还可以包括:Optionally, as an embodiment, the electronic device 500 may further include:

第二获取单元,用于获取预设数量的视频剪辑样本,其中,所述视频剪辑样本为经过视频剪辑处理的视频,每个视频剪辑样本中包含至少一个剪辑素材;a second acquiring unit, configured to acquire a preset number of video clip samples, wherein the video clip samples are videos processed by video clips, and each video clip sample includes at least one clip material;

第三提取单元,用于提取每个视频剪辑样本中每个剪辑素材的至少一个素材特征,所述素材特征用于标识剪辑素材;a third extraction unit, configured to extract at least one material feature of each clip material in each video clip sample, where the material feature is used to identify the clip material;

第三获取单元,用于获取每个视频剪辑样本的风格标签;The third obtaining unit is used to obtain the style label of each video clip sample;

映射单元,用于将提取到的素材特征进行组合,并映射到对应的风格标签,得到每个风格标签对应的素材组合。The mapping unit is used to combine the extracted material features and map them to corresponding style tags to obtain the material combination corresponding to each style tag.

Optionally, as an embodiment, the mapping unit may include:

a statistics subunit, configured to, for the N classes of material features obtained under each style tag Pi, count the number of times each material feature in each class is used;

a determining subunit, configured to determine, in each class of material features, the material features whose usage counts rank in the top M;

a combining subunit, configured to combine the top-M material features of the N classes of material features to obtain M^N material feature sets;

a calculation subunit, configured to calculate the material relevance of each material feature set, and map the material feature sets whose material relevance ranks in the top S to the style tag Pi, so as to obtain the material combination corresponding to the style tag Pi;

where the style tag Pi is the i-th style tag among the obtained style tags, each material feature set includes N material features of different types, the material relevance is the degree of correlation among all the material features in the set, and N, M and S are all integers greater than 1 (a sketch of this combination step is given below).
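
The combination step can be pictured with the following minimal sketch for a single style tag: take the M most-used features in each of the N classes, form the M^N cross-combinations, score each candidate set, and keep the top S sets. The relevance function is deliberately left as a caller-supplied placeholder, since this passage does not fix a specific relevance formula.

```python
# A minimal sketch of forming the M^N candidate feature sets for one style tag
# and keeping the S sets with the highest material relevance.
from collections import Counter
from itertools import product

def material_combinations(class_counters, relevance, m=2, s=3):
    """class_counters: {feature_class: Counter(feature -> usage count)} for one style tag.
    relevance: callable scoring a tuple of N features (one per class); a placeholder here."""
    top_per_class = [
        [feature for feature, _ in counter.most_common(m)]   # top-M features per class
        for counter in class_counters.values()
    ]
    candidates = list(product(*top_per_class))                # M^N feature sets across N classes
    ranked = sorted(candidates, key=relevance, reverse=True)
    return ranked[:s]                                         # top-S sets mapped to this style tag

# Example with N=2 classes, M=2, S=1 and a dummy relevance score.
counters = {"filter": Counter({"warm": 5, "bw": 3}),
            "music": Counter({"pop": 4, "jazz": 2})}
print(material_combinations(counters, relevance=lambda feats: len("".join(feats)), m=2, s=1))
```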

Optionally, as an embodiment, the electronic device 500 may further include:

a third receiving unit, configured to receive a third input from the user on the second video;

an adding unit, configured to, in response to the third input, add the style tag entered by the third input to the second video;

a second determining unit, configured to determine the second video as one of the preset number of video clip samples.

Optionally, as an embodiment, the clip material may include at least one of the following: audio material, a filter, a transition effect, a special effect, a subtitle style, and a shot switching frequency (an illustrative data structure for a material combination is sketched below).
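
As an illustration only, a material combination built from these clip-material types might be represented by a structure such as the following; the field names and the dataclass layout are assumptions introduced here, not the patent's data model.

```python
# A minimal sketch of a material combination holding the clip-material types
# listed above; every field is optional, since a combination need not use each type.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MaterialCombination:
    audio: Optional[str] = None               # e.g. path or ID of an audio material
    filter: Optional[str] = None              # e.g. "warm", "black_and_white"
    transition: Optional[str] = None          # transition effect between segments
    special_effect: Optional[str] = None
    subtitle_style: Optional[str] = None
    shot_switch_freq: Optional[float] = None  # shot switching frequency, cuts per second

upbeat = MaterialCombination(audio="pop_track.mp3", filter="vivid", shot_switch_freq=1.5)
```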

Optionally, as an embodiment, the synthesizing unit 506 may include:

an extraction subunit, configured to, in a case where the target material combination includes audio material, extract the sound wave fluctuation frequency of the audio material;

a display subunit, configured to generate and display at least one video shot switching frequency option according to the sound wave fluctuation frequency;

a receiving subunit, configured to receive a fourth input from the user on a target video shot switching frequency option among the at least one video shot switching frequency option;

a synthesizing subunit, configured to, in response to the fourth input, synthesize the target video segment with the clip materials in the target material combination according to the target video shot switching frequency option, so as to generate the second video (a sketch of deriving such options from the audio follows).
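
The following minimal sketch shows one way the switching-frequency options could be derived from the audio material, assuming the audio is loadable from a file and using librosa's tempo estimate as the measure of sound wave fluctuation; both the library choice and the cuts-per-beat multipliers are illustrative assumptions, not requirements of the patent.

```python
# A minimal sketch: estimate the tempo of the audio material and derive a few
# candidate shot switching frequencies (cuts per second) from it.
import numpy as np
import librosa

def shot_switching_options(audio_path, multipliers=(0.5, 1.0, 2.0)):
    """Return candidate shot switching rates derived from the audio's estimated tempo."""
    y, sr = librosa.load(audio_path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)        # estimated tempo in beats per minute
    beats_per_second = float(np.atleast_1d(tempo)[0]) / 60.0
    return [round(beats_per_second * m, 2) for m in multipliers]
```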

FIG. 6 is a schematic diagram of the hardware structure of an electronic device implementing the embodiments of the present invention. As shown in FIG. 6, the electronic device 600 includes, but is not limited to, a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, a power supply 611, and other components. Those skilled in the art will understand that the structure shown in FIG. 6 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiments of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.

The processor 610 is configured to: display a content tag set and a style tag set of a first video to be edited, where the content tag set includes at least one content tag, each content tag corresponds to at least one video segment in the first video, the style tag set includes at least one style tag, each style tag corresponds to at least one material combination, and each material combination includes at least one clip material; receive a first input from the user on the content tag set; in response to the first input, intercept, from the first video, the target video segments corresponding to the target content tags selected by the first input; receive a second input from the user on the style tag set; in response to the second input, obtain the target material combination corresponding to the target style tag selected by the second input; and synthesize the target video segments with the clip materials in the target material combination to generate a second video. A sketch of this overall flow is given below.

In the embodiments of the present invention, when video editing is performed, the target video segments that actually need to be edited can be screened out of the first video to be edited by means of the content tags, the target material combination required for the editing can be obtained by means of the style tags, and the screened target video segments and the clip materials in the obtained target material combination are synthesized to obtain the edited work, so that the user no longer needs to perform cumbersome searching, track alignment and combination operations, which simplifies the editing operation and improves video editing efficiency.
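
The end-to-end flow carried out by the processor can be strung together as in the following minimal sketch. All data shapes and helper names here are hypothetical placeholders (frames as a list, tag indexes as dictionaries), and the final synthesis step is only stubbed out.

```python
# A minimal sketch of the editing flow: pick segments via content tags, look up
# the material combination for the chosen style tag, and hand both to synthesis.
def edit_video(first_video_frames, content_tag_index, style_tag_index,
               selected_content_tags, selected_style_tag):
    """first_video_frames: list of frames; content_tag_index: {content_tag: [(start, end), ...]}
    frame ranges; style_tag_index: {style_tag: material_combination}."""
    # Step 1: intercept the target video segments selected via the content tags.
    target_ranges = [rng for tag in selected_content_tags
                     for rng in content_tag_index.get(tag, [])]
    target_segments = [first_video_frames[start:end] for start, end in target_ranges]
    # Step 2: obtain the material combination for the selected style tag.
    materials = style_tag_index[selected_style_tag]
    # Step 3: synthesis is stubbed; a real implementation would render the second video here.
    return {"segments": target_segments, "materials": materials}

second_video = edit_video(list(range(100)), {"goal": [(10, 20)]},
                          {"sport": ["drum_track", "fast_cuts"]}, ["goal"], "sport")
```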

Optionally, as an embodiment, before the displaying of the content tag set and the style tag set of the first video to be edited, the method further includes:

classifying each video frame in the first video to obtain at least one video segment, where the video frames within each video segment belong to the same category;

extracting the subtitle segment of each video segment;

extracting at least one keyword of each subtitle segment according to the frequency of occurrence of each word in the subtitle segment;

determining the at least one keyword of each subtitle segment as the content tag of the corresponding video segment.

Optionally, as an embodiment, before the displaying of the content tag set and the style tag set of the first video to be edited, the method further includes:

obtaining a preset number of video clip samples, where each video clip sample is a video that has undergone video editing and contains at least one clip material;

extracting at least one material feature of each clip material in each video clip sample, where the material feature is used to identify the clip material;

obtaining the style tag of each video clip sample;

combining the extracted material features and mapping them to the corresponding style tags to obtain the material combination corresponding to each style tag.

Optionally, as an embodiment, the combining of the extracted material features and the mapping of them to the corresponding style tags to obtain the material combination corresponding to each style tag includes:

for the N classes of material features obtained under each style tag Pi, counting the number of times each material feature in each class is used;

determining, in each class of material features, the material features whose usage counts rank in the top M;

combining the top-M material features of the N classes of material features to obtain M^N material feature sets;

calculating the material relevance of each material feature set, and mapping the material feature sets whose material relevance ranks in the top S to the style tag Pi, so as to obtain the material combination corresponding to the style tag Pi;

where the style tag Pi is the i-th style tag among the obtained style tags, each material feature set includes N material features of different types, the material relevance is the degree of correlation among all the material features in the set, and N, M and S are all integers greater than 1.

Optionally, as an embodiment, after the synthesizing of the target video segment and the clip materials in the target material combination to generate the second video, the method further includes:

receiving a third input from the user on the second video;

in response to the third input, adding the style tag entered by the third input to the second video;

determining the second video as one of the preset number of video clip samples (a sketch of this feedback step follows).
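
This feedback step can be pictured with the following minimal sketch, in which the sample pool is simply a list of dictionaries; the dictionary layout is an assumed shape for illustration, not the patent's data model.

```python
# A minimal sketch: tag the freshly generated second video with the user-entered
# style tag and append it to the sample pool, so that later style-to-material
# mappings can also learn from it.
def add_as_sample(sample_pool, second_video, user_style_tag, extracted_features):
    sample_pool.append({
        "video": second_video,
        "style": user_style_tag,         # style tag entered via the third input
        "features": extracted_features,  # clip-material features extracted from the second video
    })
    return sample_pool
```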

Optionally, as an embodiment, the clip material includes at least one of the following: audio material, a filter, a transition effect, a special effect, a subtitle style, and a shot switching frequency.

Optionally, as an embodiment, the synthesizing of the target video segment and the clip materials in the target material combination to generate the second video includes:

in a case where the target material combination includes audio material, extracting the sound wave fluctuation frequency of the audio material;

generating and displaying at least one video shot switching frequency option according to the sound wave fluctuation frequency;

receiving a fourth input from the user on a target video shot switching frequency option among the at least one video shot switching frequency option;

in response to the fourth input, synthesizing the target video segment with the clip materials in the target material combination according to the target video shot switching frequency option, so as to generate the second video.

It should be understood that, in the embodiments of the present invention, the radio frequency unit 601 may be used to receive and send signals when sending and receiving information or during a call. Specifically, it receives downlink data from a base station and forwards the data to the processor 610 for processing, and it sends uplink data to the base station. Generally, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.

The electronic device provides the user with wireless broadband Internet access through the network module 602, for example helping the user to send and receive e-mails, browse web pages, and access streaming media.

The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602, or stored in the memory 609, into an audio signal and output it as sound. Moreover, the audio output unit 603 may also provide audio output related to a specific function performed by the electronic device 600 (for example, a call signal reception sound or a message reception sound). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.

The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042. The graphics processor 6041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or another storage medium) or sent via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and can process such sound into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 601 for output.

The electronic device 600 further includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 6061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 6061 and/or the backlight when the electronic device 600 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as switching between landscape and portrait orientation, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection). The sensor 605 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein again.

The display unit 606 is used to display information entered by the user or information provided to the user. The display unit 606 may include a display panel 6061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.

The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to the user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also called a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 6071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 610, and receives and executes the commands sent by the processor 610. In addition, the touch panel 6071 may be implemented as a resistive, capacitive, infrared, or surface-acoustic-wave panel, among other types. Besides the touch panel 6071, the user input unit 607 may further include other input devices 6072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described herein again.

Further, the touch panel 6071 may cover the display panel 6061. When the touch panel 6071 detects a touch operation on or near it, it transmits the operation to the processor 610 to determine the type of the touch event, and the processor 610 then provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in FIG. 6 the touch panel 6071 and the display panel 6061 are shown as two independent components that implement the input and output functions of the electronic device, in some embodiments the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, which is not specifically limited here.

The interface unit 608 is an interface for connecting an external device to the electronic device 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the electronic device 600, or may be used to transmit data between the electronic device 600 and an external device.

The memory 609 may be used to store software programs and various data. The memory 609 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 609 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.

The processor 610 is the control center of the electronic device. It connects the parts of the entire electronic device by means of various interfaces and lines, and performs the functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 609 and calling the data stored in the memory 609, thereby monitoring the electronic device as a whole. The processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 610.

The electronic device 600 may further include a power supply 611 (such as a battery) that supplies power to the components. Preferably, the power supply 611 may be logically connected to the processor 610 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.

In addition, the electronic device 600 includes some functional modules that are not shown, which are not described herein again.

Preferably, an embodiment of the present invention further provides an electronic device, including a processor 610, a memory 609, and a computer program stored in the memory 609 and executable on the processor 610. When the computer program is executed by the processor 610, the processes of the above video editing method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not described herein again.

An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the processes of the above video editing method embodiments are implemented and the same technical effects can be achieved; to avoid repetition, details are not described herein again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

It should be noted that, in this specification, the terms "comprise", "include" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements but also other elements that are not explicitly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that includes that element.

From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to execute the methods described in the embodiments of the present invention.

The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Inspired by the present invention, a person of ordinary skill in the art may devise many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)




