CN105120336A - Information processing method and electronic instrument - Google Patents

Information processing method and electronic instrument

Info

Publication number
CN105120336A
CN105120336A (also published as CN 105120336 A); application CN201510614251.3A (CN 201510614251 A)
Authority
CN
China
Prior art keywords
video
information
front cover
output parameter
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510614251.3A
Other languages
Chinese (zh)
Inventor
王少敏
王洪
雷闪耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201510614251.3A
Publication of CN105120336A
Legal status: Pending


Abstract

The present invention discloses an information processing method and an electronic device. The method includes the following steps: acquiring at least one piece of video information related to the content of a first video; and editing the first video based on the at least one piece of video information to obtain the first video with a first presentation effect. The information processing method and electronic device solve the technical problem that electronic devices in the prior art have a low degree of intelligence in video editing, and achieve the technical effect of improving that degree of intelligence.

Description

An information processing method and electronic device
Technical field
The present invention relates to the field of electronic technology, and in particular to an information processing method and an electronic device.
Background technology
With the development of science and technology, electronic technology has advanced rapidly, and many electronic devices, such as mobile phones and tablet computers, have become necessities of daily life. To meet users' demands, a variety of applications for shooting and editing video have emerged, e.g., Meipai, Weishi, and Miaopai.
In the prior art, after a user shoots a video with such an application, when editing the video's background music the user either keeps the original sound captured during shooting, or removes the original sound and manually selects music he or she likes; when editing the picture style of the video, the user must manually select a suitable style from preset picture styles to beautify the picture.
Because an electronic device in the prior art can only edit a video based on the user's selection operations, electronic devices in the prior art suffer from the technical problem of a low degree of intelligence in video editing.
Summary of the invention
The embodiments of the present application provide an information processing method and an electronic device to solve the technical problem that electronic devices in the prior art have a low degree of intelligence in video editing, thereby achieving the technical effect of improving that degree of intelligence.
In one aspect, an embodiment of the present application provides an information processing method comprising the following steps:
acquiring at least one piece of video information related to the video content of a first video; and
editing the first video based on the at least one piece of video information to obtain the first video with a first presentation effect.
Optionally, acquiring the at least one piece of video information related to the video content of the first video comprises:
acquiring at least one key frame of at least part of the first video;
performing recognition analysis on the at least one key frame; and
acquiring the at least one piece of video information related to the video content of the first video.
Optionally, performing recognition analysis on the at least one key frame comprises:
performing content recognition on the at least one key frame to analyze at least one object in the at least one key frame;
correspondingly, acquiring the at least one piece of video information related to the video content of the first video comprises:
obtaining, based on the at least one object, first scene information corresponding to the first video.
Optionally, performing recognition analysis on the at least one key frame comprises:
performing color recognition on the at least one key frame to analyze at least one picture color feature of the at least one key frame;
correspondingly, acquiring the at least one piece of video information related to the video content of the first video comprises:
obtaining, based on the at least one picture color feature, a first tone of the first video.
Optionally, acquiring the at least one piece of video information related to the video content of the first video comprises:
acquiring first name information of the first video, wherein the first name information is specifically information manually added to the first video by a user.
Optionally, editing the first video based on the at least one piece of video information to obtain the first video with the first presentation effect comprises:
determining a first audio that matches the at least one piece of video information;
adding the first audio to the first video as background music; and
obtaining the first video with the first audio as its background music.
Optionally, determining the first audio that matches the at least one piece of video information comprises:
searching a preset audio resource; and
choosing, from the preset audio resource, the first audio that matches the at least one piece of video information.
Optionally, determining the first audio that matches the at least one piece of video information specifically comprises:
searching a preset video resource;
determining, from the preset video resource, a second video that matches the at least one piece of video information; and
extracting the music in the second video as the first audio.
Optionally, the second video is a video comprising content associated with the first scene information, or the second video is a video whose tone is associated with the first tone.
Optionally, editing the first video based on the at least one piece of video information to obtain the first video with the first presentation effect comprises:
determining M output parameter values of M output parameters, relevant to the presentation effect, that match the at least one piece of video information;
setting the values of the M output parameters of the first video to the M output parameter values; and
obtaining the first video with the M output parameter values.
Optionally, determining the M output parameter values of the M output parameters, relevant to the presentation effect, that match the at least one piece of video information comprises:
obtaining N dominant hues corresponding to N preset presentation effects; and
selecting, based on the at least one piece of video information and the N dominant hues, the M output parameter values of a first presentation effect from among the N preset presentation effects, wherein the first presentation effect is specifically a presentation effect whose dominant hue matches the at least one piece of video information.
Optionally, determining the M output parameter values of the M output parameters, relevant to the presentation effect, that match the at least one piece of video information comprises:
searching the preset video resource;
determining, from the preset video resource, a third video that matches the at least one piece of video information; and
extracting the values of the M output parameters in the third video as the M output parameter values.
Optionally, the third video is a video comprising content associated with the first scene information, or the third video is a video whose tone is associated with the first tone.
Optionally, editing the first video based on the at least one piece of video information to obtain the first video with the first presentation effect comprises:
determining a first cover that matches the at least one piece of video information;
adding the first cover to the first video as its video cover; and
obtaining the first video with the first cover as its video cover.
Optionally, determining the first cover that matches the at least one piece of video information comprises:
searching preset cover templates; and
choosing, from the preset cover templates, the first cover that matches the at least one piece of video information.
Optionally, determining the first cover that matches the at least one piece of video information comprises:
searching the preset video resource;
determining, from the preset video resource, a fourth video that matches the at least one piece of video information; and
extracting the cover of the fourth video as the first cover.
Optionally, the fourth video is a video comprising content associated with the first scene information, or the fourth video is a video whose tone is associated with the first tone.
In another aspect, an embodiment of the present application provides an electronic device comprising:
a first acquiring unit, configured to acquire at least one piece of video information related to the video content of a first video; and
a first editing unit, configured to edit the first video based on the at least one piece of video information to obtain the first video with a first presentation effect.
An embodiment of the present application further provides an electronic device comprising:
a housing; and
a processor arranged in the housing;
wherein the processor is configured to acquire at least one piece of video information related to the video content of a first video, and to edit the first video based on the at least one piece of video information to obtain the first video with a first presentation effect.
Optionally, the processor is configured to:
acquire at least one key frame of at least part of the first video;
perform recognition analysis on the at least one key frame; and
acquire the at least one piece of video information related to the video content of the first video.
Optionally, the processor is configured to:
perform content recognition on the at least one key frame to analyze at least one object in the at least one key frame; and
obtain, based on the at least one object, first scene information corresponding to the first video.
Optionally, the processor is configured to:
perform color recognition on the at least one key frame to analyze at least one picture color feature of the at least one key frame; and
obtain, based on the at least one picture color feature, a first tone of the first video.
Optionally, the processor is configured to:
acquire first name information of the first video, wherein the first name information is specifically information manually added to the first video by a user.
Optionally, the processor is configured to:
determine a first audio that matches the at least one piece of video information;
add the first audio to the first video as background music; and
obtain the first video with the first audio as its background music.
Optionally, the processor is configured to:
search a preset audio resource; and
choose, from the preset audio resource, the first audio that matches the at least one piece of video information.
Optionally, the processor is configured to:
search a preset video resource;
determine, from the preset video resource, a second video that matches the at least one piece of video information; and
extract the music in the second video as the first audio.
Optionally, the second video is a video comprising content associated with the first scene information, or the second video is a video whose tone is associated with the first tone.
Optionally, the processor is configured to:
determine M output parameter values of M output parameters, relevant to the presentation effect, that match the at least one piece of video information;
set the values of the M output parameters of the first video to the M output parameter values; and
obtain the first video with the M output parameter values.
Optionally, the processor is configured to:
obtain N dominant hues corresponding to N preset presentation effects; and
select, based on the at least one piece of video information and the N dominant hues, the M output parameter values of a first presentation effect from among the N preset presentation effects, wherein the first presentation effect is specifically a presentation effect whose dominant hue matches the at least one piece of video information.
Optionally, the processor is configured to:
search the preset video resource;
determine, from the preset video resource, a third video that matches the at least one piece of video information; and
extract the values of the M output parameters in the third video as the M output parameter values.
Optionally, the processor is configured to:
determine a first cover that matches the at least one piece of video information;
add the first cover to the first video as its video cover; and
obtain the first video with the first cover as its video cover.
Optionally, the processor is configured to:
search preset cover templates; and
choose, from the preset cover templates, the first cover that matches the at least one piece of video information.
Optionally, the processor is configured to:
search the preset video resource;
determine, from the preset video resource, a fourth video that matches the at least one piece of video information; and
extract the cover of the fourth video as the first cover.
The one or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
First, the technical solutions in the embodiments of the present application adopt the technical means of acquiring at least one piece of video information related to the video content of a first video and editing the first video based on that information to obtain the first video with a first presentation effect. In this way, when the user edits the first video, the electronic device can automatically add a corresponding presentation effect to the first video based on information related to its content, without having to edit the video through the user's manual operations. This effectively solves the technical problem that electronic devices in the prior art have a low degree of intelligence in video editing, achieving the technical effect of improving that degree of intelligence.
Second, the technical solutions adopt the technical means of acquiring at least one key frame of at least part of the first video, performing recognition analysis on the at least one key frame, and acquiring at least one piece of video information related to the video content of the first video. In this way, when the electronic device edits the first video automatically, it only needs to extract and analyze some key frames of the video rather than process the entire video, which greatly reduces the device's computational load and achieves the technical effect of improving processing speed.
Third, the technical solutions adopt the technical means of determining a first audio that matches the at least one piece of video information, adding the first audio to the first video as background music, and obtaining the first video with the first audio as its background music. In this way, when the electronic device edits the first video automatically, it can add matching background music to the first video according to the acquired video information related to the first video, achieving the technical effect of automatically adding matching background music to a video during editing.
Fourth, the technical solutions adopt the technical means of determining M output parameter values of M output parameters, relevant to the presentation effect, that match the at least one piece of video information, setting the values of the M output parameters of the first video to those M values, and obtaining the first video with the M output parameter values. In this way, when the electronic device edits the first video automatically, it can add a matching presentation effect, such as a display style, contrast, or saturation, to the first video according to the acquired video information, achieving the technical effect of automatically adding a matching presentation effect to a video during editing.
Fifth, the technical solutions adopt the technical means of determining a first cover that matches the at least one piece of video information, adding the first cover to the first video as its video cover, and obtaining the first video with the first cover as its video cover. In this way, when the electronic device edits the first video automatically, it can add a matching video cover to the first video according to the acquired video information, achieving the technical effect of automatically adding a matching video cover to a video during editing.
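The parameter-setting means described in the fourth technical effect above can be sketched roughly as follows (a minimal sketch: the `Video` structure, the preset-effect table, and the specific parameter names and values are illustrative assumptions, not part of the patent):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Video:
    name: str
    # M output parameters relevant to the presentation effect, e.g. contrast
    # and saturation, stored as parameter -> value.
    output_params: Dict[str, float] = field(default_factory=dict)

# Hypothetical table: each of N preset presentation effects carries matched
# output parameter values.
PRESET_EFFECTS = {
    "warm-family": {"contrast": 1.1, "saturation": 1.2},
    "cool-blue": {"contrast": 0.9, "saturation": 0.8},
}

def apply_effect(video: Video, effect: str) -> Video:
    """Set the video's M output parameters to the M matched values."""
    video.output_params.update(PRESET_EFFECTS[effect])
    return video
```

In a real implementation the effect would be chosen by matching the acquired video information (scene, tone, name) against the presets, and the parameter values would drive the renderer.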
Brief description of the drawings
To explain the embodiments of the present application or the technical solutions in the prior art more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings described below show only some embodiments of the present invention.
Fig. 1 is a flowchart of an information processing method provided in Embodiment 1 of the present application;
Fig. 2 is a flowchart of a first specific implementation of step S101 in Embodiment 1;
Fig. 3 is a flowchart of a first specific implementation of step S102 in Embodiment 1;
Fig. 4 is a flowchart of a first specific implementation of step S10211 in Embodiment 1;
Fig. 5 is a flowchart of a second specific implementation of step S10211 in Embodiment 1;
Fig. 6 is a flowchart of a second specific implementation of step S102 in Embodiment 1;
Fig. 7 is a flowchart of a first specific implementation of step S10221 in Embodiment 1;
Fig. 8 is a flowchart of a second specific implementation of step S10221 in Embodiment 1;
Fig. 9 is a flowchart of a third specific implementation of step S102 in Embodiment 1;
Fig. 10 is a flowchart of a first specific implementation of step S10231 in Embodiment 1;
Fig. 11 is a flowchart of a second specific implementation of step S10231 in Embodiment 1;
Fig. 12 is a structural block diagram of an electronic device provided in Embodiment 2 of the present application;
Fig. 13 is a schematic diagram of an electronic device provided in Embodiment 3 of the present application.
Detailed description of the embodiments
The embodiments of the present application provide an information processing method and an electronic device to solve the technical problem that electronic devices in the prior art have a low degree of intelligence in video editing, thereby achieving the technical effect of improving that degree of intelligence.
The technical solutions in the embodiments of the present application solve the above technical problem with the following general idea:
acquire at least one piece of video information related to the video content of a first video; and
edit the first video based on the at least one piece of video information to obtain the first video with a first presentation effect.
In the above technical solution, the technical means of acquiring at least one piece of video information related to the video content of the first video and editing the first video based on that information to obtain the first video with a first presentation effect are adopted. In this way, when the user edits the first video, the electronic device can automatically add a corresponding presentation effect to the first video based on information related to its content, without having to edit the video through the user's manual operations. This effectively solves the technical problem that electronic devices in the prior art have a low degree of intelligence in video editing, achieving the technical effect of improving that degree of intelligence.
For a better understanding of the above technical solution, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments of the present application are a detailed description of the technical solution of the present invention rather than a limitation of it, and that, where no conflict arises, the technical features in the embodiments of the present application may be combined with each other.
Embodiment 1
Referring to Fig. 1, a flowchart of the information processing method provided in Embodiment 1 of the present application, the method comprises the following steps:
S101: acquiring at least one piece of video information related to the video content of a first video;
S102: editing the first video based on the at least one piece of video information to obtain the first video with a first presentation effect.
In a specific implementation process, the information processing method may be applied to a smartphone, a notebook computer, or a tablet computer, or to any other electronic device capable of editing video; no exhaustive enumeration is given here. In the embodiments of the present application, the implementation of the method is described in detail by taking as an example the application of the information processing method to video editing software on a notebook computer.
When information processing is performed with the technical solution of the present application, step S101 is executed first, that is: acquiring at least one piece of video information related to the video content of the first video.
In Embodiment 1 of the present application, step S101 can be implemented in the following two ways:
In the first way, referring to Fig. 2, step S101 is implemented as:
S1011: acquiring at least one key frame of at least part of the first video;
S1012: performing recognition analysis on the at least one key frame;
S1013: acquiring at least one piece of video information related to the video content of the first video.
In the first way, the processing in the specific implementation further has the following two situations:
Situation A: step S1012 is implemented by performing content recognition on the at least one key frame to analyze at least one object in the at least one key frame;
correspondingly, step S1013 is implemented by obtaining, based on the at least one object, first scene information corresponding to the first video.
In a specific implementation process, take the video editing software on a notebook computer as an example. Suppose the user edits, on the notebook computer, a video shot at a New Year's Eve family dinner. After the video is opened, the editing software automatically acquires part of its content: this may be the middle three minutes of the dinner video, the first three minutes, or even the whole video; those skilled in the art can decide, according to actual needs, which time segment of which part of the video to acquire, and this is not restricted in the embodiments of the present application. Taking the middle three minutes as an example, after obtaining that segment the editing software extracts its key frame images, e.g., one frame every 30 s, yielding 6 key frame images; of course, the interval between key frames can be set by those skilled in the art according to actual needs. The software then recognizes the 6 extracted key frames and obtains the objects in each image: say, roast duck in the 1st frame, fish in the 2nd, tofu in the 3rd, chicken in the 4th, Chinese cabbage in the 5th, and rice in the 6th. Based on these 6 objects, the editing software determines that the scene of the current video is a family reunion dinner.
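The keyframe sampling and object-based scene inference walked through above can be sketched roughly as follows (a minimal sketch: the 30 s interval follows the example, but the scene-keyword table and the overlap-counting rule are illustrative assumptions, and the object detector itself is taken as given):

```python
from typing import List

def keyframe_timestamps(duration_s: float, interval_s: float = 30.0) -> List[float]:
    """Sample one frame every interval_s seconds across the segment."""
    t, stamps = 0.0, []
    while t < duration_s:
        stamps.append(t)
        t += interval_s
    return stamps

# Hypothetical mapping from detected objects to a scene label.
SCENE_KEYWORDS = {
    "family reunion dinner": {"roast duck", "fish", "tofu", "chicken", "cabbage", "rice"},
}

def infer_scene(detected_objects: List[str]) -> str:
    """Pick the scene whose keyword set overlaps most with the detected objects."""
    best, best_hits = "unknown", 0
    for scene, keywords in SCENE_KEYWORDS.items():
        hits = len(keywords & set(detected_objects))
        if hits > best_hits:
            best, best_hits = scene, hits
    return best

# For a 180 s segment sampled every 30 s this yields the 6 keyframes of the example.
```

Pairing `keyframe_timestamps` with a frame grabber and an object detector would reproduce the flow of steps S1011 to S1013.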
Situation B: step S1012 is implemented by performing color recognition on the at least one key frame to analyze at least one picture color feature of the at least one key frame;
correspondingly, step S1013 is implemented by obtaining, based on the at least one picture color feature, a first tone of the first video.
In a specific implementation process, continuing the example above, after obtaining the middle three minutes of the New Year's Eve dinner video, the editing software extracts its key frame images, e.g., one frame every 30 s, again yielding 6 key frames, the interval being settable by those skilled in the art according to actual needs. The software then recognizes the 6 extracted key frames and obtains the main color of each image; specifically, it may obtain all the colors in each image and take the color occupying the largest proportion of the picture as the image's main color, though other methods may also be used and this application imposes no restriction. Continuing the example: the 1st frame shows roast duck and its main color is red; the 2nd shows fish, main color cyan; the 3rd shows tofu, main color white; the 4th shows chicken, main color white; the 5th contains Chinese cabbage, main color white; the 6th contains rice, main color white. Based on the main colors of the 6 key frames, the editing software determines that the tone of the current video is white.
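The ratio rule in this paragraph — the color occupying the largest share of a picture is its main color, and the video's tone is the main color that dominates across keyframes — can be sketched as follows (a minimal illustration over symbolic color labels; real code would quantize decoded RGB frames):

```python
from collections import Counter
from typing import List

Color = str  # e.g. "red", "white" — in practice a quantized RGB bucket

def dominant_color(pixels: List[Color]) -> Color:
    """The color occupying the largest share of the image is its main color."""
    return Counter(pixels).most_common(1)[0][0]

def video_tone(keyframe_colors: List[Color]) -> Color:
    """The video's tone is the main color that dominates across its keyframes."""
    return Counter(keyframe_colors).most_common(1)[0][0]

# Per the example: keyframe main colors red, cyan, white, white, white, white
# give a video tone of white.
```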
In the second way, step S101 is implemented as:
acquiring first name information of the first video, wherein the first name information is specifically information manually added to the first video by the user.
In a specific implementation process, continuing the example above, after the video editing software on the user's notebook computer obtains the video the user wants to edit, it automatically acquires the video's name information; e.g., if the user has named the video "New Year's Eve dinner", the name information obtained by the software is "New Year's Eve dinner". If the software detects that the video is named "1.rmvb", it can remind the user to rename the video and then obtain the name information entered by the user. Of course, the name information of the video may also be obtained in other ways, which is not restricted in the embodiments of the present application.
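A hedged sketch of the naming fallback described here (the numeric-stem heuristic for spotting an uninformative default name like "1.rmvb" is an assumption; the patent only says the software reminds the user to rename):

```python
import re

def needs_rename(filename: str) -> bool:
    """Treat purely numeric names like '1.rmvb' as uninformative defaults."""
    stem = filename.rsplit(".", 1)[0]
    return bool(re.fullmatch(r"\d+", stem))

def name_info(filename: str, user_prompt=lambda: "New Year's Eve dinner") -> str:
    """Return the video's name information, asking the user when the name is a default."""
    if needs_rename(filename):
        return user_prompt()  # e.g. remind the user to rename, then read the input
    return filename.rsplit(".", 1)[0]
```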
After step S101 is completed, the method in the embodiments of the present application executes step S102, that is: editing the first video based on the at least one piece of video information to obtain the first video with a first presentation effect.
In the embodiments of the present application, step S102 can be implemented in the following three situations:
In the first way, referring to Fig. 3, step S102 is implemented as:
S10211: determining a first audio that matches the at least one piece of video information;
S10212: adding the first audio to the first video as background music;
S10213: obtaining the first video with the first audio as its background music.
In the embodiments of the present application, step S10211 can be implemented in the following two situations:
Situation A, referring to Fig. 4:
S1021111: searching a preset audio resource;
S1021112: choosing, from the preset audio resource, the first audio that matches the at least one piece of video information.
In a specific implementation process, continuing the example above, after obtaining the video's related information, the editing software automatically adds background music to the video. Taking the related information "family reunion dinner scene" as an example, the editing software determines that the keywords associated with that scene are "family" and "reunion", and then searches all the music stored on the notebook computer. Suppose three songs are stored: "You Are My Eyes", "Go Home Often", and "The Rose". The software matches the titles of the three songs against the keywords, determines that the matching song is "Go Home Often", and therefore takes "Go Home Often" as the first audio matching the New Year's Eve dinner video. When the related information obtained is the title of the video or its dominant hue, the matching first audio is obtained in the same way, which is not repeated in the embodiments of the present application.
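The keyword matching in this example can be sketched as follows (a minimal sketch; the scene-to-keyword table, including the extra keyword "home", is a hypothetical illustration, not the patent's specification):

```python
from typing import List, Optional

# Hypothetical table: the patent's example associates the family reunion
# dinner scene with keywords such as "family" and "reunion".
SCENE_TO_KEYWORDS = {
    "family reunion dinner": ["family", "reunion", "home"],
}

def pick_background_music(scene: str, library: List[str]) -> Optional[str]:
    """Choose the first song whose title contains any keyword for the scene."""
    keywords = SCENE_TO_KEYWORDS.get(scene, [])
    for title in library:
        if any(kw in title.lower() for kw in keywords):
            return title
    return None
```

With the library ["You Are My Eyes", "Go Home Often", "The Rose"], the reunion-dinner scene matches "Go Home Often" via the assumed keyword "home".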
Situation B, please refer to Fig. 5:
S1021121: searching a preset video resource;
S1021122: determining, from the preset video resource, a second video that matches the at least one piece of video information;
S1021123: extracting the music in the second video as the first audio.
In the embodiment of the present application, the second video is a video that includes content associated with the first scene information, or the second video is a video having a tone associated with the first tone.
In a specific implementation process, continuing the above example, after the video editor obtains the information related to the video, it automatically adds background music to the video. Take the case in which the obtained information related to the New Year dinner video is the family reunion dinner scene: the video editor searches the video resource stored in the notebook computer and, using the method in the embodiment of the present application, determines which stored videos are associated with the family reunion dinner scene. For example, the videos stored in the notebook computer are "A Bite of China", "My Day" and "I Love My Family". Since the main objects in the family reunion dinner scene are food and family members, and "A Bite of China" contains food while "I Love My Family" contains a home and family members, the video editor determines that "A Bite of China" and "I Love My Family" both match the New Year dinner video. At this point, the video editor may generate a prompt asking the user to choose the desired video from the two, or it may itself select the better-matching video according to the matching degree: since the object appearing most in the New Year dinner video is food, the video editor automatically determines that "A Bite of China" is the video with the highest matching degree, and then extracts the background music of "A Bite of China" as the first audio. When the video editor has searched all the videos stored in the notebook computer without finding a video matching the New Year dinner video, it may also automatically search online; the search keywords may be keywords associated with the video scene, such as "New Year dinner" and "reunion". Of course, those skilled in the art may determine the search keywords in other ways, which is not limited in the embodiment of the present application. When the obtained information related to the New Year dinner video is the title of the video or the dominant hue of the video, the matching first audio is obtained in the same way, which is not repeated in the embodiment of the present application.
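The matching-degree selection in situation B can be sketched as below. The object-tag representation of candidate videos, the scoring by shared objects, and the tie-break by the scene's most frequent object are all illustrative assumptions; the video titles follow the worked example.

```python
# Hypothetical sketch of step S1021122: candidate videos from the preset
# video resource are scored by how many of the scene's objects they contain;
# a tie is broken by the object appearing most in the first video.

def match_video(scene_objects, candidates, dominant_object=None):
    """candidates maps a video title -> set of objects tagged in that video."""
    scores = {t: len(scene_objects & objs) for t, objs in candidates.items()}
    best = max(scores.values())
    tied = [t for t, s in scores.items() if s == best and s > 0]
    if len(tied) > 1 and dominant_object is not None:
        tied = [t for t in tied if dominant_object in candidates[t]] or tied
    return tied[0] if tied else None  # None -> fall back to online search

library = {
    "A Bite of China": {"food"},
    "My Day": {"office"},
    "I Love My Family": {"home", "family"},
}
# Food appears most in the dinner video, so the food documentary wins the
# tie and its soundtrack would then be extracted as the first audio (S1021123).
print(match_video({"food", "family"}, library, dominant_object="food"))  # "A Bite of China"
```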
In the second way, please refer to Fig. 6; step S102 is specifically implemented as:
S10221: determining M output parameter values, matching the at least one piece of video information, of M output parameters related to the presentation effect;
S10222: setting the values of the M output parameters of the first video to the M output parameter values;
S10223: obtaining a first video having the M output parameter values.
In the embodiment of the present application, step S10221 can be specifically implemented in the following two situations:
Situation A, please refer to Fig. 7:
S1022111: obtaining N dominant hues corresponding to N preset presentation effects;
S1022112: selecting, based on the at least one piece of video information and the N dominant hues, the M output parameter values from a first presentation effect among the N preset presentation effects, where the first presentation effect is specifically the presentation effect whose dominant hue matches the at least one piece of video information.
In a specific implementation process, continuing the above example, after the video editor obtains the information related to the video, it automatically adds a presentation effect to the video, e.g. a display style of the video (fresh style, vintage style, etc.), contrast, saturation, and so on. Take the case in which the obtained information related to the New Year dinner video is that its dominant hue is white, and a display style is to be added automatically: the video editor first obtains all the display styles prestored in the notebook computer. For example, five display styles are prestored, namely fresh style, vintage style, bluish-white style, forest style and black-and-white style. The dominant hue of each display style is then determined: for instance, the dominant hue of the fresh style is light blue, that of the vintage style is aquamarine, that of the bluish-white style is off-white, that of the forest style is light yellow, and that of the black-and-white style is grey. The dominant hues of the five display styles are then compared with the dominant hue of the New Year dinner video to obtain the display style whose dominant hue is closest to that of the video. Since the dominant hue of the New Year dinner video is white, the bluish-white style, whose dominant hue is off-white, is determined to be the best-matching display style, and its display parameters, such as exposure, tonal scale and sharpness, are then obtained. When the obtained information related to the New Year dinner video is the scene of the video, the matching output parameters are obtained in the same way; for example, the video editor determines that the keywords of the family reunion dinner scene are "warmth" and "celebration", so the corresponding display parameters should be warm-toned, a warm-toned display style is determined from the prestored display styles, and the output parameters are obtained from it. Of course, other matching manners may also be adopted, which is not limited in this application. When the obtained information related to the New Year dinner video is the title of the video, the same method as described above is adopted, which is not repeated in the embodiment of the present application.
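The hue comparison of steps S1022111-S1022112 can be sketched as below. Representing hues as RGB triples and using squared Euclidean distance is one plausible realisation only; the style names, hues, and parameter names are illustrative assumptions.

```python
# Hypothetical sketch of situation A of step S10221: the video's dominant hue
# is compared with the dominant hue of each preset display style, and the
# output parameters of the nearest style are adopted for the first video.

def nearest_style(video_hue, styles):
    """styles maps a style name -> (dominant RGB hue, output parameters)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(styles, key=lambda s: dist(video_hue, styles[s][0]))

preset_styles = {
    "fresh":   ((173, 216, 230), {"saturation": 1.1}),  # light blue
    "vintage": ((112, 66, 20),   {"saturation": 0.8}),  # brownish
    "bluish":  ((240, 248, 255), {"contrast": 1.05}),   # off-white
}
# The dinner video's dominant hue is near white, so the off-white "bluish"
# style is selected; its parameters are then applied in step S10222.
style = nearest_style((255, 255, 255), preset_styles)
print(style)  # "bluish"
```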
Situation B, please refer to Fig. 8:
S1022121: searching the preset video resource;
S1022122: determining, from the preset video resource, a third video that matches the at least one piece of video information;
S1022123: extracting the values of the M output parameters in the third video as the M output parameter values.
In the embodiment of the present application, the third video is a video that includes content associated with the first scene information, or the third video is a video having a tone associated with the first tone.
In a specific implementation process, continuing the above example, after the video editor obtains the information related to the video, it automatically adds a presentation effect to the video, e.g. a display style of the video (fresh style, vintage style, etc.), contrast, saturation, and so on. Take the case in which the obtained information related to the New Year dinner video is the family reunion dinner scene, and a display style is to be added automatically: the video editor searches the video resource stored in the notebook computer and, using the method in the embodiment of the present application, determines which stored videos are associated with the family reunion dinner scene. For example, the videos stored in the notebook computer are "A Bite of China", "My Day" and "I Love My Family". Since the main objects in the family reunion dinner scene are food and family members, and "A Bite of China" contains food while "I Love My Family" contains a home and family members, the video editor determines that "A Bite of China" and "I Love My Family" both match the New Year dinner video. At this point, the video editor may generate a prompt asking the user to choose the desired video from the two, or it may itself select the better-matching video according to the matching degree: since the object appearing most in the New Year dinner video is food, the video editor automatically determines that "A Bite of China" is the video with the highest matching degree, takes the display style of "A Bite of China" as the matching display style, and obtains the display output parameters of "A Bite of China". When the video editor has searched all the videos stored in the notebook computer without finding a video matching the New Year dinner video, it may also automatically search online; the search keywords may be keywords associated with the video scene, such as "New Year dinner" and "reunion". Of course, those skilled in the art may determine the search keywords in other ways, which is not limited in the embodiment of the present application. When the obtained information related to the New Year dinner video is the title of the video or the dominant hue of the video, the matching output parameters are obtained in the same way, which is not repeated in the embodiment of the present application.
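Once a third video is matched, step S1022123 amounts to copying its M output-parameter values onto the first video. The dictionary layout and parameter names below (exposure, tonal scale, sharpness, as in the earlier example) are illustrative assumptions.

```python
# Hypothetical sketch of steps S1022123 and S10222: read the values of the M
# presentation-related output parameters from the matched third video and set
# them on the first video.

def copy_output_parameters(matched_video, first_video, parameter_names):
    """Set the M output parameters of first_video from matched_video."""
    for name in parameter_names:
        first_video[name] = matched_video[name]
    return first_video

reference = {"exposure": 0.3, "tonal_scale": 1.2, "sharpness": 0.7}
dinner = {"title": "New Year Dinner"}
edited = copy_output_parameters(reference, dinner,
                                ["exposure", "tonal_scale", "sharpness"])
print(edited["exposure"], edited["sharpness"])  # 0.3 0.7
```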
In the third way, please refer to Fig. 9; step S102 is specifically implemented as:
S10231: determining a first cover that matches the at least one piece of video information;
S10232: adding the first cover to the first video as the video cover;
S10233: obtaining a first video having the first cover as its video cover.
In the embodiment of the present application, step S10231 can be specifically implemented in the following two situations:
Situation A, please refer to Figure 10:
S1023111: searching preset cover templates;
S1023112: selecting, from the preset cover templates, a first cover that matches the at least one piece of video information.
In a specific implementation process, continuing the above example, after the video editor obtains the information related to the video, it automatically adds a cover to the video. Take the case in which the obtained information related to the New Year dinner video is the family reunion dinner scene: the video editor searches the cover templates stored in the notebook computer; for example, the stored cover templates are the "Dad, Where Are We Going poster", the "I Love My Family poster" and the "Romantic Home poster". The video editor then determines that the keywords associated with the family reunion dinner scene are "family" and "reunion", matches the prestored cover templates against the keywords, determines that the cover template matching the keywords is the "I Love My Family poster", and thereby determines the "I Love My Family poster" to be the first cover. Of course, in actually producing the cover, the video editor may extract the portraits of all the persons in the New Year dinner video and replace the actors' portraits in the "I Love My Family poster" with the persons' portraits from the New Year dinner video; it may also directly use the cover template, which is not limited in the embodiment of the present application.
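The template selection and optional portrait replacement of situation A can be sketched as below. The tag-list representation of templates, the scoring rule, and the simple portrait-swap model are assumptions for illustration only.

```python
# Hypothetical sketch of steps S1023111-S1023112: pick the preset cover
# template whose tags best match the scene keywords, then optionally swap the
# template's actor portraits for portraits extracted from the video.

def pick_cover(scene_keywords, templates):
    """templates maps a template name -> list of tags; return the best match."""
    score = lambda tags: sum(kw in tags for kw in scene_keywords)
    best = max(templates, key=lambda t: score(templates[t]))
    return best if score(templates[best]) > 0 else None

def personalize(template, video_faces):
    """Replace the template's portraits with portraits from the first video."""
    return {**template, "portraits": video_faces}

templates = {
    "Dad Where Are We Going poster": ["travel", "kids"],
    "I Love My Family poster": ["home", "family"],
    "Romantic poster": ["romance"],
}
print(pick_cover(["home", "reunion"], templates))  # "I Love My Family poster"
```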
Situation B, please refer to Figure 11:
S1023121: searching the preset video resource;
S1023122: determining, from the preset video resource, a fourth video that matches the at least one piece of video information;
S1023123: extracting the cover of the fourth video as the first cover.
In the embodiment of the present application, the fourth video is a video that includes content associated with the first scene information, or the fourth video is a video having a tone associated with the first tone.
In a specific implementation process, continuing the above example, after the video editor obtains the information related to the video, it automatically adds a cover to the video. Take the case in which the obtained information related to the New Year dinner video is the family reunion dinner scene: the video editor searches the video resource stored in the notebook computer and, using the method in the embodiment of the present application, determines which stored videos are associated with the family reunion dinner scene. For example, the videos stored in the notebook computer are "Beautiful China", "My Day" and "I Love My Family". Since the main objects in the family reunion dinner scene are food and family members, and "I Love My Family" contains a home and family members, the video editor determines that "I Love My Family" is the video matching the New Year dinner video. When the video editor determines that there are multiple matching videos, it may generate a prompt asking the user to choose the desired video from among them, or it may itself select the better-matching video according to the matching degree, which is not limited in the embodiment of the present application. After the video editor determines "I Love My Family" to be the matching video, it extracts the cover of "I Love My Family" as the first cover. When the video editor has searched all the videos stored in the notebook computer without finding a video matching the New Year dinner video, it may also automatically search online; the search keywords may be keywords associated with the video scene, such as "New Year dinner" and "reunion". Of course, those skilled in the art may determine the search keywords in other ways, which is not limited in the embodiment of the present application. When the obtained information related to the New Year dinner video is the title of the video or the dominant hue of the video, the matching first cover is obtained in the same way, which is not repeated in the embodiment of the present application.
Embodiment two
Based on the same inventive concept as embodiment one of the present application, embodiment two of the present application provides an electronic device; please refer to Figure 12. The electronic device includes:
a first acquiring unit 101, configured to obtain at least one piece of video information related to the video content of a first video;
a first editing unit 102, configured to edit the first video based on the at least one piece of video information to obtain a first video having a first presentation effect.
In embodiment two of the present application, the first acquiring unit 101 includes:
a first acquiring module, configured to obtain at least one key frame of at least part of the first video;
a first analysis module, configured to perform recognition analysis on the at least one key frame;
a second acquiring module, configured to obtain at least one piece of video information related to the video content of the first video.
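The first acquiring module only needs a sparse set of key frames rather than the whole video, which is where the reduced computation comes from. The uniform-sampling strategy below is one plausible realisation and an assumption on our part; the embodiment only requires "at least one key frame of at least part of the first video".

```python
# Hypothetical sketch of the first acquiring module: sample a small number of
# evenly spaced frame indices from the first video to serve as key frames.

def sample_key_frames(total_frames, max_keys=8):
    """Return evenly spaced frame indices to use as key frames."""
    if total_frames <= max_keys:
        return list(range(total_frames))
    step = total_frames / max_keys
    return [int(i * step) for i in range(max_keys)]

# A 2-minute clip at 25 fps has 3000 frames; only 8 are analysed, so the
# recognition analysis never touches the other 2992 frames.
print(sample_key_frames(3000))  # [0, 375, 750, 1125, 1500, 1875, 2250, 2625]
```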
In embodiment two of the present application, the first analysis module includes:
a first analysis submodule, configured to perform content recognition on the at least one key frame and analyze at least one object in the at least one key frame;
correspondingly, the second acquiring module includes:
a first obtaining submodule, configured to obtain, based on the at least one object, first scene information corresponding to the first video.
In embodiment two of the present application, the first analysis module includes:
a second analysis submodule, configured to perform color recognition on the at least one key frame and analyze at least one picture color feature of the at least one key frame;
correspondingly, the second acquiring module includes:
a second obtaining submodule, configured to obtain, based on the at least one picture color feature, a first tone of the first video.
In embodiment two of the present application, the second acquiring module includes:
a third obtaining submodule, configured to obtain first name information of the first video, where the first name information is specifically information manually added to the first video by a user.
In embodiment two of the present application, the first editing unit 102 includes:
a first determining module, configured to determine a first audio that matches the at least one piece of video information;
a first setting module, configured to add the first audio to the first video as background music;
a third acquiring module, configured to obtain a first video having the first audio as background music.
In embodiment two of the present application, the first determining module includes:
a first searching submodule, configured to search a preset audio resource;
a first selecting submodule, configured to select, from the preset audio resource, a first audio that matches the at least one piece of video information.
In embodiment two of the present application, the first determining module includes:
a second searching submodule, configured to search a preset video resource;
a first determining submodule, configured to determine, from the preset video resource, a second video that matches the at least one piece of video information;
a first extracting submodule, configured to extract the music in the second video as the first audio.
In embodiment two of the present application, the first editing unit 102 includes:
a second determining module, configured to determine M output parameter values, matching the at least one piece of video information, of M output parameters related to the presentation effect;
a second setting module, configured to set the values of the M output parameters of the first video to the M output parameter values;
a fourth acquiring module, configured to obtain a first video having the M output parameter values.
In embodiment two of the present application, the second determining module includes:
a fourth obtaining submodule, configured to obtain N dominant hues corresponding to N preset presentation effects;
a second selecting submodule, configured to select, based on the at least one piece of video information and the N dominant hues, the M output parameter values from a first presentation effect among the N preset presentation effects, where the first presentation effect is specifically the presentation effect whose dominant hue matches the at least one piece of video information.
In embodiment two of the present application, the second determining module includes:
a third searching submodule, configured to search the preset video resource;
a second determining submodule, configured to determine, from the preset video resource, a third video that matches the at least one piece of video information;
a second extracting submodule, configured to extract the values of the M output parameters in the third video as the M output parameter values.
In embodiment two of the present application, the first editing unit 102 includes:
a third determining module, configured to determine a first cover that matches the at least one piece of video information;
a third setting module, configured to add the first cover to the first video as the video cover;
a fifth acquiring module, configured to obtain a first video having the first cover as its video cover.
In embodiment two of the present application, the third determining module includes:
a fourth searching submodule, configured to search preset cover templates;
a third selecting submodule, configured to select, from the preset cover templates, a first cover that matches the at least one piece of video information.
In embodiment two of the present application, the third determining module includes:
a fifth searching submodule, configured to search the preset video resource;
a third determining submodule, configured to determine, from the preset video resource, a fourth video that matches the at least one piece of video information;
a third extracting submodule, configured to extract the cover of the fourth video as the first cover.
Embodiment three
Based on the same inventive concept as embodiment one of the present application, embodiment three of the present application provides an electronic device; please refer to Figure 13. The electronic device includes:
a housing 10;
a processor 20, arranged in the housing 10;
where the processor 20 is configured to obtain at least one piece of video information related to the video content of a first video, and to edit the first video based on the at least one piece of video information to obtain a first video having a first presentation effect.
In embodiment three of the present application, the processor 20 is configured to:
obtain at least one key frame of at least part of the first video;
perform recognition analysis on the at least one key frame;
obtain at least one piece of video information related to the video content of the first video.
In embodiment three of the present application, the processor 20 is configured to:
perform content recognition on the at least one key frame and analyze at least one object in the at least one key frame;
obtain, based on the at least one object, first scene information corresponding to the first video.
In embodiment three of the present application, the processor 20 is configured to:
perform color recognition on the at least one key frame and analyze at least one picture color feature of the at least one key frame;
obtain, based on the at least one picture color feature, a first tone of the first video.
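The color-recognition path can be sketched as below. Reducing each key frame's picture color feature to a mean RGB value and averaging over the key frames is just one plausible realisation of obtaining the first tone, and an assumption on our part.

```python
# Hypothetical sketch of the color-recognition path: a per-frame mean RGB is
# the picture color feature, and the first tone is their average.

def frame_mean_rgb(pixels):
    """pixels: iterable of (r, g, b) tuples for one key frame."""
    n = 0
    sums = [0, 0, 0]
    for r, g, b in pixels:
        sums[0] += r; sums[1] += g; sums[2] += b
        n += 1
    return tuple(s // n for s in sums)

def first_tone(key_frames):
    """Average the per-frame mean colors into the video's first tone."""
    means = [frame_mean_rgb(f) for f in key_frames]
    return tuple(sum(c[i] for c in means) // len(means) for i in range(3))

# Two key frames of mostly white pixels give a near-white first tone, as in
# the New Year dinner example where the dominant hue is white.
frames = [[(250, 250, 250), (240, 244, 248)], [(255, 255, 255)]]
print(first_tone(frames))  # (250, 251, 252)
```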
In embodiment three of the present application, the processor 20 is configured to:
obtain first name information of the first video, where the first name information is specifically information manually added to the first video by a user.
In embodiment three of the present application, the processor 20 is configured to:
determine a first audio that matches the at least one piece of video information;
add the first audio to the first video as background music;
obtain a first video having the first audio as background music.
In embodiment three of the present application, the processor 20 is configured to:
search a preset audio resource;
select, from the preset audio resource, a first audio that matches the at least one piece of video information.
In embodiment three of the present application, the processor 20 is configured to:
search a preset video resource;
determine, from the preset video resource, a second video that matches the at least one piece of video information;
extract the music in the second video as the first audio.
In embodiment three of the present application, the second video is a video that includes content associated with the first scene information, or the second video is a video having a tone associated with the first tone.
In embodiment three of the present application, the processor 20 is configured to:
determine M output parameter values, matching the at least one piece of video information, of M output parameters related to the presentation effect;
set the values of the M output parameters of the first video to the M output parameter values;
obtain a first video having the M output parameter values.
In embodiment three of the present application, the processor 20 is configured to:
obtain N dominant hues corresponding to N preset presentation effects;
select, based on the at least one piece of video information and the N dominant hues, the M output parameter values from a first presentation effect among the N preset presentation effects, where the first presentation effect is specifically the presentation effect whose dominant hue matches the at least one piece of video information.
In embodiment three of the present application, the processor 20 is configured to:
search the preset video resource;
determine, from the preset video resource, a third video that matches the at least one piece of video information;
extract the values of the M output parameters in the third video as the M output parameter values.
In embodiment three of the present application, the processor 20 is configured to:
determine a first cover that matches the at least one piece of video information;
add the first cover to the first video as the video cover;
obtain a first video having the first cover as its video cover.
In embodiment three of the present application, the processor 20 is configured to:
search preset cover templates;
select, from the preset cover templates, a first cover that matches the at least one piece of video information.
In embodiment three of the present application, the processor 20 is configured to:
search the preset video resource;
determine, from the preset video resource, a fourth video that matches the at least one piece of video information;
extract the cover of the fourth video as the first cover.
Through the one or more technical solutions in the embodiments of the present application, one or more of the following technical effects can be achieved:
First, the technical solutions in the embodiments of the present application adopt the technical means of obtaining at least one piece of video information related to the video content of a first video and editing the first video based on the at least one piece of video information to obtain a first video having a first presentation effect. In this way, when the user edits the first video, the electronic device can automatically add a corresponding presentation effect to the first video based on the information related to the video content, without the video having to be edited through the user's manual operations. This effectively solves the technical problem in the prior art that electronic devices have a low degree of intelligence in video editing, and achieves the technical effect of improving the degree of intelligence of video editing.
Second, the technical solutions in the embodiments of the present application adopt the technical means of obtaining at least one key frame of at least part of the first video, performing recognition analysis on the at least one key frame, and obtaining at least one piece of video information related to the video content of the first video. In this way, when the electronic device automatically edits the first video, it only needs to extract and analyze some key frames of the video rather than analyze and process the whole video, which greatly reduces the amount of computation of the electronic device and achieves the technical effect of improving processing speed.
Third, the technical solutions in the embodiments of the present application adopt the technical means of determining a first audio that matches the at least one piece of video information, adding the first audio to the first video as background music, and obtaining a first video having the first audio as background music. In this way, when the electronic device automatically edits the first video, it can automatically add matching background music to the first video according to the obtained video information related to the first video, achieving the technical effect of automatically adding matching background music to a video during video editing.
Fourth, the technical solutions in the embodiments of the present application adopt the technical means of determining M output parameter values, matching the at least one piece of video information, of M output parameters related to the presentation effect, setting the values of the M output parameters of the first video to the M output parameter values, and obtaining a first video having the M output parameter values. In this way, when the electronic device automatically edits the first video, it can automatically add a matching presentation effect, such as a display style, contrast or saturation, to the first video according to the obtained video information related to the first video, achieving the technical effect of automatically adding a matching presentation effect to a video during video editing.
Fifth, the technical solutions in the embodiments of the present application adopt the technical means of determining a first cover that matches the at least one piece of video information, adding the first cover to the first video as the video cover, and obtaining a first video having the first cover as its video cover. In this way, when the electronic device automatically edits the first video, it can automatically add a matching video cover to the first video according to the obtained video information related to the first video, achieving the technical effect of automatically adding a matching video cover to a video during video editing.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction device which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a sequence of operation steps is executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of the present application may be stored on a storage medium such as an optical disc, a hard disk or a USB flash drive. When the computer program instructions on the storage medium corresponding to the information processing method are read and executed by an electronic device, the following steps are performed:
obtaining at least one piece of video information related to the video content of a first video;
editing the first video based on the at least one piece of video information to obtain a first video having a first presentation effect.
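As a rough, non-limiting orientation (not part of the disclosed embodiments), the two steps above can be sketched in Python. `VideoInfo`, `edit_first_video` and the plan keys are hypothetical names chosen for illustration only:

```python
from dataclasses import dataclass

@dataclass
class VideoInfo:
    scene: str = ""            # e.g. first scene information
    dominant_hue: float = 0.0  # e.g. first tone, in degrees
    name: str = ""             # e.g. user-added first name information

def edit_first_video(video_path: str, info: VideoInfo) -> dict:
    """Bundle the three edit types described below into one edit plan:
    background music, presentation output parameters, and a video cover."""
    key = info.scene or info.name or "default"
    return {
        "source": video_path,
        "background_music": f"match:{key}",
        "output_params": {"hue_shift": info.dominant_hue},
        "cover": f"template:{key}",
    }

plan = edit_first_video("first_video.mp4", VideoInfo(scene="beach", dominant_hue=200.0))
```

The plan is deliberately declarative: the matching and rendering steps that fill it in are sketched under the corresponding optional steps below.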
Optionally, the computer program instructions stored on the storage medium corresponding to the step of obtaining at least one piece of video information related to the video content of the first video, when executed, specifically perform the following steps:
obtaining at least one key frame of at least a part of the first video;
performing recognition analysis on the at least one key frame;
obtaining at least one piece of video information related to the video content of the first video.
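By way of a non-limiting sketch (not claimed by the disclosure), key frames are often selected where the picture changes significantly. The stand-in below operates on lists of grayscale pixel values instead of decoded video frames, and the threshold is an illustrative assumption:

```python
def select_key_frames(frames, threshold=30.0):
    """Pick key-frame indices where the mean absolute pixel change from
    the previously kept frame exceeds `threshold`.

    `frames` is a list of equal-length grayscale pixel lists, a simplified
    stand-in for decoded video frames."""
    if not frames:
        return []
    keys = [0]          # the first frame is always kept as a key frame
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keys.append(i)
            last = frame
    return keys

frames = [[0] * 4, [0] * 4, [100] * 4, [105] * 4, [200] * 4]
print(select_key_frames(frames))  # -> [0, 2, 4]
```

Recognition analysis (object and color recognition) would then run only on the kept frames rather than on every frame.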
Optionally, the computer program instructions stored on the storage medium corresponding to the step of performing recognition analysis on the at least one key frame, when executed, specifically perform the following step:
performing content recognition on the at least one key frame to identify at least one object in the at least one key frame.
Correspondingly, the computer program instructions stored on the storage medium corresponding to the step of obtaining at least one piece of video information related to the video content of the first video, when executed, specifically perform the following step:
obtaining, based on the at least one object, first scene information corresponding to the first video.
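A minimal sketch of mapping recognized objects to scene information, assuming a hand-written rule table (the rules and scene names are illustrative, not from the disclosure):

```python
SCENE_RULES = {
    "beach":  {"sea", "sand", "umbrella"},
    "city":   {"car", "building", "traffic light"},
    "indoor": {"sofa", "table", "lamp"},
}

def infer_scene(objects):
    """Return the scene whose rule set overlaps the detected objects most,
    or 'unknown' when no rule matches at all."""
    counts = {scene: len(kws & set(objects)) for scene, kws in SCENE_RULES.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "unknown"

print(infer_scene(["sea", "sand", "person"]))  # -> beach
```

A production system would more likely use a trained scene classifier; the table merely makes the object-to-scene step concrete.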
Optionally, the computer program instructions stored on the storage medium corresponding to the step of performing recognition analysis on the at least one key frame, when executed, specifically perform the following step:
performing color recognition on the at least one key frame to determine at least one picture color feature of the at least one key frame.
Correspondingly, the computer program instructions stored on the storage medium corresponding to the step of obtaining at least one piece of video information related to the video content of the first video, when executed, specifically perform the following step:
obtaining, based on the at least one picture color feature, a first tone of the first video.
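One simple way to reduce picture color features to a single tone is a saturation-weighted average hue over key-frame pixels. This is a deliberately simplified sketch (it ignores hue wraparound at 0°/360°) and is not the method claimed by the disclosure:

```python
import colorsys

def dominant_hue(pixels):
    """Saturation-weighted average hue (0-360 degrees) of RGB pixels,
    so grey pixels contribute little. `pixels` is a list of (r, g, b)
    tuples with 0-255 channel values."""
    total, weight = 0.0, 0.0
    for r, g, b in pixels:
        h, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        total += h * 360 * s
        weight += s
    return total / weight if weight else 0.0

print(dominant_hue([(0, 255, 0)] * 3))  # -> 120.0 (green)
```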
Optionally, the computer program instructions stored on the storage medium corresponding to the step of obtaining at least one piece of video information related to the video content of the first video, when executed, specifically perform the following step:
obtaining first name information of the first video, wherein the first name information is specifically information manually added to the first video by a user.
Optionally, the computer program instructions stored on the storage medium corresponding to the step of editing the first video based on the at least one piece of video information to obtain a first video having a first presentation effect, when executed, specifically perform the following steps:
determining a first audio that matches the at least one piece of video information;
adding the first audio to the first video as background music;
obtaining a first video having the first audio as its background music.
Optionally, the computer program instructions stored on the storage medium corresponding to the step of determining a first audio that matches the at least one piece of video information, when executed, specifically perform the following steps:
searching a preset audio resource;
selecting, from the preset audio resource, a first audio that matches the at least one piece of video information.
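The "preset audio resource" can be pictured as a tagged library searched by overlap with the video information. The library entries, tags and file names below are hypothetical illustrations:

```python
AUDIO_LIBRARY = [
    {"file": "waves.mp3",   "tags": {"beach", "calm", "summer"}},
    {"file": "traffic.mp3", "tags": {"city", "busy"}},
    {"file": "piano.mp3",   "tags": {"indoor", "calm"}},
]

def pick_background_music(info_tags):
    """Return the preset entry sharing the most tags with the video
    information, or None when nothing matches."""
    tags = set(info_tags)
    best = max(AUDIO_LIBRARY, key=lambda e: len(e["tags"] & tags))
    return best["file"] if best["tags"] & tags else None

print(pick_background_music(["beach", "summer"]))  # -> waves.mp3
```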
Optionally, the computer program instructions stored on the storage medium corresponding to the step of determining a first audio that matches the at least one piece of video information, when executed, specifically perform the following steps:
searching a preset video resource;
determining, from the preset video resource, a second video that matches the at least one piece of video information;
extracting the music in the second video as the first audio.
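Extracting the audio track of a matched video is commonly done with an external tool such as ffmpeg. The sketch below only builds the command line (`-vn` drops the video stream, `-acodec copy` avoids re-encoding); running it assumes an ffmpeg binary on PATH, and the file names are illustrative:

```python
import subprocess  # only needed if the command is actually executed

def extract_audio_cmd(video_path, out_path="first_audio.m4a"):
    """Build an ffmpeg command that copies the audio stream out of a video
    without re-encoding it."""
    return ["ffmpeg", "-y", "-i", video_path, "-vn", "-acodec", "copy", out_path]

cmd = extract_audio_cmd("second_video.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually extract
```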
Optionally, the computer program instructions stored on the storage medium corresponding to the step of editing the first video based on the at least one piece of video information to obtain a first video having a first presentation effect, when executed, specifically perform the following steps:
determining M output parameter values of M presentation-related output parameters that match the at least one piece of video information;
setting the values of the M output parameters of the first video to the M output parameter values;
obtaining a first video having the M output parameter values.
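As an illustrative sketch only, the M output parameters can be held in a mapping and applied per pixel. Here M = 2 (a contrast gain and a brightness offset); the parameter names are assumptions, not terms from the disclosure:

```python
def apply_output_params(pixels, params):
    """Apply presentation output parameters to grayscale pixel values,
    clamped to the 0-255 range. `params` holds the M parameter values."""
    gain = params.get("contrast", 1.0)
    offset = params.get("brightness", 0)
    return [max(0, min(255, round(p * gain + offset))) for p in pixels]

print(apply_output_params([100, 200], {"contrast": 1.2, "brightness": 10}))  # -> [130, 250]
```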
Optionally, the computer program instructions stored on the storage medium corresponding to the step of determining M output parameter values of M presentation-related output parameters that match the at least one piece of video information, when executed, specifically perform the following steps:
obtaining N dominant tones corresponding to N preset presentation effects;
selecting, based on the at least one piece of video information and the N dominant tones, the M output parameter values from a first presentation effect among the N preset presentation effects, wherein the first presentation effect is specifically a presentation effect whose dominant tone matches the at least one piece of video information.
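Matching the video's first tone against the N preset dominant tones can be sketched as a nearest-hue lookup with circular distance (so 350° and 10° count as 20° apart). The preset names and hue values are illustrative assumptions:

```python
PRESET_EFFECTS = {        # N preset presentation effects and their dominant tones (degrees)
    "warm_sunset": 30.0,
    "fresh_green": 120.0,
    "cool_ocean": 210.0,
}

def match_preset_effect(video_hue):
    """Return the preset effect whose dominant tone is circularly
    closest to the video's first tone."""
    def dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(PRESET_EFFECTS, key=lambda name: dist(PRESET_EFFECTS[name], video_hue))

print(match_preset_effect(200.0))  # -> cool_ocean
```

The M output parameter values would then be read out of the selected preset.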
Optionally, the computer program instructions stored on the storage medium corresponding to the step of determining M output parameter values of M presentation-related output parameters that match the at least one piece of video information, when executed, specifically perform the following steps:
searching the preset video resource;
determining, from the preset video resource, a third video that matches the at least one piece of video information;
extracting the values of the M output parameters in the third video as the M output parameter values.
Optionally, the computer program instructions stored on the storage medium corresponding to the step of editing the first video to be edited based on the at least one piece of video information to obtain a first video having a first presentation effect, when executed, specifically perform the following steps:
determining a first cover that matches the at least one piece of video information;
adding the first cover to the first video as its video cover;
obtaining a first video having the first cover as its video cover.
Optionally, the computer program instructions stored on the storage medium corresponding to the step of determining a first cover that matches the at least one piece of video information, when executed, specifically perform the following steps:
searching preset cover templates;
selecting, from the preset cover templates, a first cover that matches the at least one piece of video information.
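Selecting a cover template can follow the same keyword-overlap pattern as the audio match. The template names and keyword sets below are hypothetical, and the fallback template is an assumption:

```python
COVER_TEMPLATES = [
    {"name": "travel_postcard", "keywords": {"beach", "mountain", "trip"}},
    {"name": "birthday_banner", "keywords": {"party", "cake", "birthday"}},
    {"name": "plain_title",     "keywords": set()},   # generic fallback
]

def choose_cover(info_words):
    """Return the preset cover template sharing the most keywords with the
    video information, falling back to a plain template."""
    words = set(info_words)
    best = max(COVER_TEMPLATES, key=lambda t: len(t["keywords"] & words))
    return best["name"] if best["keywords"] & words else "plain_title"

print(choose_cover(["beach", "trip"]))  # -> travel_postcard
```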
Optionally, the computer program instructions stored on the storage medium corresponding to the step of determining a first cover that matches the at least one piece of video information, when executed, specifically perform the following steps:
searching the preset video resource;
determining, from the preset video resource, a fourth video that matches the at least one piece of video information;
extracting the cover in the fourth video as the first cover.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these changes and variations.

Claims (33)

CN201510614251.3A | Priority date 2015-09-23 | Filing date 2015-09-23 | Information processing method and electronic instrument | Pending | CN105120336A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201510614251.3A | 2015-09-23 | 2015-09-23 | Information processing method and electronic instrument


Publications (1)

Publication Number | Publication Date
CN105120336A | 2015-12-02

Family ID: 54668182

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201510614251.3A | Pending | CN105120336A (en) | 2015-09-23 | 2015-09-23 | Information processing method and electronic instrument

Country Status (1)

Country | Link
CN (1) | CN105120336A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN101390090A (en)* | 2006-02-28 | 2009-03-18 | Microsoft Corp. | Object-level image editing
CN102799684A (en)* | 2012-07-27 | 2012-11-28 | Chengdu Sobey Digital Technology Co., Ltd. | Video-audio file catalogue labeling, metadata storage indexing and searching method
CN103929640A (en)* | 2013-01-15 | 2014-07-16 | Intel Corp. | Techniques for managing video streaming
CN103795897A (en)* | 2014-01-21 | 2014-05-14 | Shenzhen ZTE Mobile Telecom Co., Ltd. | Method and device for automatically generating background music

Cited By (15)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN108366284A (en)* | 2017-01-25 | 2018-08-03 | MStar Semiconductor, Inc. | Image processing device and image processing method
CN107295285B (en)* | 2017-08-11 | 2018-07-27 | Tencent Technology (Shenzhen) Co., Ltd. | Video data processing method, processing device and storage medium
CN110830845A (en)* | 2018-08-09 | 2020-02-21 | UCWeb Inc. | Video generation method and device and terminal equipment
CN110858924A (en)* | 2018-08-22 | 2020-03-03 | Beijing Youku Technology Co., Ltd. | Video background music generation method and device
CN110858924B (en)* | 2018-08-22 | 2021-11-26 | Alibaba (China) Co., Ltd. | Video background music generation method and device and storage medium
CN109168028A (en)* | 2018-11-06 | 2019-01-08 | Beijing Dajia Internet Information Technology Co., Ltd. | Video generation method, device, server and storage medium
CN109992697A (en)* | 2019-03-27 | 2019-07-09 | Lenovo (Beijing) Ltd. | Information processing method and electronic device
WO2021147949A1 (en)* | 2020-01-21 | 2021-07-29 | Video recommendation method and apparatus
US11546663B2 | 2020-01-21 | 2023-01-03 | Beijing Dajia Internet Information Technology Co., Ltd. | Video recommendation method and apparatus
CN111314771B (en)* | 2020-03-13 | 2021-08-27 | Tencent Technology (Shenzhen) Co., Ltd. | Video playing method and related equipment
CN111314771A (en)* | 2020-03-13 | 2020-06-19 | Tencent Technology (Shenzhen) Co., Ltd. | Video playing method and related equipment
CN111462281A (en)* | 2020-03-31 | 2020-07-28 | Beijing Chuangxin Journey Network Technology Co., Ltd. | Poster generation method, device, equipment and storage medium
CN111462281B (en)* | 2020-03-31 | 2023-06-13 | Beijing Chuangxin Journey Network Technology Co., Ltd. | Poster generation method, device, equipment and storage medium
WO2021258866A1 (en)* | 2020-06-23 | 2021-12-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and system for generating a background music for a video
CN112399261A (en)* | 2021-01-19 | 2021-02-23 | Zhejiang Koubei Network Technology Co., Ltd. | Video data processing method and device

Similar Documents

Publication | Title
CN105120336A (en) | Information processing method and electronic instrument
US20100094441A1 (en) | Image selection apparatus, image selection method and program
US8421819B2 (en) | Pillarboxing correction
JP5341755B2 (en) | Determining environmental parameter sets
CN101300567B (en) | Method for media sharing and authoring on the web
CN103686344B (en) | Strengthen video system and method
JP4125140B2 (en) | Information processing apparatus, information processing method, and program
US8542982B2 (en) | Image/video data editing apparatus and method for generating image or video soundtracks
US10609794B2 (en) | Enriching audio with lighting
CN104581380A (en) | Information processing method and mobile terminal
CN107562680A (en) | Data processing method, device and terminal device
KR20110050463A (en) | Method and apparatus for creating an image collection
KR20070095431A (en) | Creating a multimedia presentation
US10104356B2 (en) | Scenario generation system, scenario generation method and scenario generation program
US20130077937A1 (en) | Apparatus and method for producing remote streaming audiovisual montages
EP3671487A2 (en) | Generation of a video file
US20180053531A1 (en) | Real time video performance instrument
CN117876534A (en) | Poster generation method, poster generation device and storage medium
JP2007525900A (en) | Method and apparatus for locating content in a program
US20140286624A1 (en) | Method and apparatus for personalized media editing
JP7531314B2 (en) | Information processing device, control method for information processing device, and program
US20240430544A1 (en) | Method for clipping video and electronic device
Hua et al. | LazyCut: content-aware template-based video authoring
GB2525841A (en) | Image modification
WO2004081940A1 (en) | A method and apparatus for generating an output video sequence

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication
RJ01 | Rejection of invention patent application after publication (application publication date: 2015-12-02)

