CN108055490A - Video processing method and apparatus, mobile terminal and storage medium - Google Patents

Video processing method and apparatus, mobile terminal, and storage medium

Info

Publication number
CN108055490A
CN108055490A (application CN201711009668.2A)
Authority
CN
China
Prior art keywords
video data
audio
source
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711009668.2A
Other languages
Chinese (zh)
Other versions
CN108055490B (en)
Inventor
刘飞跃
田东渭
贾松辉
郭伟
王程博
张志刚
杨玉奇
周朗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing environment and Wind Technology Co., Ltd.
Original Assignee
Beijing Chuan Shang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chuan Shang Technology Co Ltd
Priority to CN201711009668.2A
Publication of CN108055490A
Application granted
Publication of CN108055490B
Status: Active
Anticipated expiration

Abstract

Embodiments of the present invention provide a video processing method and apparatus, a mobile terminal, and a storage medium, relating to the field of mobile communication technology. The video processing method is applied to a mobile terminal and includes: obtaining source audio/video data; recording target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data; and presenting the target audio data and/or target video data in comparison with the source audio/video data. The embodiments of the present invention implement an audio/video comparison function on the mobile terminal and meet user needs.

Description

Video processing method and apparatus, mobile terminal, and storage medium
Technical field
The present invention relates to the field of mobile communication technology, and in particular to a video processing method and apparatus, a terminal device, and a storage medium.
Background technology
With the development of mobile communication technology, mobile terminals such as mobile phones have become increasingly popular and bring great convenience to people's daily life, study, and work.
These mobile terminals are usually equipped with a camera, so that a user can use the mobile terminal to take photos, record videos, and so on. In addition, these mobile terminals can be installed with various applications, so that the user can use the applications in the mobile terminal to perform required operations, for example, playing games through a game application, or publishing or playing audio and video through a video application.
Summary of the invention
The present invention provides a video processing method, together with a corresponding video processing apparatus, mobile terminal, and storage medium, so as to implement an audio/video comparison function on the mobile terminal and meet user needs.
According to one aspect of the invention, a video processing method applied to a mobile terminal is provided. The method includes: obtaining source audio/video data; recording target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data; and presenting the target audio data and/or target video data in comparison with the source audio/video data.
Optionally, recording target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data includes: separating the source audio data from the source audio/video data, playing the source audio data, and recording target video data through a camera; and/or separating the source video data from the source audio/video data, playing the source video data in an interface, and recording target audio data through a microphone.
Optionally, recording target video data according to the source audio data and/or source video data in the source audio/video data includes: playing the source audio data and source video data in the source audio/video data in an interface, and recording target video data through a camera.
Optionally, playing the source audio data and source video data in the source audio/video data in the interface and recording target video data through a camera includes: dividing the interface into a first area and a second area; playing the source video data in the first area; and recording target video data through the camera and displaying the recorded target video data in the second area.
Optionally, presenting the target audio data and/or target video data in comparison with the source audio/video data includes: compositing the source audio data in the source audio/video data with the recorded target video data into target audio/video data; compositing the source audio/video data and the target audio/video data to obtain comparison audio/video data; and displaying the comparison audio/video data.
Optionally, presenting the target audio data and/or target video data in comparison with the source audio/video data includes: compositing the recorded target audio data and target video data into target audio/video data; compositing the source audio/video data and the target audio/video data to obtain comparison audio/video data; and displaying the comparison audio/video data.
Optionally, compositing the source audio/video data and the target audio/video data to obtain the comparison audio/video data includes: obtaining, from the source video data, each frame of source image data corresponding to at least one segment time; obtaining, from the target video data, each frame of target image data corresponding to the at least one segment time; and compositing the frames of source image data and target image data in chronological order to obtain each frame of image data in the comparison audio/video data.
Optionally, the method further includes: arranging, according to a preset rule, the segment times corresponding to the source video data and the segment times corresponding to the target video data in an alternating manner.
Optionally, presenting the target audio data and/or target video data in comparison with the source audio/video data includes: compositing the source video data in the source audio/video data with the recorded target audio data into target audio/video data; compositing the source audio/video data and the target audio/video data to obtain comparison audio/video data; and displaying the comparison audio/video data.
Optionally, compositing the source audio/video data and the target audio/video data to obtain the comparison audio/video data includes: obtaining, from the source audio data, a source audio segment corresponding to at least one segment time; obtaining, from the target audio data, a target audio segment corresponding to the at least one segment time; and compositing the source audio segments and target audio segments in chronological order to obtain the audio data in the comparison audio/video data.
Optionally, presenting the target audio data and/or target video data in comparison with the source audio/video data includes: setting a third area and a fourth area in the interface; and performing comparison display of the videos by displaying the source video data in the third area and displaying the target video data in the fourth area.
Optionally, the method further includes: arranging, according to a preset rule, the segment times corresponding to the source audio data and the segment times corresponding to the target audio data in an alternating manner.
According to another aspect of the present invention, a video processing apparatus applied to a mobile terminal is provided. The apparatus includes: an acquisition module, configured to obtain source audio/video data; a recording module, configured to record target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data; and a comparison display module, configured to present the target audio data and/or target video data in comparison with the source audio/video data.
Optionally, the recording module includes: a video recording submodule, configured to separate the source audio data from the source audio/video data, play the source audio data, and record target video data through a camera; and an audio recording submodule, configured to separate the source video data from the source audio/video data, play the source video data in an interface, and record target audio data through a microphone.
Optionally, the recording module is specifically configured to play the source audio data and source video data in the source audio/video data in an interface, and record target video data through a camera.
Optionally, the recording module includes: an interface division submodule, configured to divide the interface into a first area and a second area; a source video playback submodule, configured to play the source video data in the first area; and a video recording submodule, configured to record target video data through a camera and display the recorded target video data in the second area.
Optionally, the comparison display module includes: a target compositing submodule, configured to composite the source audio data in the source audio/video data with the recorded target video data into target audio/video data; a comparison compositing submodule, configured to composite the source audio/video data and the target audio/video data to obtain comparison audio/video data; and a comparison display submodule, configured to display the comparison audio/video data.
Optionally, the comparison display module includes: a target compositing submodule, configured to composite the recorded target audio data and target video data into target audio/video data; a comparison compositing submodule, configured to composite the source audio/video data and the target audio/video data to obtain comparison audio/video data; and a comparison display submodule, configured to display the comparison audio/video data.
Optionally, the comparison compositing submodule includes: a source video acquisition unit, configured to obtain, from the source video data, each frame of source image data corresponding to at least one segment time; a target video acquisition unit, configured to obtain, from the target video data, each frame of target image data corresponding to the at least one segment time; and a video compositing unit, configured to composite the frames of source image data and target image data in chronological order to obtain each frame of image data in the comparison audio/video data.
Optionally, the apparatus further includes: a setting module, configured to arrange, according to a preset rule, the segment times corresponding to the source video data and the segment times corresponding to the target video data in an alternating manner.
Optionally, the comparison display module includes: a target compositing submodule, configured to composite the source video data in the source audio/video data with the recorded target audio data into target audio/video data; a comparison compositing submodule, configured to composite the source audio/video data and the target audio/video data to obtain comparison audio/video data; and a comparison display submodule, configured to display the comparison audio/video data.
Optionally, the comparison compositing submodule includes: a source audio acquisition unit, configured to obtain, from the source audio data, a source audio segment corresponding to at least one segment time; a target audio acquisition unit, configured to obtain, from the target audio data, a target audio segment corresponding to the at least one segment time; and an audio compositing unit, configured to composite the source audio segments and target audio segments in chronological order to obtain the audio data in the comparison audio/video data.
Optionally, the comparison display module includes: an area setting submodule, configured to set a third area and a fourth area in the interface; and a comparison display submodule, configured to perform comparison display of the videos by displaying each frame of source image data in the source video data in the third area and displaying the target video data in the fourth area.
Optionally, the apparatus further includes: a setting module, configured to arrange, according to a preset rule, the segment times corresponding to the source audio data and the segment times corresponding to the target audio data in an alternating manner.
According to another aspect of the invention, a mobile terminal is provided, including: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the mobile terminal to perform the video processing method described in one or more of the embodiments of the present invention.
An embodiment of the present invention further provides a machine-readable medium having instructions stored thereon that, when executed by one or more processors, cause a mobile terminal to perform the video processing method described in one or more of the embodiments of the present invention.
The video processing method and apparatus according to the present invention are applied in a mobile terminal, and can record target audio data and/or target video data according to the obtained source audio/video data, and then present the recorded target audio data and/or target video data in comparison with the source audio/video data. This implements an audio/video comparison function on the mobile terminal, so that a user can use the mobile terminal to compare the recorded audio and/or video with the source audio/video, achieving the advantageous effect of meeting user needs.
The above description is only an overview of the technical solution of the present invention. In order to better understand the technical means of the present invention so that it can be practiced according to the content of the specification, and to make the above and other objects, features, and advantages of the present invention clearer and easier to understand, specific embodiments of the present invention are set forth below.
Description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become apparent to those of ordinary skill in the art. The accompanying drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference numerals are used to refer to the same parts. In the drawings:
Fig. 1 shows a flow chart of the steps of a video processing method according to an embodiment of the present invention;
Fig. 2 shows a flow chart of the steps of a video processing method according to another embodiment of the present invention;
Fig. 3 shows a structural block diagram of a video processing apparatus according to an embodiment of the present invention;
Fig. 4 shows a structural block diagram of a video processing apparatus according to another embodiment of the present invention;
Fig. 5 schematically shows a block diagram of a server for performing the method according to the invention;
Fig. 6 schematically shows a storage unit for retaining or carrying program code implementing the method according to the invention; and
Fig. 7 shows a block diagram of part of the structure of a mobile terminal provided in an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be more thoroughly understood and the scope of the present disclosure can be fully conveyed to those skilled in the art.
The embodiments of the present invention can be applied to a mobile terminal on which a video application is installed, so that a user can process audio/video data through the video application. For example, the user can use the video application in the mobile terminal to record a video, upload and publish a video, and so on, and can also use the video application to obtain and play video data, such as obtaining and playing video data published by other users.
Audio/video data usually includes video data and audio data. The video data can be used to play the video pictures corresponding to the video data, and may specifically include one or more frames of image data, which can be used to display the video pictures corresponding to the video data. The audio data can be used to play the audio in the video data, such as the music in a video.
To provide an audio/video comparison service to the user, the mobile terminal in the embodiments of the present invention, after obtaining source audio/video data, can record target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data. For example, it can record target video data against the source audio data in the source audio/video data, so that the user can record the corresponding target audio/video for the source audio data in the source audio/video data. It can then present the recorded target audio data and/or target video data in comparison with the source audio/video data, thereby implementing an audio/video comparison function on the mobile terminal, so that the user can use the mobile terminal to compare the recorded target video and/or target audio with the obtained source audio/video, meeting the user's audio/video comparison needs.
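To make this flow concrete, the following is a minimal, self-contained Kotlin sketch of the three steps described above (obtain the source audio/video, record the target against it, composite and display the comparison). All class and function names are illustrative assumptions, not the patent's actual implementation, and the recording and compositing bodies are placeholders.

```kotlin
// Illustrative sketch only: assumed names, placeholder implementations.
data class SourceAv(val uri: String, val durationMs: Long)
data class TargetAv(val audioPath: String?, val videoPath: String?)
data class ComparisonAv(val path: String)

fun obtainSource(uri: String): SourceAv = SourceAv(uri, durationMs = 90_000)

fun recordTarget(source: SourceAv): TargetAv {
    // In a real app this would drive the camera and/or microphone while the
    // source plays; here it only returns placeholder file paths.
    return TargetAv(audioPath = "/sdcard/target.aac", videoPath = "/sdcard/target.mp4")
}

fun composeComparison(source: SourceAv, target: TargetAv): ComparisonAv =
    ComparisonAv(path = "/sdcard/comparison.mp4")

fun main() {
    val source = obtainSource("content://video/source.mp4") // step 1: obtain source A/V
    val target = recordTarget(source)                       // step 2: record target A/V
    val comparison = composeComparison(source, target)      // step 3: composite and display
    println("Comparison ready at ${comparison.path}")
}
```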
Referring to Fig. 1, a flow chart of the steps of a video processing method according to an embodiment of the present invention is shown. The video processing method can be applied to a mobile terminal and may specifically include the following steps:
Step 102: obtain source audio/video data.
In the embodiments of the present invention, the source audio/video data represents the obtained audio/video, which may be, for example, audio/video pre-stored in the mobile terminal, audio/video currently downloaded by the mobile terminal from a server, or audio/video found through the video application in the mobile terminal; the embodiments of the present invention are not limited in this respect.
Step 104: record target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data.
In the embodiments of the present invention, after the source audio/video data is obtained, the mobile terminal can play the source video data and/or source audio data in the source audio/video data, so that the user can watch the video pictures corresponding to the source video data and/or hear the audio in the source audio/video data. For example, the mobile terminal or the video application in the mobile terminal can separate the source audio data from the source audio/video data and then play the audio data, so that the user can hear the audio in the source audio/video data; similarly, the source video data can be separated from the source audio/video data and then played, so that the user can watch the video pictures corresponding to the source video data; as another example, the source audio/video data can be played directly on the interface of the video application, and so on.
Specifically, while playing, the camera built into the mobile terminal and/or a camera connected to the mobile terminal can be started, so as to record target video data through the started camera. The user can thus shoot the desired target video against the currently playing source audio data and/or source video data, which makes it convenient to later compare the recorded target video data with the source video data in the source audio/video data. The recorded target video data represents the video pictures captured by the camera, and may specifically include each frame of target image data recorded while the source audio/video data, or the audio data in the source audio/video data, is playing.
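As an illustration of the camera recording step, the following is a sketch, assuming an Android environment, of configuring a MediaRecorder that captures the target video while the source plays. The camera wiring (for example via the camera2 API, whose capture session would feed the recorder's surface), permission handling, and the output path are assumptions, not details specified by the patent.

```kotlin
import android.media.MediaRecorder

// A sketch of preparing a video recorder for the target video. The camera
// pipeline that feeds the SURFACE video source is omitted.
fun newTargetVideoRecorder(outputPath: String): MediaRecorder {
    val recorder = MediaRecorder()
    recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE) // fed by the camera pipeline
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264)
    recorder.setVideoSize(1280, 720)
    recorder.setVideoFrameRate(30)
    recorder.setOutputFile(outputPath)
    recorder.prepare()
    return recorder // call start() when source playback begins, stop() when it ends
}
```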
Similarly, the microphone built into the mobile terminal and/or a microphone connected to the mobile terminal can also be started, so as to record target audio data through the started microphone. The user can thus record the desired target audio against the currently playing source audio data and/or source video data, which makes it convenient to later compare the recorded target audio data with the source audio data in the source audio/video data.
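The microphone side can be sketched in the same hedged way, again assuming an Android environment: a MediaRecorder configured for audio only, started when source playback begins. The format, encoder, and output path are assumptions.

```kotlin
import android.media.MediaRecorder

// A sketch of preparing an audio-only recorder for the target audio.
// RECORD_AUDIO permission handling is omitted.
fun newTargetAudioRecorder(outputPath: String): MediaRecorder {
    val recorder = MediaRecorder()
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC)
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
    recorder.setOutputFile(outputPath)
    recorder.prepare()
    return recorder // start() when source playback begins, stop() when it ends
}
```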
Step 106: present the target audio data and/or target video data in comparison with the source audio/video data.
In the embodiments of the present invention, the currently recorded target audio data and/or target video data can be compared with the source audio/video data to obtain corresponding comparison audio/video data. For example, the source video data can be compared with the target video data to generate corresponding comparison video data; as another example, the source audio data can be compared with the target audio data to generate corresponding comparison audio data; as yet another example, the target video data and/or target audio/video data can be composited with the source audio/video data to obtain comparison audio/video data, and so on. The obtained comparison audio/video data can then be displayed on the interface, so that the user can view the result of comparing the recorded target video and/or target audio with the obtained source audio/video, thereby meeting the user's audio/video comparison needs.
In an optional implementation, in the embodiments of the present invention, the currently recorded target video data can be composited with the source audio data in the source audio/video data to obtain the composited target audio/video data. The target audio/video data represents the target audio/video recorded by the user based on the source audio data in the source audio/video data, for example an audio/video the user records against the music in a source audio/video published by another user. Then the source audio/video data can be compared with the target audio/video data to generate corresponding comparison audio/video data; for example, the source audio/video data and the target audio/video data can be composited to obtain the comparison audio/video data. The comparison audio/video data can then be displayed on the interface, so that the user can view the result of comparing the recorded target video with the obtained source video, thereby meeting the user's video comparison needs.
In another optional implementation, in the embodiments of the present invention, the currently recorded target audio data can be composited with the source video data in the source audio/video data to obtain the composited target audio/video data. The target audio/video data represents the target audio/video recorded by the user based on the source video data in the source audio/video data, for example an audio/video the user records against the video pictures in a source audio/video published by another user. Then the source audio/video data can be compared with the target audio/video data to generate corresponding comparison audio/video data; for example, the source audio/video data and the target audio/video data can be composited to obtain the comparison audio/video data. The comparison audio/video data can then be displayed on the interface, so that the user can obtain the result of comparing the recorded target audio with the obtained source audio, thereby meeting the user's audio comparison needs.
In yet another optional implementation, in the embodiments of the present invention, the currently recorded target audio data and target video data can be composited to obtain the composited target audio/video data. The target audio/video data represents the target audio/video recorded by the user against the source audio/video data, for example a target audio/video the user records against a source audio/video published by another user. Then the source audio/video data can be compared with the target audio/video data to generate corresponding comparison audio/video data, and the comparison audio/video data can be displayed on the interface, so that the user can obtain the result of comparing the recorded target audio/video with the obtained source audio/video, thereby meeting the user's audio/video comparison needs.
In an optional embodiment of the present invention, recording target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data may include: separating the source audio data from the source audio/video data, playing the source audio data, and recording target video data through a camera; and/or separating the source video data from the source audio/video data, playing the source video data in an interface, and recording target audio data through a microphone.
Specifically, after the source audio/video data is obtained, the source audio data can be separated directly from the source audio/video data and then played, without playing the source video data in the source audio/video data, that is, without displaying the video pictures corresponding to the source audio/video data. The user can thus hear the audio corresponding to the audio data in the source audio/video data, while the stuttering that might be caused by playing the video pictures corresponding to the source video data is avoided, ensuring smooth playback of the audio data and improving the user experience. While the source audio data is playing, each frame of target image data corresponding to the video pictures currently being shot by the user can be recorded through the camera, that is, the target video data recorded against the currently playing source audio data can be obtained, so that the user can record the corresponding target video data while the audio data is playing, which makes it convenient to later composite the source audio data and the recorded target video data into target audio/video data. For example, the source audio data in the source audio/video data can be composited with each frame of the currently recorded target image data to obtain the composited target audio/video data, that is, to generate the target audio/video recorded by the user against the source audio data in the source audio/video data, so that the user can compare the recorded target audio/video with the source audio/video, meeting the user's video comparison needs.
Similarly, in the embodiments of the present invention, after the source audio/video data is obtained, the source video data can be separated directly from the source audio/video data and then played, without playing the source audio data in the source audio/video data. The user can thus watch the video pictures corresponding to the source video data on the interface, while the stuttering that might be caused by playing the source audio data in the source audio/video data is avoided, ensuring smooth playback of the video data and improving the user experience. While the source video data is playing, the target audio data corresponding to the target audio currently being recorded by the user can be captured through the microphone, that is, the target audio data recorded against the currently playing source video data can be obtained, so that the user can record the corresponding target audio data while the video data is playing, which makes it convenient to later composite the source video data and the recorded target audio data into target audio/video data. For example, the source video data in the source audio/video data can be composited with the currently recorded target audio data to obtain the composited target audio/video data, that is, to generate the target audio/video recorded by the user against the source video data in the source audio/video data, so that the user can compare the recorded target audio/video with the source audio/video, meeting the user's audio comparison needs.
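The compositing of a source track with a recorded target track into a single target audio/video file could, for example, be done by remuxing the two tracks into one container. The following Kotlin sketch, assuming an Android environment and container-compatible codecs, copies the video track of the source file and the audio track of the recorded target file into one MP4 with MediaMuxer; it performs no re-encoding and is not the patent's actual implementation. (Compositing source audio with recorded target video works the same way with the prefixes swapped.)

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaFormat
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Sketch: remux source video track + recorded target audio track into one MP4.
fun muxTargetAv(sourceVideoPath: String, targetAudioPath: String, outPath: String) {
    val muxer = MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)

    // Select the first track whose MIME type matches, and register it with the muxer.
    fun copyTrack(path: String, mimePrefix: String): Pair<MediaExtractor, Int>? {
        val extractor = MediaExtractor()
        extractor.setDataSource(path)
        for (i in 0 until extractor.trackCount) {
            val format = extractor.getTrackFormat(i)
            if (format.getString(MediaFormat.KEY_MIME)?.startsWith(mimePrefix) == true) {
                extractor.selectTrack(i)
                return extractor to muxer.addTrack(format)
            }
        }
        extractor.release()
        return null
    }

    val video = copyTrack(sourceVideoPath, "video/") ?: return
    val audio = copyTrack(targetAudioPath, "audio/") ?: return
    muxer.start()

    val buffer = ByteBuffer.allocate(1 shl 20)
    val info = MediaCodec.BufferInfo()
    for ((extractor, muxerTrack) in listOf(video, audio)) {
        while (true) {
            val size = extractor.readSampleData(buffer, 0)
            if (size < 0) break
            info.set(0, size, extractor.sampleTime, extractor.sampleFlags)
            muxer.writeSampleData(muxerTrack, buffer, info)
            extractor.advance()
        }
        extractor.release()
    }
    muxer.stop()
    muxer.release()
}
```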
Of course, after the source video data and source audio data are separated from the source audio/video data, it is also possible to record the corresponding target video data while the source audio data is playing and while the source video data is playing; to record the corresponding target video data and target audio data while the source audio data is playing; or to record the corresponding target video data and target audio data while the source video data is playing, and so on; the embodiments of the present invention are not limited in this respect.
In actual processing, optionally, after the source audio/video data is obtained, it is also possible not to separate the source audio data and source video data for playback, but to play the source audio/video data directly on the interface of the mobile terminal or the video application, so that the user can watch the corresponding video pictures on the interface while listening to the audio in the source audio/video data. At the same time, each frame of target image data corresponding to the video pictures currently being shot by the user can be recorded through the camera, so that the user can record the corresponding target video data while the audio/video data is playing; and each frame of target audio data corresponding to the target audio currently being recorded by the user can be captured through the microphone, so that the user can record the corresponding target audio data while the audio/video data is playing.
Referring to Fig. 2, a flow chart of the steps of a video processing method according to another embodiment of the present invention is shown, which may specifically include the following steps:
Step 202: obtain source audio/video data.
Step 204: play the source audio data and source video data in the source audio/video data in an interface, and record target video data and/or target audio data.
In an optional embodiment of the present invention, recording target video data according to the source audio data and/or source video data in the source audio/video data may include: playing the source audio data and source video data in the source audio/video data in the interface, and recording target video data through a camera. That is, the source audio/video data is played in the interface, and target video data can be recorded through the camera at the same time, so that target audio/video data can later be composited from the recorded target video data, for example by compositing the audio data in the source audio/video data with the recorded target video data into target audio/video data.
Of course, during playback of the source audio/video data, the embodiments of the present invention may optionally also record target audio data through a microphone, so that target audio/video data can later be composited from the recorded target audio data. In another optional embodiment of the present invention, recording target audio data according to the source audio data and/or source video data in the source audio/video data may include: playing the source audio data and source video data in the source audio/video data in the interface, and recording target audio data through a microphone.
For example, while using the mobile terminal, the user can start the video application in the mobile terminal and search in the video application for source audio/video data published or uploaded by other users. After obtaining the source audio/video data, the video application can play it on the interface, so that the user can watch the source audio/video data published by other users. At the same time, target video data can be recorded through the camera in the mobile terminal, so that during playback of the source audio/video data the user can record each frame of image data corresponding to the video pictures the user wants to shoot; target audio data can also be recorded through the microphone, so that the user can record the target audio and target video simultaneously while the source audio/video data is playing.
In an optional embodiment of the present invention, playing the source audio data and source video data in the source audio/video data in the interface and recording target video data through a camera may include: dividing the interface into a first area and a second area; playing the source audio/video data in the first area; and recording target video data through the camera and displaying the recorded target video data in the second area. Specifically, in the embodiments of the present invention, the interface of the video application can be divided into two display areas. One of them may be called the first area and is used to play the obtained source audio/video data; the other may be called the second area and is used to display the video pictures corresponding to the target video data being recorded through the camera. For example, after obtaining the source audio/video data, the video application can play the source audio/video data in the first area and simultaneously obtain the target video data currently being recorded through the camera, so as to display the video pictures corresponding to the target video data being recorded in the second area, so that the user can watch the source audio/video data and the video pictures corresponding to the recorded target video data at the same time.
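The two-area interface can be sketched as follows, assuming an Android Activity: a vertical layout whose upper view (the first area) plays the source video and whose lower view (the second area) is the surface on which the camera preview of the target video being recorded is shown. The camera/recording wiring and the source URI are assumptions.

```kotlin
import android.app.Activity
import android.net.Uri
import android.os.Bundle
import android.view.TextureView
import android.widget.LinearLayout
import android.widget.VideoView

// Sketch of dividing the interface into a first area (source playback) and a
// second area (camera preview of the target recording).
class CompareRecordActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val root = LinearLayout(this).apply { orientation = LinearLayout.VERTICAL }

        // First area: plays the source video data.
        val sourceView = VideoView(this)
        root.addView(sourceView, LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT, 0, 1f))

        // Second area: surface on which the camera preview (the target video
        // being recorded) is displayed.
        val previewView = TextureView(this)
        root.addView(previewView, LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT, 0, 1f))

        setContentView(root)

        sourceView.setVideoURI(Uri.parse("content://media/source.mp4")) // illustrative URI
        sourceView.start()
        // startCameraRecording(previewView)  // hypothetical helper, not shown
    }
}
```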
Step 206: present the target audio data and/or target video data in comparison with the source audio/video data.
In an optional embodiment of the present invention, presenting the target audio data and/or target video data in comparison with the source audio/video data may include: compositing the source audio data in the source audio/video data with the recorded target video data into target audio/video data; compositing the source audio/video data and the target audio/video data to obtain comparison audio/video data; and displaying the comparison audio/video data.
In the embodiments of the present invention, optionally, after the source audio data and the recorded target video data are composited into target audio/video data, the source audio/video data can be compared with the target audio/video data. For example, the source audio/video data and the target audio/video data can be composited, for instance by the video application in the mobile terminal, to obtain comparison audio/video data; the comparison audio/video data is then displayed, so that the user can view the result of comparing the recorded target video with the obtained source video. For example, each frame of source image data in the source video data and each frame of target image data in the target video data can be displayed on the interface at the same time, so that the user sees the video pictures corresponding to the frames of source image data and the video pictures corresponding to the frames of target image data simultaneously, implementing a comparison between the source video and the target video. As another example, the source image data and target image data corresponding to the same audio segment can be displayed alternately on the interface in chronological order, allowing the user to compare the video pictures of the source video data and the video pictures of the target video data corresponding to the same audio segment, and so on.
Of course, the target audio/video data may also be composited in other ways and then used for the audio/video comparison. For example, the recorded target audio data and target video data may be composited into target audio/video data, or the recorded target audio data and the source video data may be composited into target audio/video data, and so on; the embodiments of the present invention are not limited in this respect.
In another optional embodiment of the present invention, presenting the target audio data and/or target video data in comparison with the source audio/video data may include: compositing the source video data in the source audio/video data with the recorded target audio data into target audio/video data; compositing the source audio/video data and the target audio/video data to obtain comparison audio/video data; and displaying the comparison audio/video data. For example, the source audio data and target audio data corresponding to the same source video segment can be played alternately on the mobile terminal in chronological order, allowing the user to compare the source audio and target audio corresponding to the same video segment, thereby implementing an audio comparison between the source audio and the target audio and meeting the user's audio comparison needs.
In yet another optional embodiment of the present invention, presenting the target audio data and/or target video data in comparison with the source audio/video data may include: compositing the recorded target audio data and target video data into target audio/video data; compositing the source audio/video data and the target audio/video data to obtain comparison audio/video data; and displaying the comparison audio/video data. This implements an audio/video comparison between the source audio/video and the target audio/video, so that the user can compare the recorded target audio and target video with the source audio/video at the same time, meeting the user's audio/video comparison needs.
In an optional implementation, the comparison audio/video data in the embodiments of the present invention may include the audio data and each frame of source image data in the source audio/video data, as well as each frame of target image data in the target video data, so that while the comparison audio/video data is playing, the video pictures corresponding to each frame of image data in the source video data and the video pictures corresponding to each frame of target image data in the target video data can be displayed on the interface. For example, the comparison audio/video data may include the obtained source video data and source audio data together with the target video data; as another example, the comparison audio/video data may include the source video data in the source audio/video data together with the recorded target audio data and target video data, and so on.
In an optional embodiment of the present invention, presenting the target audio data and/or target video data in comparison with the source audio/video data may include: setting a third area and a fourth area on the interface; and performing comparison display of the videos by displaying each frame of source image data in the source video data in the third area and displaying the target video data in the fourth area. For example, the third area and the fourth area can be set on the interface of the video application; when the comparison video data is displayed, the video pictures corresponding to each frame of source image data in the source video data are displayed in the third area, and the video pictures corresponding to each frame of target image data in the target video data are displayed in the fourth area, while the audio data in the source audio/video data can be played, making it convenient for the user to compare the source video and target video corresponding to the same audio data.
Optionally, in the embodiments of the present invention, the segment times corresponding to the source video data and the segment times corresponding to the target video data may also be arranged alternately according to a preset rule, so that when the comparison audio/video data is composited, each frame of source image data and target image data is composited in chronological order to obtain each frame of image data in the comparison audio/video data. The preset rule may be configured according to the source audio data in the source audio/video data, or according to the target audio data; the embodiments of the present invention are not limited in this respect.
In an optional example, the source audio data in the source audio/video data can be divided according to a preset rule, for example according to pauses in the audio data, and the playing duration corresponding to each audio segment after the division can be determined. For example, when the source audio data is divided into three audio segments, the playing durations of the three segments may be equal, e.g. all three are 30 seconds; or they may be unequal, e.g. the first segment is 20 seconds, the second is 25 seconds, and the third is 40 seconds. Based on the playing duration corresponding to each audio segment, the segment times corresponding to the source video data and the segment times corresponding to the target video data can then be arranged alternately. Continuing the first example above, the first segment time corresponding to the source video data is set to 0-30 seconds after the comparison audio/video data starts playing, the first segment time corresponding to the target video data is set to 30-60 seconds, the second segment time corresponding to the source video data is set to 60-90 seconds, the second segment time corresponding to the target video data is set to 90-120 seconds, and so on, until the segment times corresponding to the source video and the target video are all set.
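The alternating segment arrangement described above can be expressed as a small schedule-building routine. The following self-contained Kotlin sketch (with assumed names) takes the per-segment playing durations and lays source and target segments out alternately on the comparison timeline, reproducing the 0-30 s source, 30-60 s target, 60-90 s source, 90-120 s target pattern from the example.

```kotlin
// Illustrative sketch: assumed names, not the patent's actual data structures.
enum class Origin { SOURCE, TARGET }

data class ScheduledSegment(
    val origin: Origin,
    val srcStartMs: Long,  // position inside the source or target stream
    val srcEndMs: Long,
    val outStartMs: Long   // position on the comparison timeline
)

fun buildAlternatingSchedule(segmentDurationsMs: List<Long>): List<ScheduledSegment> {
    val schedule = mutableListOf<ScheduledSegment>()
    var inPos = 0L   // running position inside the source/target streams
    var outPos = 0L  // running position on the comparison timeline
    for (d in segmentDurationsMs) {
        schedule += ScheduledSegment(Origin.SOURCE, inPos, inPos + d, outPos)
        outPos += d
        schedule += ScheduledSegment(Origin.TARGET, inPos, inPos + d, outPos)
        outPos += d
        inPos += d
    }
    return schedule
}

fun main() {
    // Three 30-second segments give: source 0-30 s, target 30-60 s,
    // source 60-90 s, target 90-120 s, and so on.
    buildAlternatingSchedule(listOf(30_000, 30_000, 30_000)).forEach(::println)
}
```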
In an optional embodiment of the present invention, compositing the source audio/video data and the target audio/video data to obtain the comparison audio/video data may include: obtaining, from the source video data, each frame of source image data corresponding to at least one segment time; obtaining, from the target video data, each frame of target image data corresponding to the at least one segment time; and compositing the frames of source image data and target image data in chronological order to obtain each frame of image data in the comparison audio/video data. Each frame of image data in the comparison audio/video data can then be composited with audio data, for example with one or more source audio segments corresponding to the source audio/video data, or with one or more target audio segments corresponding to the target audio/video data, to obtain the corresponding comparison audio/video data.
In the embodiments of the present invention, according to user needs, for example according to the playing duration of the comparison audio/video data set by the user, each frame of source image data corresponding to one or more segment times can be obtained from the source video data, and each frame of target image data corresponding to one or more segment times can be obtained from the target video data, and the obtained frames of source image data and target image data can then be composited in chronological order to obtain the comparison audio/video data. Of course, it is also possible, according to an audio data fragment cut by the user from the source audio data of the source audio/video data, to obtain each frame of source image data and target image data corresponding to the corresponding segment times from the source audio/video data and the target audio/video data respectively, so as to perform the video comparison; the embodiments of the present invention are not limited in this respect.
Of course, in the embodiments of the present invention, the segment times corresponding to the source audio data and the segment times corresponding to the target audio data may also be arranged alternately according to a preset rule, so that when the comparison audio/video data is composited, the source audio segments and target audio segments are composited in chronological order to obtain the audio data in the comparison audio/video data. The preset rule may be configured according to the source video data in the source audio/video data, or according to the target video data; the embodiments of the present invention are not limited in this respect.
In an optional example, the source video data in the source audio/video data can be divided according to a preset rule, for example according to the playing duration of the video data, and the playing duration corresponding to each video segment after the division can be determined. For example, when the video data is divided into three video segments, the playing durations of the three segments may be equal, e.g. all three are 30 seconds; or they may be unequal, e.g. the first segment is 20 seconds, the second is 25 seconds, and the third is 40 seconds. Based on the playing duration corresponding to each video segment, the segment times corresponding to the source audio data and the segment times corresponding to the target audio data can then be arranged alternately. Continuing the first example above, the first segment time corresponding to the source audio data is set to 0-30 seconds after the comparison audio/video data starts playing, the first segment time corresponding to the target audio data is set to 30-60 seconds, the second segment time corresponding to the source audio data is set to 60-90 seconds, the second segment time corresponding to the target audio data is set to 90-120 seconds, and so on, until the segment times corresponding to the source audio and the target audio in the comparison audio/video data are all set.
In an optional embodiment of the present invention, compositing the source audio/video data and the target audio/video data to obtain the comparison audio/video data may include: obtaining, from the source audio data, a source audio segment corresponding to at least one segment time; obtaining, from the target audio data, a target audio segment corresponding to the at least one segment time; and compositing the source audio segments and target audio segments in chronological order to obtain the audio data in the comparison audio/video data. The audio data in the comparison audio/video data can then be composited with frames of image data, for example with one or more source video segments corresponding to the source audio/video data, or with one or more target video segments corresponding to the target audio/video data, to obtain the corresponding comparison audio/video data.
In the embodiments of the present invention, according to user needs, for example according to the playing duration of the comparison audio/video data set by the user, source audio segments corresponding to one or more segment times can be obtained from the source audio data, and target audio segments corresponding to one or more segment times can be obtained from the target audio data, and the obtained source audio segments and target audio segments can then be composited in chronological order to obtain the comparison audio/video data. Of course, it is also possible, according to one or more video segments cut by the user from the source video data of the source audio/video data, to obtain the source audio segments and target audio segments corresponding to the corresponding segment times from the source audio/video data and the target audio/video data respectively, so as to perform the audio comparison; the embodiments of the present invention are not limited in this respect.
In the embodiments of the present invention, according to user needs, for example according to the playing duration of the comparison audio/video data set by the user, source audio/video segments (including source audio segments and source video segments) corresponding to one or more segment times can also be obtained from the source audio/video data, and target audio/video segments (including target audio segments and target video segments) corresponding to the one or more segment times can be obtained from the target audio/video data; the obtained source audio/video segments and target audio/video segments are then composited in chronological order to obtain the comparison audio/video data, so that the audio/video comparison can be performed according to the comparison audio/video data. The embodiments of the present invention are not limited in this respect.
For brevity, the method embodiments are all described as a series of combinations of actions, but those skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 3, a structural block diagram of a video processing apparatus embodiment according to an embodiment of the present invention is shown. The video processing apparatus can be applied to a mobile terminal and may specifically include the following modules:
an acquisition module 302, configured to obtain source audio/video data;
a recording module 304, configured to record target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data; and
a comparison display module 306, configured to present the target audio data and/or target video data in comparison with the source audio/video data.
In summary, in the embodiments of the present invention, after source audio/video data is obtained, target audio data and/or target video data can be recorded according to the obtained source audio/video data, and the recorded target audio data and/or target video data can then be presented in comparison with the source audio/video data. This implements an audio/video comparison function on the mobile terminal, so that the user can use the mobile terminal to compare the recorded audio and/or video with the source audio/video, achieving the advantageous effect of meeting user needs.
Referring to Fig. 4, a structural block diagram of a video processing apparatus according to another embodiment of the present invention is shown.
In the embodiments of the present invention, optionally, the recording module 304 may include the following submodules:
a video recording submodule 3042, configured to separate the source audio data from the source audio/video data, play the source audio data, and record target video data through a camera; and
an audio recording submodule 3044, configured to separate the source video data from the source audio/video data, play the source video data in an interface, and record target audio data through a microphone.
In an optional embodiment of the present invention, the recording module 304 is specifically configured to play the source audio data and source video data in the source audio/video data in an interface, and record target video data through a camera.
In an optional embodiment of the present invention, the recording module 304 may include the following submodules:
an interface division submodule 3046, configured to divide the interface into a first area and a second area;
a source video playback submodule 3048, configured to play the source video data in the first area; and
a video recording submodule 3042, configured to record target video data through a camera and display the recorded target video data in the second area.
In an optional embodiment of the present invention, the comparison display module 306 may include the following submodules:
a target compositing submodule 3062, configured to composite the source audio data in the source audio/video data with the recorded target video data into target audio/video data;
a comparison compositing submodule 3064, configured to composite the source audio/video data and the target audio/video data to obtain comparison audio/video data; and
a comparison display submodule 3066, configured to display the comparison audio/video data.
In another optional embodiment of the present invention, the comparison display module 306 may include the following submodules:
a target compositing submodule 3062, configured to composite the recorded target audio data and target video data into target audio/video data;
a comparison compositing submodule 3064, configured to composite the source audio/video data and the target audio/video data to obtain comparison audio/video data; and
a comparison display submodule 3066, configured to display the comparison audio/video data.
In the embodiments of the present invention, optionally, the comparison compositing submodule 3064 may include the following units:
a source video acquisition unit, configured to obtain, from the source video data, each frame of source image data corresponding to at least one segment time;
a target video acquisition unit, configured to obtain, from the target video data, each frame of target image data corresponding to the at least one segment time; and
a video compositing unit, configured to composite the frames of source image data and target image data in chronological order to obtain each frame of image data in the comparison audio/video data.
In an optional embodiment of the present invention, the apparatus further includes: a setting module 308, configured to arrange, according to a preset rule, the segment times corresponding to the source video data and the segment times corresponding to the target video data in an alternating manner.
In another alternative embodiment of the present invention, the comparison display module 306 may include the following submodules:
Target synthesis submodule 3062, configured to synthesize the source video data in the source audio/video data and the recorded target audio data into target audio/video data;
Comparison synthesis submodule 3064, configured to synthesize the source audio/video data and the target audio/video data to obtain comparison audio/video data;
Comparison display submodule 3066, configured to display the comparison audio/video data.
In the embodiment of the present invention, optionally, the comparison synthesis submodule 3064 may include the following units:
Source audio acquiring unit, configured to acquire, from the source audio data, a source audio segment corresponding to at least one split time;
Target audio acquiring unit, configured to acquire, from the target audio data, a target audio segment corresponding to at least one split time;
Audio synthesis unit, configured to synthesize each source audio segment and target audio segment in chronological order to obtain the audio data in the comparison audio/video data. An illustrative sketch of this audio splicing is given below.
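For illustration only, and assuming both tracks have already been decoded to PCM with the same sample rate and channel layout (an assumption of this sketch, not something the embodiment specifies), splicing alternating audio segments could look like this:

```java
import java.io.ByteArrayOutputStream;

public final class AudioSplicer {

    /** Concatenates alternating slices of two identically formatted PCM buffers:
     *  the source for even segments, the recorded target for odd segments.
     *  splitOffsets are ascending, frame-aligned byte offsets shared by both buffers. */
    public static byte[] splice(byte[] sourcePcm, byte[] targetPcm, int[] splitOffsets) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int length = Math.min(sourcePcm.length, targetPcm.length);
        int start = 0;
        for (int i = 0; i <= splitOffsets.length; i++) {
            int end = (i < splitOffsets.length) ? Math.min(splitOffsets[i], length) : length;
            byte[] from = (i % 2 == 0) ? sourcePcm : targetPcm;  // alternate the two streams
            out.write(from, start, end - start);
            start = end;
        }
        return out.toByteArray();
    }
}
```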
In an alternative embodiment of the present invention, the comparison display module 306 may include the following submodules:
Region setting submodule 3068, configured to set a third region and a fourth region in the interface;
Comparison display submodule 3066, configured to display each frame of source image data in the source video data in the third region and display the target video data in the fourth region, so as to perform comparison video display. An illustrative side-by-side playback sketch is given below.
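As one possible illustration (the view types and file paths are assumptions), the third and fourth regions could simply be two players laid out side by side and started together:

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.ViewGroup;
import android.widget.LinearLayout;
import android.widget.VideoView;

public class SideBySideCompareActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        LinearLayout root = new LinearLayout(this);
        root.setOrientation(LinearLayout.HORIZONTAL);

        // Third region: the source video.
        VideoView sourceView = new VideoView(this);
        root.addView(sourceView, new LinearLayout.LayoutParams(
                0, ViewGroup.LayoutParams.MATCH_PARENT, 1f));

        // Fourth region: the recorded target video.
        VideoView targetView = new VideoView(this);
        root.addView(targetView, new LinearLayout.LayoutParams(
                0, ViewGroup.LayoutParams.MATCH_PARENT, 1f));

        setContentView(root);

        sourceView.setVideoPath("/sdcard/source_video.mp4");   // hypothetical paths
        targetView.setVideoPath("/sdcard/target_video.mp4");
        sourceView.start();
        targetView.start();
    }
}
```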
In the embodiment of the present invention, optionally, the apparatus further includes: a setting module 308, configured to arrange, according to a preset rule, the split times corresponding to the source audio data and the split times corresponding to the target audio data in an interleaved manner.
In summary, in the embodiments of the present invention, after the source audio/video data is acquired, target audio data and/or target video data can be recorded according to the source audio data and/or source video data in the source audio/video data, so that the user can record target audio and video for the source audio/video data; the recorded target audio data and/or target video data can then be displayed in comparison with the source audio/video data, thereby realizing the audio/video comparison function of the mobile terminal, so that the user can use the mobile terminal to compare the recorded target video and/or target audio with the acquired source audio and video, meeting the user's audio/video comparison demand.
As for the apparatus embodiment, since it is basically similar to the method embodiment, the description is relatively simple; for relevant parts, reference may be made to the description of the method embodiment.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the electronic device according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form. The electronic device may include a server (cluster), a mobile terminal, and the like.
An embodiment of the present invention provides a server, including: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the server to perform the video processing method as described in one or more of the embodiments of the present invention.
An embodiment of the present invention provides one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause a server to perform the video processing method as described in one or more of the embodiments of the present invention.
An embodiment of the present invention provides a server. For example, Fig. 5 shows a server that can implement the method according to the present invention, such as a management server, a storage server, an application server, or a cloud control service server cluster. The server conventionally includes a processor 510 and a computer program product or computer-readable medium in the form of a memory 520. The memory 520 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. The memory 520 has a memory space 530 for program code 531 for performing any of the method steps described above. For example, the memory space 530 for program code may include respective program codes 531 used for implementing the various steps of the above methods. These program codes may be read from, or written into, one or more computer program products. These computer program products include program code carriers such as a hard disk, a compact disc (CD), a memory card, or a floppy disk. Such a computer program product is typically a portable or fixed storage unit as described with reference to Fig. 6. The storage unit may have memory segments, memory spaces, and the like arranged similarly to the memory 520 in the server of Fig. 5. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code 531', i.e. code that can be read by a processor such as the processor 510, which, when run by the server, causes the server to perform the various steps of the methods described above.
An embodiment of the present invention further provides a mobile terminal, including: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the mobile terminal to perform the video processing method as described in one or more of the embodiments of the present invention.
An embodiment of the present invention further provides a machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause a mobile terminal to perform the video processing method as described in one or more of the embodiments of the present invention.
An example of the embodiment of the present invention also provides a mobile terminal, as shown in Fig. 7. For convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details not disclosed, reference may be made to the method part of the embodiments of the present invention. The mobile terminal may be any device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) device, a vehicle-mounted computer, and the like.
Fig. 7 is a block diagram of the partial structure related to the mobile terminal provided in the embodiment of the present invention. Referring to Fig. 7, the mobile terminal includes: a radio frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a wireless fidelity (WiFi) module 770, a processor 780, a power supply 790, a camera 7110, and other components. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 7 does not constitute a limitation on the mobile terminal, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
The components of the mobile terminal are described in detail below with reference to Fig. 7:
The RF circuit 710 may be used for receiving and sending signals during information transmission and reception or during a call; in particular, after downlink information from a base station is received, it is passed to the processor 780 for processing, and uplink data is sent to the base station. Generally, the RF circuit 710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 720 may be used to store software programs and modules, and the processor 780 executes the various function applications and data processing of the mobile terminal by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile terminal (such as audio data or a phone book), and the like. In addition, the memory 720 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 730 may be used to receive input digit or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the input unit 730 may include a touch panel 731 and other input devices 732. The touch panel 731, also referred to as a touch screen, can collect touch operations of the user on or near it (such as operations performed by the user on the touch panel 731 or near the touch panel 731 using a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connecting device according to a preset program. Optionally, the touch panel 731 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 780, and can receive and execute commands sent by the processor 780. In addition, the touch panel 731 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 731, the input unit 730 may also include other input devices 732. Specifically, the other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by the user or provided to the user and the various menus of the mobile terminal. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 731 may cover the display panel 741; after the touch panel 731 detects a touch operation on or near it, the operation is transmitted to the processor 780 to determine the type of the touch event, and the processor 780 then provides a corresponding visual output on the display panel 741 according to the type of the touch event. Although in Fig. 7 the touch panel 731 and the display panel 741 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the mobile terminal.
The mobile terminal may also include at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 741 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 741 and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer or tapping); other sensors that may also be configured in the mobile terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described in detail here.
The audio circuit 760, a loudspeaker 761, and a microphone 762 can provide an audio interface between the user and the mobile terminal. The audio circuit 760 can transmit the electrical signal converted from the received audio data to the loudspeaker 761, which converts it into a sound signal for output; on the other hand, the microphone 762 converts the collected sound signal into an electrical signal, which is received by the audio circuit 760 and converted into audio data; after the audio data is output to the processor 780 for processing, it is, for example, sent to another mobile terminal through the RF circuit 710, or the audio data is output to the memory 720 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 770, the mobile terminal can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although Fig. 7 shows the WiFi module 770, it can be understood that it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The processor 780 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 720 and calling the data stored in the memory 720, thereby monitoring the mobile terminal as a whole. Optionally, the processor 780 may include one or more processing units; preferably, the processor 780 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 780.
The mobile terminal further includes a power supply 790 (such as a battery) for supplying power to the various components. Preferably, the power supply may be logically connected to the processor 780 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
The camera 7110 can perform the photographing function.
Although not shown, the mobile terminal may also include a Bluetooth module and the like, which are not described in detail here.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teaching herein. From the description above, the structure required to construct such systems is obvious. In addition, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the above description of a specific language is intended to disclose the best mode of carrying out the present invention.
In the specification provided herein, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to simplify the disclosure and to aid in understanding one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the present invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting the intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all features of a single embodiment disclosed above. Therefore, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from the embodiment. The modules, units, or components in an embodiment may be combined into one module, unit, or component, and may furthermore be divided into a plurality of submodules, subunits, or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
In addition, those skilled in the art will understand that, although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any order; these words may be interpreted as names.
The invention discloses A1, a video processing method, applied to a mobile terminal, the method comprising:
acquiring source audio/video data;
recording target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data;
displaying the target audio data and/or the target video data in comparison with the source audio/video data.
A2, the method according to A1, wherein recording target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data comprises:
separating the source audio data from the source audio/video data, playing the source audio data, and recording target video data through a camera; and/or
separating the source video data from the source audio/video data, playing the source video data in the interface, and recording target audio data through a microphone.
A3, the method according to A1, wherein recording target video data according to the source audio data and/or source video data in the source audio/video data comprises:
playing the source audio data and the source video data in the source audio/video data in the interface, and recording target video data through a camera.
A4, the method according to A3, wherein playing the source audio data and the source video data in the source audio/video data in the interface and recording target video data through a camera comprises:
dividing the interface into a first area and a second area;
playing the source video data in the first area;
recording target video data through the camera, and displaying the recorded target video data in the second area.
A5, the method according to A1, wherein displaying the target audio data and/or the target video data in comparison with the source audio/video data comprises:
synthesizing the source audio data in the source audio/video data and the recorded target video data into target audio/video data;
synthesizing the source audio/video data and the target audio/video data to obtain comparison audio/video data;
displaying the comparison audio/video data.
A6, the method according to A1, wherein displaying the target audio data and/or the target video data in comparison with the source audio/video data comprises:
synthesizing the recorded target audio data and target video data into target audio/video data;
synthesizing the source audio/video data and the target audio/video data to obtain comparison audio/video data;
displaying the comparison audio/video data.
A7, the method according to A5 or A6, wherein synthesizing the source audio/video data and the target audio/video data to obtain comparison audio/video data comprises:
acquiring, from the source video data, each frame of source image data corresponding to at least one split time;
acquiring, from the target video data, each frame of target image data corresponding to at least one split time;
synthesizing each frame of source image data and target image data in chronological order to obtain each frame of image data in the comparison audio/video data.
A8, the method according to A7, further comprising:
arranging, according to a preset rule, the split times corresponding to the source video data and the split times corresponding to the target video data in an interleaved manner.
A9, the method according to A1, wherein displaying the target audio data and/or the target video data in comparison with the source audio/video data comprises:
synthesizing the source video data in the source audio/video data and the recorded target audio data into target audio/video data;
synthesizing the source audio/video data and the target audio/video data to obtain comparison audio/video data;
displaying the comparison audio/video data.
A10, the method according to A6 or A9, wherein synthesizing the source audio/video data and the target audio/video data to obtain comparison audio/video data comprises:
acquiring, from the source audio data, a source audio segment corresponding to at least one split time;
acquiring, from the target audio data, a target audio segment corresponding to at least one split time;
synthesizing each source audio segment and target audio segment in chronological order to obtain the audio data in the comparison audio/video data.
A11, the method according to A1, wherein displaying the target audio data and/or the target video data in comparison with the source audio/video data comprises:
setting a third region and a fourth region in the interface;
displaying the source video data in the third region and displaying the target video data in the fourth region, so as to perform comparison video display.
A12, the method according to A10, further comprising:
arranging, according to a preset rule, the split times corresponding to the source audio data and the split times corresponding to the target audio data in an interleaved manner.
The invention also discloses B13, a video processing apparatus, applied to a mobile terminal, the apparatus comprising:
an acquisition module, configured to acquire source audio/video data;
a recording module, configured to record target audio data and/or target video data according to the source audio data and/or source video data in the source audio/video data;
a comparison display module, configured to display the target audio data and/or the target video data in comparison with the source audio/video data.
B14, the apparatus according to B13, wherein the recording module comprises:
a video recording submodule, configured to separate the source audio data from the source audio/video data, play the source audio data, and record target video data through a camera;
an audio recording submodule, configured to separate the source video data from the source audio/video data, play the source video data in the interface, and record target audio data through a microphone.
B15, the apparatus according to B13, wherein
the recording module is specifically configured to play the source audio data and the source video data in the source audio/video data in the interface, and to record target video data through a camera.
B16, the apparatus according to B15, wherein the recording module comprises:
an interface division submodule, configured to divide the interface into a first area and a second area;
a source video playing submodule, configured to play the source video data in the first area;
a video recording submodule, configured to record target video data through the camera and display the recorded target video data in the second area.
B17, the apparatus according to B13, wherein the comparison display module comprises:
a target synthesis submodule, configured to synthesize the source audio data in the source audio/video data and the recorded target video data into target audio/video data;
a comparison synthesis submodule, configured to synthesize the source audio/video data and the target audio/video data to obtain comparison audio/video data;
a comparison display submodule, configured to display the comparison audio/video data.
B18, the apparatus according to B13, wherein the comparison display module comprises:
a target synthesis submodule, configured to synthesize the recorded target audio data and target video data into target audio/video data;
a comparison synthesis submodule, configured to synthesize the source audio/video data and the target audio/video data to obtain comparison audio/video data;
a comparison display submodule, configured to display the comparison audio/video data.
B19, the apparatus according to B17 or B18, wherein the comparison synthesis submodule comprises:
a source video acquiring unit, configured to acquire, from the source video data, each frame of source image data corresponding to at least one split time;
a target video acquiring unit, configured to acquire, from the target video data, each frame of target image data corresponding to at least one split time;
a video synthesis unit, configured to synthesize each frame of source image data and target image data in chronological order to obtain each frame of image data in the comparison audio/video data.
B20, the apparatus according to B19, further comprising:
a setting module, configured to arrange, according to a preset rule, the split times corresponding to the source video data and the split times corresponding to the target video data in an interleaved manner.
B21, the apparatus according to B13, wherein the comparison display module comprises:
a target synthesis submodule, configured to synthesize the source video data in the source audio/video data and the recorded target audio data into target audio/video data;
a comparison synthesis submodule, configured to synthesize the source audio/video data and the target audio/video data to obtain comparison audio/video data;
a comparison display submodule, configured to display the comparison audio/video data.
B22, the apparatus according to B18 or B21, wherein the comparison synthesis submodule comprises:
a source audio acquiring unit, configured to acquire, from the source audio data, a source audio segment corresponding to at least one split time;
a target audio acquiring unit, configured to acquire, from the target audio data, a target audio segment corresponding to at least one split time;
an audio synthesis unit, configured to synthesize each source audio segment and target audio segment in chronological order to obtain the audio data in the comparison audio/video data.
B23, the apparatus according to B13, wherein the comparison display module comprises:
a region setting submodule, configured to set a third region and a fourth region in the interface;
a comparison display submodule, configured to display each frame of source image data in the source video data in the third region and display the target video data in the fourth region, so as to perform comparison video display.
B24, the apparatus according to B22, further comprising:
a setting module, configured to arrange, according to a preset rule, the split times corresponding to the source audio data and the split times corresponding to the target audio data in an interleaved manner.
The invention also discloses C25, a mobile terminal, comprising: one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the mobile terminal to perform the video processing method according to one or more of A1-A12.
The invention also discloses D26, one or more machine-readable media having instructions stored thereon which, when executed by one or more processors, cause a mobile terminal to perform the video processing method according to one or more of A1-A12.

Priority Applications (1)

Application Number: CN201711009668.2A, Priority Date: 2017-10-25, Filing Date: 2017-10-25, Title: Video processing method and device, mobile terminal and storage medium

Publications (2)

Publication Number: CN108055490A, Publication Date: 2018-05-18
Publication Number: CN108055490B (en), Publication Date: 2021-04-13

Family

ID=62119658

Legal Events

Code: PB01, Title: Publication
Code: SE01, Title: Entry into force of request for substantive examination
Code: TA01, Title: Transfer of patent application right
Effective date of registration: 2018-09-17
Address after: 100015, 15th floor, Building 3, No. 10 Jiuxianqiao Road, Chaoyang District, Beijing, 17th floor 1701-48A
Applicant after: Beijing environment and Wind Technology Co., Ltd.
Address before: 100012, No. 28 building, No. 27 building, Lai Chun Yuan, Chaoyang District, Beijing, No. 28, 2, 201, No. 112, No. 28
Applicant before: Beijing Chuan Shang Technology Co., Ltd.
Code: GR01, Title: Patent grant
