CN106911900A - Video dubbing method and device - Google Patents

Video dubbing method and device

Info

Publication number
CN106911900A
CN106911900A
Authority
CN
China
Prior art keywords
video
tag
request
video segment
dubbed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710220247.8A
Other languages
Chinese (zh)
Inventor
黄思军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201710220247.8A (CN106911900A)
Publication of CN106911900A
Priority to PCT/CN2018/080657 (WO2018184488A1)
Legal status: Pending (current)

Abstract

The invention discloses a video dubbing method and device, belonging to the field of video editing technology. The method includes: receiving a dubbing request during video playback; determining, according to the dubbing request, a start time and an end time of a video segment to be dubbed; playing the video segment between the start time and the end time; recording a dubbing file corresponding to the video segment while the video segment is played; clipping the video segment from the video; and synthesizing a target video segment from the video segment and the dubbing file. This solves the prior-art problem that a terminal can only dub an entire existing video, so that when the existing video contains segments the user does not need to dub, the dubbed video contains redundant information. The effect achieved is that the terminal dubs only the video segment that is needed, reducing redundant information.

Description

Video dubbing method and device
Technical field
Embodiments of the present invention relate to the field of video editing technology, and in particular to a video dubbing method and device.
Background
A user may want to dub a video while watching it.
An existing video dubbing method includes: a terminal plays a video of fixed length; while the video is playing, a sound-recording function is turned on and a dubbing file is recorded; afterwards, the video and the dubbing file are synthesized to obtain the dubbed video.
Because the video may contain segments that the user does not need to dub, the dubbed video obtained by the above dubbing method may contain redundant information.
Summary of the invention
In order to solve the problems in the prior art, embodiments of the present invention provide a video dubbing method and device. The technical solutions are as follows:
According to a first aspect of the embodiments of the present invention, a video dubbing method is provided, the method including:
receiving a dubbing request during video playback;
determining, according to the dubbing request, a start time and an end time of a video segment to be dubbed;
playing the video segment between the start time and the end time;
recording a dubbing file corresponding to the video segment while the video segment is played;
clipping the video segment from the video;
synthesizing a target video segment from the video segment and the dubbing file.
According to a second aspect of the embodiments of the present invention, a video dubbing device is provided, the device including:
a first receiving module, configured to receive a dubbing request during video playback;
a determining module, configured to determine, according to the dubbing request, a start time and an end time of a video segment to be dubbed;
a playing module, configured to play the video segment between the start time and the end time determined by the determining module;
a recording module, configured to record a dubbing file corresponding to the video segment while the playing module plays the video segment;
a clipping module, configured to clip the video segment from the video;
a synthesis module, configured to synthesize a target video segment from the video segment and the dubbing file.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:
After a dubbing request is received, the start time and the end time of the video segment to be dubbed are determined according to the dubbing request; while the video segment between the start time and the end time is played, the dubbing file corresponding to the video segment is recorded; the video segment is clipped from the video; and the target video segment is then generated from the dubbing file and the clipped video segment. This solves the prior-art problem that a terminal can only dub an entire existing video, so that when the existing video contains segments the user does not need to dub, the dubbed video contains redundant information. The effect achieved is that the terminal dubs only the video segment that is needed, reducing redundant information.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic diagram of the implementation environment involved in an embodiment of the present invention;
Fig. 2 is a flowchart of a video dubbing method provided by one embodiment of the present invention;
Fig. 3 is a schematic diagram of a user triggering the dubbing option, provided by one embodiment of the present invention;
Fig. 4 is a schematic diagram of a user setting the start tag, provided by one embodiment of the present invention;
Fig. 5 is a schematic diagram of a terminal previewing a video frame, provided by one embodiment of the present invention;
Fig. 6 is a schematic diagram of a user stopping dubbing, provided by one embodiment of the present invention;
Fig. 7 is a schematic diagram of a user starting dubbing and canceling dubbing, provided by one embodiment of the present invention;
Fig. 8 is a flowchart of a terminal downloading a video segment from a background server, provided by one embodiment of the present invention;
Fig. 9 is a schematic diagram of a terminal previewing a target video segment, provided by one embodiment of the present invention;
Fig. 10 is a flowchart of sharing a target video segment, provided by one embodiment of the present invention;
Fig. 11 is a schematic diagram of sharing a target video segment, provided by one embodiment of the present invention;
Fig. 12 is a schematic diagram of a video dubbing device provided by one embodiment of the present invention;
Fig. 13 is a schematic diagram of a terminal provided by one embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the implementations of the present invention are described in further detail below with reference to the accompanying drawings.
The video dubbing method provided by each of the following embodiments is applied in a terminal, and the terminal has an audio collection capability. For example, the terminal may be a smartphone, a tablet computer, an e-reader, a desktop computer connected to a microphone, or the like; this is not limited here. In practical implementation, a video player for playing video is installed in the terminal. The video player may be a player built into the terminal, or a player that the user actively downloads and installs; this is not limited here.
The video dubbed in each of the following embodiments may be a video saved locally in the terminal, or a video played online. The video saved locally in the terminal may be a video prerecorded by the terminal, or a video the terminal has downloaded from the background server in advance and saved; this is not limited here.
Moreover, when the video is a video played online, the video dubbing method can be applied in the implementation environment shown in Fig. 1. The implementation environment includes a terminal 110 (on which a video player 111 is installed) and a background server 120. The terminal 110 is the terminal described above, and the terminal 110 may be connected to the background server 120 through a wired or wireless network. The background server 120 is the background server corresponding to the video player 111; the background server 120 may be one server, or a server cluster composed of multiple servers; this is not limited here.
Referring to Fig. 2, which shows a flowchart of a video dubbing method provided by one embodiment of the present invention, as shown in Fig. 2, the video dubbing method may include:
Step 201: during video playback, receive a dubbing request.
When the user plays a video with the video player in the terminal, if the user wants to dub a certain video segment in the video, the user can apply a dubbing request on the terminal; accordingly, the terminal receives the dubbing request.
For example, when the user wants to dub, referring to part (1) of Fig. 3, the user can click any position in the video playback interface; after receiving the click signal, the terminal displays the dubbing option 31 shown in part (2) of Fig. 3. The user can then click the dubbing option 31, and the click signal received by the terminal is the dubbing request. In practical implementation, as shown in part (2) of Fig. 3, the terminal may also display other options after receiving the click signal, such as "episodes", "bullet comments", and "screenshot", which are not described further here.
Step 202: determine, according to the dubbing request, a start time and an end time of the video segment to be dubbed.
After the terminal receives the dubbing request, the terminal can determine the start time and the end time of the dubbing.
Optionally, this step may include:
First, after the dubbing request is received, a start tag is displayed at a first predetermined position in the playback progress bar of the video, and an end tag is displayed at a second predetermined position in the playback progress bar.
After receiving the dubbing request, the terminal can display the start tag and the end tag in the playback progress bar. The start tag indicates the start position of the clipped video segment in the video, and the end tag indicates the end position of the clipped video segment in the video.
Optionally, the first predetermined position may be a default position in the playback progress bar, for example the start position of the video, or the playback position of the video when the dubbing request is received, or the middle position of the video; this is not limited here. The second predetermined position may be a position separated from the start tag by a predetermined time interval. The predetermined time interval may be an interval set by default in the video player, or an interval customized by the user in advance; this is not limited here. In practical implementation, the predetermined time interval may be 30 s. It should be noted that if the time interval between the position of the start tag and the end position of the video is less than the predetermined time interval, the end tag may be placed at the end position of the video; this is not limited here. Of course, the above takes the first predetermined position as a default position as an example; in practical implementation, the second predetermined position may instead be the default position, and the first predetermined position is the position that precedes the end tag by the predetermined time interval; this is not limited here.
Second, a first sliding signal for sliding the start tag is received, and the start tag is slid.
The first sliding signal may be a sliding signal to the left or to the right, and the distance the start tag slides is the sliding distance of the first sliding signal, which is not described further here.
After the terminal displays the start tag and the end tag, if the position of the start tag is not the clipping position desired by the user, then referring to part (1) of Fig. 4, the user can apply a first sliding signal for sliding the start tag 41; accordingly, the terminal receives the first sliding signal. After receiving the first sliding signal, the terminal slides the start tag 41 accordingly. For example, referring to part (2) of Fig. 4, after receiving the first sliding signal, the terminal may slide the start tag 41 from position A to position B.
Third, a second sliding signal for sliding the end tag is received.
Similar to the second step above, the user can also apply a second sliding signal for sliding the end tag; accordingly, the terminal receives the second sliding signal.
It should be noted that the second step and the third step are optional. If the positions of the start tag and the end tag initially displayed by the terminal are already the positions the user expects to clip, the second and third steps need not be performed; this is not limited here.
Fourth, the moment corresponding to the start tag is obtained as the start time.
The terminal can determine the moment corresponding to the start tag as the start time. For example, if the position of the start tag in the film "undiscovered talents" is 23'30", the start time is 23'30". As another example, if the position of the start tag after sliding is 28'37", the start time is 28'37".
Fifth, the moment corresponding to the end tag is obtained as the end time.
Similarly, the terminal can determine the moment corresponding to the end tag as the end time.
It should be added that, in this embodiment, if the terminal performs the second step above, the terminal can preview the video frame at the start tag after the start tag is slid. In practical implementation, the terminal can preview the video frame in a window anchored to the start tag, or the terminal can display the video frame at a default size at the center of the video playback interface. For example, parts (1) and (2) of Fig. 5 show two possible preview modes. Of course, in practical implementation the terminal can also preview the video frame in other ways; this is not limited here. Similarly, if the terminal performs the third step above, the video frame at the end tag is previewed after the end tag is slid.
By previewing the video frame at the corresponding position after the start tag or the end tag is slid, the user can intuitively know the start position and the end position of the clipped video segment, and thus obtain the video segment the user needs.
The above takes the terminal determining the start time and the end time in the above manner as an example. Optionally, as another possible implementation, the step of determining the start time and the end time may include:
First, determine the start time.
This step includes: using a predetermined moment in the video as the start time. The predetermined moment may be the start moment of the video, the middle moment, or the moment at which the dubbing request is received, among others. The moment at which the dubbing request is received is the moment corresponding to the playback progress of the video when the dubbing request is received. For example, if the video has played to 34'48" when the dubbing request is received, the start time is 34'48".
Second, determine the end time.
This step includes the following possible implementations.
In a first possible implementation, the moment obtained by delaying a preset duration from the start time is determined as the end time. The preset duration may be a duration set by the system in the video player, or a duration customized by the user in advance; this is not limited here.
In a second possible implementation, a stop-dubbing request is received, and the moment at which the stop-dubbing request is received is used as the end time. The moment at which the stop-dubbing request is received is the playback progress of the video when the stop-dubbing request is received.
After the dubbing request is received, the dubbing option previously displayed by the terminal can be updated to a stop-dubbing option. For example, referring to Fig. 6, the terminal can display a stop-dubbing option 61. The user can then apply a click signal on the stop-dubbing option 61, and the click signal received by the terminal is the stop-dubbing request.
Of course, in practical implementation the terminal can also determine the start time and the end time in other ways, which is not limited in this embodiment.
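To make the tag-placement logic above concrete, here is a minimal sketch in Kotlin. It is an illustration only: the names `placeDefaultTags` and `VideoTags`, and the choice of the request-time playback position as the first predetermined position, are assumptions, and the 30 s interval simply follows the example given above.

```kotlin
// Minimal sketch of the default start/end tag placement described in step 202.
// All names are illustrative; the 30 s default interval follows the example above.
data class VideoTags(val startMs: Long, val endMs: Long)

const val DEFAULT_INTERVAL_MS = 30_000L  // predetermined time interval (e.g. 30 s)

fun placeDefaultTags(
    requestPositionMs: Long,   // playback position when the dubbing request arrives
    videoDurationMs: Long
): VideoTags {
    // First predetermined position: here, the playback position at request time.
    val start = requestPositionMs.coerceIn(0, videoDurationMs)
    // Second predetermined position: start tag plus the predetermined interval,
    // clamped to the end of the video if the remaining time is shorter.
    val end = (start + DEFAULT_INTERVAL_MS).coerceAtMost(videoDurationMs)
    return VideoTags(startMs = start, endMs = end)
}

fun main() {
    // Example: request arrives at 23'30" in a 25-minute video.
    val tags = placeDefaultTags(requestPositionMs = 23 * 60_000L + 30_000L,
                                videoDurationMs = 25 * 60_000L)
    println(tags)  // VideoTags(startMs=1410000, endMs=1440000)
}
```

Either tag could then be adjusted by the sliding signals described in the second and third steps before the start time and the end time are read off.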
Step 203: play the video segment between the start time and the end time in the video.
In practical implementation, the terminal can play the video segment after the start time is determined; this is not limited here.
Optionally, after the terminal receives the dubbing request, the terminal can display a start option and a cancel option in the playback interface. The start option is used to trigger the start of dubbing, and the cancel option is used to trigger the cancellation of dubbing. For example, referring to Fig. 7, the terminal can display a start option 71 and a cancel option 72. When the user wants to start dubbing, the user can apply a selection signal on the start option 71; accordingly, the terminal receives the selection signal and plays the video segment after receiving it. When the user wants to cancel dubbing, the user can apply a selection signal on the cancel option 72; accordingly, after receiving that selection signal, the terminal jumps back to the video playback interface.
Step 204: while the video segment is played, record the dubbing file corresponding to the video segment.
While the video segment is played, the terminal turns on the microphone and collects the dubbing file through the microphone. Optionally, the terminal can start a thread for voice recording, write the voice collected by the microphone into a cache directory through the thread, and then save it as the dubbing file. The format of the recorded dubbing file may be a default format provided by the system in the terminal; this is not limited here.
It should be noted that, in practical implementation, the original audio in the video is usually information the user does not want while dubbing. Therefore, when playing the video segment, in order to avoid interference from the original audio in the video, the terminal can play only the image information in the video segment and not play the audio information; this is not limited here.
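As an illustration of the recording step on an Android-style terminal, the following Kotlin sketch wraps the platform `MediaRecorder`; the output path under the cache directory, the MPEG-4/AAC format choices, and the `DubRecorder` wrapper itself are assumptions, since the patent only says a system default format may be used.

```kotlin
import android.media.MediaRecorder
import java.io.File

// Illustrative wrapper around the "record a dubbing file while the segment plays" step.
// Format and path choices here are assumptions; the patent only refers to a system default.
class DubRecorder(private val cacheDir: File) {
    private var recorder: MediaRecorder? = null
    var outputFile: File? = null
        private set

    fun start() {
        val file = File(cacheDir, "dub_${System.currentTimeMillis()}.m4a")
        recorder = MediaRecorder().apply {
            setAudioSource(MediaRecorder.AudioSource.MIC)       // collect voice from the microphone
            setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)  // container format (assumed)
            setAudioEncoder(MediaRecorder.AudioEncoder.AAC)     // audio codec (assumed)
            setOutputFile(file.absolutePath)                    // write into the cache directory
            prepare()
            start()
        }
        outputFile = file
    }

    fun stop() {
        recorder?.apply { stop(); release() }
        recorder = null
    }
}
```

In such a setup, `start()` would be called when playback of the segment begins and `stop()` when the end time is reached, yielding the dubbing file used in step 206.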
Step 205: clip the video segment from the video.
In practical implementation, if the video is a video saved locally in the terminal, the terminal can directly clip the video segment between the start time and the end time from the locally saved video.
If the video is a video played online by the terminal, the terminal can continuously cache the content of the video segment while the video segment is played and finally obtain the clipped video segment; alternatively, after the start time and the end time are determined, the terminal can send a download request to the background server and receive the video segment returned by the background server. The download request includes the start time, the end time, and the video identifier; or, the download request may include the start time, a target duration, and the video identifier, where the target duration is the time difference between the end time and the start time. Optionally, after receiving the download request, the background server can generate the video segment according to the start time and the end time, or according to the start time and the target duration, and feed a download address back to the terminal. After receiving the download address, the terminal opens a download thread and downloads the video segment from the download address through the download thread. For example, Fig. 8 shows the complete download flow.
The terminal can request a block of memory in advance according to the size of the video segment, and after the video segment is clipped, read the video segment into the memory.
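The two request shapes described above (start time plus end time, or start time plus target duration, each with the video identifier) could be modeled as follows. This is a sketch only; the field names, the query-string layout, and the endpoint handling are assumptions not specified by the patent.

```kotlin
// Illustrative model of the download request sent to the background server in step 205.
// Field names and the query-string format are assumptions; the patent only specifies
// which values the request carries.
sealed class DownloadRequest {
    abstract val videoId: String

    // Variant 1: start time + end time + video identifier.
    data class ByRange(
        override val videoId: String,
        val startMs: Long,
        val endMs: Long
    ) : DownloadRequest()

    // Variant 2: start time + target duration + video identifier,
    // where duration = end time - start time.
    data class ByDuration(
        override val videoId: String,
        val startMs: Long,
        val durationMs: Long
    ) : DownloadRequest()
}

fun DownloadRequest.toQueryString(): String = when (this) {
    is DownloadRequest.ByRange ->
        "videoId=$videoId&start=$startMs&end=$endMs"
    is DownloadRequest.ByDuration ->
        "videoId=$videoId&start=$startMs&duration=$durationMs"
}

fun main() {
    val req = DownloadRequest.ByRange(videoId = "film-001", startMs = 1_410_000, endMs = 1_440_000)
    println(req.toQueryString())  // videoId=film-001&start=1410000&end=1440000
}
```

The server's reply would carry the download address, which the terminal's download thread then fetches as described above.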
Step 206: synthesize the target video segment from the video segment and the dubbing file.
This step may include:
First, extract the image information from the video segment.
Optionally, the terminal can read the content in the memory through a streaming-media interface. Because the video segment is content clipped from the original video, it may contain both audio and images, and the audio and the images are two independent media streams. Therefore, the terminal can separate the audio and the images in the video segment and store them respectively in an audio storage area and an image storage area in the memory. In this way the terminal obtains the image information in the video segment.
Second, synthesize the image information and the voice information in the dubbing file to obtain the target video segment.
The terminal can write the obtained image information and the voice information in the recorded dubbing file into one video file, thereby obtaining the target video segment. Optionally, the terminal can compress the image information and the voice information in the dubbing file into a block of storage through the streaming-media interface of the system, and then write the content of that storage into a video file through the streaming-media interface; the written video file is the target video segment.
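Purely as an illustration of the separate-then-recombine idea in this step, the sketch below remuxes the image (video) track of the clipped segment with the audio track of the recorded dubbing file using Android's `MediaExtractor` and `MediaMuxer`. The file paths and the simplified copy loop are assumptions; production code would also align timestamps, handle the original audio track explicitly, and add error handling.

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaFormat
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Illustrative sketch of step 206: mux the image (video) track of the clipped segment
// together with the audio track of the recorded dubbing file into one target file.
fun synthesize(videoSegmentPath: String, dubbingPath: String, outputPath: String) {
    fun openTrack(path: String, mimePrefix: String): Pair<MediaExtractor, MediaFormat> {
        val extractor = MediaExtractor().apply { setDataSource(path) }
        val index = (0 until extractor.trackCount).first {
            extractor.getTrackFormat(it).getString(MediaFormat.KEY_MIME)!!.startsWith(mimePrefix)
        }
        extractor.selectTrack(index)
        return extractor to extractor.getTrackFormat(index)
    }

    val (videoExtractor, videoFormat) = openTrack(videoSegmentPath, "video/")
    val (audioExtractor, audioFormat) = openTrack(dubbingPath, "audio/")

    val muxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    val videoTrack = muxer.addTrack(videoFormat)
    val audioTrack = muxer.addTrack(audioFormat)
    muxer.start()

    val buffer = ByteBuffer.allocate(1 shl 20)
    val info = MediaCodec.BufferInfo()

    fun copySamples(extractor: MediaExtractor, track: Int) {
        while (true) {
            info.size = extractor.readSampleData(buffer, 0)
            if (info.size < 0) break                      // no more samples in this track
            info.offset = 0
            info.presentationTimeUs = extractor.sampleTime
            info.flags = extractor.sampleFlags
            muxer.writeSampleData(track, buffer, info)
            extractor.advance()
        }
        extractor.release()
    }

    copySamples(videoExtractor, videoTrack)   // image information from the video segment
    copySamples(audioExtractor, audioTrack)   // voice information from the dubbing file

    muxer.stop()
    muxer.release()
}
```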
After the target video segment is obtained, the terminal can automatically preview the target video segment. Optionally, when playback reaches the end time, the terminal jumps to a default interface and then automatically previews the target video segment in a preview window of that interface. For example, referring to Fig. 9, the terminal can automatically preview the target video segment in window 91. It should be noted that, because the terminal still needs a certain amount of time to synthesize the target video segment when playback reaches the end time, the terminal can display a "loading" prompt in the preview window during that period; this is not limited here. Alternatively, after obtaining the target video segment, the terminal can jump to an interface containing a preview option, and after the user clicks the preview option, the terminal starts previewing the target video segment; how this is implemented is not limited in this embodiment.
In addition, after previewing the target video segment, if the user is satisfied, the user can trigger saving of the target video segment; if the user is not satisfied, the user can trigger cancellation of this dubbing. This embodiment does not limit this either.
It should be added that, after the target video segment is obtained, the terminal can share the target video segment. Referring to Fig. 10, the video dubbing method may further include the following steps:
Step 1001: receive a sharing request for sharing the target video segment, the sharing request including a sharing mode.
The sharing mode may be sharing to a target friend through a target communication method, or sharing to a target platform.
For example, referring to Fig. 11, when the user wants to share the target video segment to a microblog, the user can apply a click signal on the microblog option 111; accordingly, the terminal receives the click signal, and the click signal is the sharing request.
Step 1002: after receiving the sharing request, share the target video segment according to the sharing mode.
After receiving the sharing request, the terminal can share the target video segment according to the sharing mode in the sharing request. For example, with reference to Fig. 11, after the terminal receives the click signal on the microblog option 111, the terminal can call the microblog interface and share the target video segment to the microblog through the called interface.
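A minimal sketch of dispatching on the sharing mode carried in the sharing request might look like this; the `ShareMode` variants and the stubbed SDK calls are assumptions, since the patent does not name a specific sharing interface beyond the microblog example.

```kotlin
// Illustrative dispatch for step 1002. The modes and the stubbed calls are assumptions;
// the patent only states that the terminal shares according to the mode in the request.
sealed class ShareMode {
    data class ToFriend(val friendId: String) : ShareMode()   // via a target communication method
    data class ToPlatform(val platform: String) : ShareMode() // e.g. a microblog platform
}

fun share(targetVideoPath: String, mode: ShareMode) {
    when (mode) {
        is ShareMode.ToFriend ->
            println("Sending $targetVideoPath to friend ${mode.friendId}")  // placeholder for an IM SDK call
        is ShareMode.ToPlatform ->
            println("Posting $targetVideoPath to ${mode.platform}")         // placeholder for a platform SDK call
    }
}

fun main() {
    share("/cache/target_segment.mp4", ShareMode.ToPlatform("microblog"))
}
```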
In summary, in the video dubbing method provided by this embodiment, after a dubbing request is received, the start time and the end time of the video segment to be dubbed are determined according to the dubbing request; while the video segment between the start time and the end time in the video is played, the dubbing file corresponding to the video segment is recorded; the video segment is clipped; and the target video segment is then generated from the dubbing file and the clipped video segment. This solves the prior-art problem that a terminal can only dub an entire existing video, so that when the existing video contains segments the user does not need to dub, the dubbed video contains redundant information. The effect achieved is that the terminal dubs only the video segment that is needed, reducing redundant information. Furthermore, because the user is free to mix the desired voice into a video segment of any length in the video, the entertainment value is increased and the user experience is improved.
Referring to Fig. 12, which shows a structural diagram of a video dubbing device provided by one embodiment of the present invention, as shown in Fig. 12, the video dubbing device may include: a first receiving module 1210, a determining module 1220, a playing module 1230, a recording module 1240, a clipping module 1250, and a synthesis module 1260.
The first receiving module 1210 is configured to receive a dubbing request during video playback;
the determining module 1220 is configured to determine, according to the dubbing request, the start time and the end time of the video segment to be dubbed;
the playing module 1230 is configured to play the video segment between the start time and the end time determined by the determining module 1220;
the recording module 1240 is configured to record the dubbing file corresponding to the video segment while the playing module plays the video segment;
the clipping module 1250 is configured to clip the video segment from the video;
the synthesis module 1260 is configured to synthesize the target video segment from the video segment and the dubbing file.
In summary, in the video dubbing device provided by this embodiment, after a dubbing request is received, the start time and the end time of the video segment to be dubbed are determined according to the dubbing request; while the video segment between the start time and the end time in the video is played, the dubbing file corresponding to the video segment is recorded; the video segment is clipped; and the target video segment is then generated from the dubbing file and the clipped video segment. This solves the prior-art problem that a terminal can only dub an entire existing video, so that when the existing video contains segments the user does not need to dub, the dubbed video contains redundant information. The effect achieved is that the terminal dubs only the video segment that is needed, reducing redundant information. Furthermore, because the user is free to mix the desired voice into a video segment of any length in the video, the entertainment value is increased and the user experience is improved.
Based on the video dubbing device provided by the above embodiment, optionally, the determining module 1220 includes:
a display unit, configured to display, after the dubbing request is received, a start tag at a first predetermined position in the playback progress bar of the video, and an end tag at a second predetermined position in the playback progress bar;
an acquiring unit, configured to obtain the moment corresponding to the start tag as the start time, and obtain the moment corresponding to the end tag as the end time.
Optionally, the determining module 1220 further includes:
a processing unit, configured to receive a first sliding signal for sliding the start tag and slide the start tag; and/or receive a second sliding signal for sliding the end tag and slide the end tag.
Optionally, the device further includes:
a previewing module, configured to preview, when the first sliding signal is received and after the start tag is slid, the video frame at the position corresponding to the start tag; or preview, when the second sliding signal is received and after the end tag is slid, the video frame at the position corresponding to the end tag.
Optionally, the playing module 1230 includes:
a receiving unit, configured to receive a start-dubbing request;
a playing unit, configured to play the video segment after the receiving unit receives the start-dubbing request.
Optionally, the synthesis module 1260 includes:
an extraction unit, configured to extract the image information from the video segment;
a synthesis unit, configured to synthesize the image information and the voice information in the dubbing file to obtain the target video segment.
Optionally, the device further includes:
a second receiving module, configured to receive a sharing request for sharing the target video segment, the sharing request including a sharing mode;
a sharing module, configured to share the target video segment according to the sharing mode after the second receiving module receives the sharing request.
An embodiment of the present invention also provides a computer-readable storage medium. The computer-readable storage medium may be the computer-readable storage medium included in the memory in the above embodiment, or it may exist independently without being assembled into the terminal. The computer-readable storage medium stores one or more programs, and the one or more programs are used by one or more processors to perform the above video dubbing method.
Fig. 13 shows a block diagram of a terminal 1300 provided by one embodiment of the present invention. The terminal may include a radio frequency (RF) circuit 1301, a memory 1302 including one or more computer-readable storage media, an input unit 1303, a display unit 1304, a sensor 1305, an audio circuit 1306, a Wireless Fidelity (WiFi) module 1307, a processor 1308 including one or more processing cores, a power supply 1309, and other components. A person skilled in the art will understand that the terminal structure shown in Fig. 13 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, combine some components, or use a different arrangement of components.
The RF circuit 1301 can be used to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it hands the information over to the one or more processors 1308 for processing, and it also sends uplink data to the base station. Generally, the RF circuit 1301 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1301 can also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and so on.
The memory 1302 can be used to store software programs and modules. The processor 1308 executes various functional applications and data processing by running the software programs and modules stored in the memory 1302. The memory 1302 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, and the like), and so on; the data storage area may store data created according to the use of the terminal (such as audio data, a phone book, and the like). In addition, the memory 1302 may include a high-speed random access memory and may also include a nonvolatile memory, for example at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Correspondingly, the memory 1302 may also include a memory controller to provide the processor 1308 and the input unit 1303 with access to the memory 1302.
The input unit 1303 can be used to receive input numbers or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 1303 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or a touch pad, can collect the user's touch operations on or near it (such as operations performed on or near the touch-sensitive surface with a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1308, and receives and executes commands sent by the processor 1308. The touch-sensitive surface can be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch-sensitive surface, the input unit 1303 may also include other input devices, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control button, a switch button, and the like), a trackball, a mouse, and a joystick.
The display unit 1304 can be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 1304 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel; when the touch-sensitive surface detects a touch operation on or near it, the operation is transmitted to the processor 1308 to determine the type of touch event, and the processor 1308 then provides a corresponding visual output on the display panel according to the type of touch event. Although in Fig. 13 the touch-sensitive surface and the display panel are shown as two independent components implementing the input and output functions, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement the input and output functions.
The terminal may also include at least one sensor 1305, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor can turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, magnetometer posture calibration), vibration-recognition related functions (such as a pedometer or tapping), and so on. Other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, may also be configured on the terminal and are not described further here.
The audio circuit 1306, a loudspeaker, and a microphone can provide an audio interface between the user and the terminal. The audio circuit 1306 can transmit the electrical signal converted from the received audio data to the loudspeaker, which converts it into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 1306 and converted into audio data. After the audio data is output to the processor 1308 for processing, it is sent through the RF circuit 1301 to, for example, another terminal, or the audio data is output to the memory 1302 for further processing. The audio circuit 1306 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1307, the terminal can help the user send and receive emails, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 13 shows the WiFi module 1307, it can be understood that it is not a necessary component of the terminal and can be omitted as needed without changing the essence of the invention.
The processor 1308 is the control center of the terminal. It connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 1302 and calling the data stored in the memory 1302, thereby monitoring the mobile phone as a whole. Optionally, the processor 1308 may include one or more processing cores; preferably, the processor 1308 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1308.
The terminal also includes a power supply 1309 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically connected to the processor 1308 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system. The power supply 1309 may also include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other arbitrary components.
Although not shown, the terminal may also include a camera, a Bluetooth module, and the like, which are not described further here. Specifically, in this embodiment, the processor 1308 in the terminal can run one or more program instructions stored in the memory 1302, thereby implementing the video dubbing method provided in each of the above method embodiments.
It should be understood that, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
A person of ordinary skill in the art will understand that all or part of the steps for implementing the above embodiments may be completed by hardware, or may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (14)

CN201710220247.8A | 2017-04-06 | 2017-04-06 | Video dubbing method and device | Pending | CN106911900A (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201710220247.8A | 2017-04-06 | 2017-04-06 | Video dubbing method and device (CN106911900A)
PCT/CN2018/080657 | 2017-04-06 | 2018-03-27 | Video dubbing method and device (WO2018184488A1)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201710220247.8A | 2017-04-06 | 2017-04-06 | Video dubbing method and device (CN106911900A)

Publications (1)

Publication Number | Publication Date
CN106911900A | 2017-06-30

Family

ID=59193993

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201710220247.8A | Video dubbing method and device | 2017-04-06 | 2017-04-06 | Pending | CN106911900A (en)

Country Status (2)

Country | Link
CN (1): CN106911900A (en)
WO (1): WO2018184488A1 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107809666A (en)*2017-10-262018-03-16费非Voice data merging method, device storage medium and processor
CN107872620A (en)*2017-11-222018-04-03北京小米移动软件有限公司 Video recording method and device
CN108024073A (en)*2017-11-302018-05-11广州市百果园信息技术有限公司Video editing method, device and intelligent mobile terminal
CN108038185A (en)*2017-12-082018-05-15广州市百果园信息技术有限公司Video dynamic edit methods, device and intelligent mobile terminal
WO2018130173A1 (en)*2017-01-162018-07-19腾讯科技(深圳)有限公司Dubbing method, terminal device, server and storage medium
CN108337558A (en)*2017-12-262018-07-27努比亚技术有限公司Audio and video clipping method and terminal
CN108600825A (en)*2018-07-122018-09-28北京微播视界科技有限公司Select method, apparatus, terminal device and the medium of background music shooting video
WO2018184488A1 (en)*2017-04-062018-10-11腾讯科技(深圳)有限公司Video dubbing method and device
CN109361954A (en)*2018-11-022019-02-19腾讯科技(深圳)有限公司Method for recording, device, storage medium and the electronic device of video resource
CN110366032A (en)*2019-08-092019-10-22腾讯科技(深圳)有限公司Video data handling procedure, device and video broadcasting method, device
CN110868633A (en)*2019-11-272020-03-06维沃移动通信有限公司 A video processing method and electronic device
CN111212321A (en)*2020-01-102020-05-29上海摩象网络科技有限公司Video processing method, device, equipment and computer storage medium
CN111741231A (en)*2020-07-232020-10-02北京字节跳动网络技术有限公司Video dubbing method, device, equipment and storage medium
CN112565905A (en)*2020-10-242021-03-26北京博睿维讯科技有限公司Image locking operation method, system, intelligent terminal and storage medium device
CN112954390A (en)*2021-01-262021-06-11北京有竹居网络技术有限公司Video processing method, device, storage medium and equipment
CN113630630A (en)*2021-08-092021-11-09咪咕数字传媒有限公司 A method, device and equipment for processing video commentary dubbing information
CN114338579A (en)*2021-12-292022-04-12南京大众书网图书文化有限公司Method, apparatus, medium and program product for dubbing
CN114666516A (en)*2022-02-172022-06-24海信视像科技股份有限公司 Display device and streaming media file synthesis method
WO2022179530A1 (en)*2021-02-242022-09-01花瓣云科技有限公司Video dubbing method, related device, and computer readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109413342B (en) | 2018-12-21 | 2021-01-08 | 广州酷狗计算机科技有限公司 | Audio and video processing method and device, terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101217638A (en) * | 2007-12-28 | 2008-07-09 | 深圳市迅雷网络技术有限公司 | Method, system and device for segmented downloading of video files
CN104333802A (en) * | 2013-12-13 | 2015-02-04 | 乐视网信息技术(北京)股份有限公司 | Video playing method and video player
CN105847966A (en) * | 2016-03-29 | 2016-08-10 | 乐视控股(北京)有限公司 | Terminal and video capturing and sharing method
CN105959773A (en) * | 2016-04-29 | 2016-09-21 | 魔方天空科技(北京)有限公司 | Multimedia file processing method and device
CN106293347A (en) * | 2016-08-16 | 2017-01-04 | 广东小天才科技有限公司 | Human-computer interaction learning method and device and user terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2006196091A (en) * | 2005-01-14 | 2006-07-27 | Matsushita Electric Ind Co Ltd | Video / audio signal recording and playback device
CN106911900A (en) * | 2017-04-06 | 2017-06-30 | 腾讯科技(深圳)有限公司 | Video dubbing method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101217638A (en) * | 2007-12-28 | 2008-07-09 | 深圳市迅雷网络技术有限公司 | Method, system and device for segmented downloading of video files
CN104333802A (en) * | 2013-12-13 | 2015-02-04 | 乐视网信息技术(北京)股份有限公司 | Video playing method and video player
CN105847966A (en) * | 2016-03-29 | 2016-08-10 | 乐视控股(北京)有限公司 | Terminal and video capturing and sharing method
CN105959773A (en) * | 2016-04-29 | 2016-09-21 | 魔方天空科技(北京)有限公司 | Multimedia file processing method and device
CN106293347A (en) * | 2016-08-16 | 2017-01-04 | 广东小天才科技有限公司 | Human-computer interaction learning method and device and user terminal

Cited By (37)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
WO2018130173A1 (en)*2017-01-162018-07-19腾讯科技(深圳)有限公司Dubbing method, terminal device, server and storage medium
WO2018184488A1 (en)*2017-04-062018-10-11腾讯科技(深圳)有限公司Video dubbing method and device
CN107809666A (en)*2017-10-262018-03-16费非Voice data merging method, device storage medium and processor
CN107872620B (en)*2017-11-222020-06-02北京小米移动软件有限公司 Video recording method and device, and computer-readable storage medium
CN107872620A (en)*2017-11-222018-04-03北京小米移动软件有限公司 Video recording method and device
CN108024073A (en)*2017-11-302018-05-11广州市百果园信息技术有限公司Video editing method, device and intelligent mobile terminal
US11935564B2 (en)2017-11-302024-03-19Bigo Technology Pte. Ltd.Video editing method and intelligent mobile terminal
CN108038185A (en)*2017-12-082018-05-15广州市百果园信息技术有限公司Video dynamic edit methods, device and intelligent mobile terminal
CN108337558A (en)*2017-12-262018-07-27努比亚技术有限公司Audio and video clipping method and terminal
CN108600825A (en)*2018-07-122018-09-28北京微播视界科技有限公司Select method, apparatus, terminal device and the medium of background music shooting video
US11030987B2 (en)2018-07-122021-06-08Beijing Microlive Vision Technology Co., Ltd.Method for selecting background music and capturing video, device, terminal apparatus, and medium
CN108600825B (en)*2018-07-122019-10-25北京微播视界科技有限公司Select method, apparatus, terminal device and the medium of background music shooting video
CN109361954A (en)*2018-11-022019-02-19腾讯科技(深圳)有限公司Method for recording, device, storage medium and the electronic device of video resource
CN109361954B (en)*2018-11-022021-03-26腾讯科技(深圳)有限公司Video resource recording method and device, storage medium and electronic device
CN110366032A (en)*2019-08-092019-10-22腾讯科技(深圳)有限公司Video data handling procedure, device and video broadcasting method, device
CN110868633A (en)*2019-11-272020-03-06维沃移动通信有限公司 A video processing method and electronic device
CN111212321A (en)*2020-01-102020-05-29上海摩象网络科技有限公司Video processing method, device, equipment and computer storage medium
JP2023506587A (en)*2020-07-232023-02-16北京字節跳動網絡技術有限公司 Video dubbing method, device, equipment and storage medium
KR102523768B1 (en)*2020-07-232023-04-20베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Video dubbing method, device, apparatus and storage medium
AU2021312196B2 (en)*2020-07-232023-07-27Beijing Bytedance Network Technology Co., Ltd.Video dubbing method. device, apparatus, and storage medium
WO2022017451A1 (en)*2020-07-232022-01-27北京字节跳动网络技术有限公司Video dubbing method. device, apparatus, and storage medium
CN111741231B (en)*2020-07-232022-02-22北京字节跳动网络技术有限公司Video dubbing method, device, equipment and storage medium
CN111741231A (en)*2020-07-232020-10-02北京字节跳动网络技术有限公司Video dubbing method, device, equipment and storage medium
US11817127B2 (en)2020-07-232023-11-14Beijing Bytedance Network Technology Co., Ltd.Video dubbing method, apparatus, device, and storage medium
JP7344395B2 (en)2020-07-232023-09-13北京字節跳動網絡技術有限公司 Video dubbing methods, devices, equipment and storage media
KR20220119743A (en)*2020-07-232022-08-30베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Video dubbing method, device, apparatus and storage medium
CN112565905B (en)*2020-10-242022-07-22北京博睿维讯科技有限公司Image locking operation method, system, intelligent terminal and storage medium
CN112565905A (en)*2020-10-242021-03-26北京博睿维讯科技有限公司Image locking operation method, system, intelligent terminal and storage medium device
CN112954390A (en)*2021-01-262021-06-11北京有竹居网络技术有限公司Video processing method, device, storage medium and equipment
CN112954390B (en)*2021-01-262023-05-09北京有竹居网络技术有限公司 Video processing method, device, storage medium and equipment
US12192540B2 (en)2021-01-262025-01-07Beijing Youzhuju Network Technology Co., Ltd.Video processing method and apparatus, storage medium, and device
WO2022179530A1 (en)*2021-02-242022-09-01花瓣云科技有限公司Video dubbing method, related device, and computer readable storage medium
CN113630630B (en)*2021-08-092023-08-15咪咕数字传媒有限公司 Method, device and equipment for processing video commentary dubbing information
CN113630630A (en)*2021-08-092021-11-09咪咕数字传媒有限公司 A method, device and equipment for processing video commentary dubbing information
CN114338579B (en)*2021-12-292024-02-09南京大众书网图书文化有限公司Method, equipment and medium for dubbing
CN114338579A (en)*2021-12-292022-04-12南京大众书网图书文化有限公司Method, apparatus, medium and program product for dubbing
CN114666516A (en)*2022-02-172022-06-24海信视像科技股份有限公司 Display device and streaming media file synthesis method

Also Published As

Publication number | Publication date
WO2018184488A1 (en) | 2018-10-11

Similar Documents

Publication | Publication Date | Title
CN106911900A (en)Video dubbing method and device
US10841661B2 (en)Interactive method, apparatus, and system in live room
CN106101736B (en)A kind of methods of exhibiting and system of virtual present
JP6186443B2 (en) Recording method, reproducing method, apparatus, terminal, system, program, and recording medium
CN105788612B (en)A kind of method and apparatus detecting sound quality
CN106101756A (en)Barrage display packing, barrage adding method, Apparatus and system
CN107438200A (en)The method and apparatus of direct broadcasting room present displaying
CN104967900A (en)Video generating method and video generating device
CN106598996A (en)Multi-media poster generation method and device
CN105828145A (en)Interaction method and interaction device
CN106331826A (en)Method, device and system for setting live broadcast template and video mode
CN104796743A (en)Content item display system, method and device
CN106488296B (en)A kind of method and apparatus showing video barrage
CN104036536B (en)The generation method and device of a kind of stop-motion animation
CN103699309B (en)A kind of method for recording of synchronization video, device and mobile terminal
CN107333162A (en)A kind of method and apparatus for playing live video
CN108038185A (en)Video dynamic edit methods, device and intelligent mobile terminal
CN105606117A (en)Navigation prompting method and navigation prompting apparatus
CN104159136A (en)Interaction information acquisition method, terminal, server and system
CN108024123A (en)A kind of live video processing method, device, terminal device and server
CN103581762A (en)Method, device and terminal equipment for playing network videos
CN108184143A (en)Obtain the method and device of resource
CN103458286A (en)Television channel switching method and device
CN105608095B (en)Multimedia playing method and device and mobile terminal
CN106506437A (en)A kind of audio data processing method, and equipment

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2017-06-30

