Summary of the invention
In view of this, the present disclosure proposes a video clipping method and device.
According to one aspect of the disclosure, a video clipping method is provided, comprising:
determining a candidate segment from a video to be processed;
determining a scene conversion time point in the video to be processed;
determining, according to the scene conversion time point, a time range of an editing segment corresponding to the candidate segment.
In one possible implementation, determining the candidate segment from the video to be processed comprises:
determining a watching focus segment and/or a segment comprising a specified object from the video to be processed;
determining the candidate segment according to the watching focus segment and/or the segment comprising the specified object.
In one possible implementation, determining the candidate segment according to the watching focus segment and/or the segment comprising the specified object comprises:
if the watching focus segment and the segment comprising the specified object have an overlapping time period, merging the overlapping watching focus segment and the segment comprising the specified object to determine the candidate segment.
In one possible implementation, determining the scene conversion time point in the video to be processed comprises:
determining a shot change time point in the video to be processed;
determining a time range without subtitles in the video to be processed;
taking a shot change time point within the time range without subtitles as the scene conversion time point.
In one possible implementation, determining, according to the scene conversion time point, the time range of the editing segment corresponding to the candidate segment comprises:
if the duration of the candidate segment is greater than a first duration, determining in the candidate segment a first time point whose distance from the start time point of the candidate segment is the first duration;
taking the last scene conversion time point before the first time point as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, determining, according to the scene conversion time point, the time range of the editing segment corresponding to the candidate segment comprises:
if the duration of the candidate segment is greater than or equal to a second duration and less than or equal to the first duration, determining in the candidate segment a second time point whose distance from the start time point of the candidate segment is the second duration, and a third time point whose distance from the start time point of the candidate segment is the first duration, wherein the second duration is less than the first duration, and the second time point is earlier than the third time point;
determining the end time point of the editing segment corresponding to the candidate segment according to a scene conversion time point between the second time point and the third time point.
In one possible implementation, determining the end time point of the editing segment corresponding to the candidate segment according to the scene conversion time point between the second time point and the third time point comprises:
taking, among the scene conversion time points between the second time point and the third time point, the scene conversion time point with the smallest distance from the end time point of the candidate segment as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, determining, according to the scene conversion time point, the time range of the editing segment corresponding to the candidate segment comprises:
if the duration of the candidate segment is less than the second duration, determining a fourth time point whose distance from the start time point of the candidate segment is the second duration, wherein the fourth time point is later than the start time point of the candidate segment;
taking the first scene conversion time point after the fourth time point as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, the method further comprises:
taking the ratio of the maximum expected duration of a target video to the number of candidate segments as the first duration.
In one possible implementation, the method further comprises:
taking the ratio of the minimum expected duration of the target video to the number of candidate segments as the second duration.
In one possible implementation, after determining the time range of the editing segment corresponding to the candidate segment, the method further comprises:
if there are multiple candidate segments, merging the editing segments corresponding to the candidate segments to obtain the target video;
if there is one candidate segment, taking the editing segment corresponding to the candidate segment as the target video.
According to another aspect of the present disclosure, a video clipping device is provided, comprising:
a first determining module, configured to determine a candidate segment from a video to be processed;
a second determining module, configured to determine a scene conversion time point in the video to be processed;
a third determining module, configured to determine, according to the scene conversion time point, a time range of an editing segment corresponding to the candidate segment.
In one possible implementation, the first determining module comprises:
a first determining submodule, configured to determine a watching focus segment and/or a segment comprising a specified object from the video to be processed;
a second determining submodule, configured to determine the candidate segment according to the watching focus segment and/or the segment comprising the specified object.
In one possible implementation, the second determining submodule is configured to:
if the watching focus segment and the segment comprising the specified object have an overlapping time period, merge the overlapping watching focus segment and the segment comprising the specified object to determine the candidate segment.
In one possible implementation, the second determining module comprises:
a third determining submodule, configured to determine a shot change time point in the video to be processed;
a fourth determining submodule, configured to determine a time range without subtitles in the video to be processed;
a fifth determining submodule, configured to take a shot change time point within the time range without subtitles as the scene conversion time point.
In one possible implementation, the third determining module comprises:
a sixth determining submodule, configured to, if the duration of the candidate segment is greater than a first duration, determine in the candidate segment a first time point whose distance from the start time point of the candidate segment is the first duration;
a seventh determining submodule, configured to take the last scene conversion time point before the first time point as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, the third determining module comprises:
an eighth determining submodule, configured to, if the duration of the candidate segment is greater than or equal to a second duration and less than or equal to the first duration, determine in the candidate segment a second time point whose distance from the start time point of the candidate segment is the second duration, and a third time point whose distance from the start time point of the candidate segment is the first duration, wherein the second duration is less than the first duration, and the second time point is earlier than the third time point;
a ninth determining submodule, configured to determine the end time point of the editing segment corresponding to the candidate segment according to a scene conversion time point between the second time point and the third time point.
In one possible implementation, the ninth determining submodule is configured to:
take, among the scene conversion time points between the second time point and the third time point, the scene conversion time point with the smallest distance from the end time point of the candidate segment as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, the third determining module comprises:
a tenth determining submodule, configured to, if the duration of the candidate segment is less than the second duration, determine a fourth time point whose distance from the start time point of the candidate segment is the second duration, wherein the fourth time point is later than the start time point of the candidate segment;
an eleventh determining submodule, configured to take the first scene conversion time point after the fourth time point as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, the device further comprises:
a fourth determining module, configured to take the ratio of the maximum expected duration of a target video to the number of candidate segments as the first duration.
In one possible implementation, the device further comprises:
a fifth determining module, configured to take the ratio of the minimum expected duration of the target video to the number of candidate segments as the second duration.
In one possible implementation, the device further comprises:
a sixth determining module, configured to, if there are multiple candidate segments, merge the editing segments corresponding to the candidate segments to obtain the target video; and if there is one candidate segment, take the editing segment corresponding to the candidate segment as the target video.
According to another aspect of the present disclosure, a video clipping device is provided, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the above method.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the above method.
According to the video clipping method and device of the aspects of the disclosure, a candidate segment is determined from a video to be processed, a scene conversion time point in the video to be processed is determined, and a time range of an editing segment corresponding to the candidate segment is determined according to the scene conversion time point. Video clipping is thereby performed on the basis of scene conversion time points, so that the integrity of the video content of the editing segment clipped from the video can be guaranteed, avoiding a sense of jumping or truncation for the user.
Other features and aspects of the disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Specific embodiment
Various exemplary embodiments, features, and aspects of the disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings indicate elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The dedicated word "exemplary" herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" should not be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description to better illustrate the disclosure. Those skilled in the art will understand that the disclosure can likewise be implemented without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
Fig. 1 shows a flowchart of the video clipping method according to an embodiment of the disclosure. As shown in Fig. 1, the method includes steps S11 to S13.
In step S11, a candidate segment is determined from a video to be processed.
In the present embodiment, the video to be processed can be any video that requires video clipping. For example, the video to be processed can be a film video, a TV series video, or the like.
In one possible implementation, the N segments with the highest playback volume in the video to be processed can be taken as candidate segments, where N is a positive integer.
In step S12, a scene conversion time point in the video to be processed is determined.
Here, a scene conversion time point in the video to be processed can refer to the time point corresponding to a frame at which a scene conversion occurs in the video to be processed.
In step S13, a time range of an editing segment corresponding to the candidate segment is determined according to the scene conversion time point.
In one possible implementation, determining the time range of the editing segment corresponding to the candidate segment according to the scene conversion time point comprises: determining the end time point of the editing segment corresponding to the candidate segment according to the scene conversion time point.
In one possible implementation, the start time point of the candidate segment can be taken as the start time point of the editing segment corresponding to the candidate segment.
In another possible implementation, the last scene conversion time point before the start time point of the candidate segment can be taken as the start time point of the editing segment corresponding to the candidate segment.
In another possible implementation, the last shot change time point before the start time point of the candidate segment can be taken as the start time point of the editing segment corresponding to the candidate segment.
In another possible implementation, the first scene conversion time point after the start time point of the candidate segment can be taken as the start time point of the editing segment corresponding to the candidate segment.
In another possible implementation, the first shot change time point after the start time point of the candidate segment can be taken as the start time point of the editing segment corresponding to the candidate segment.
In the present embodiment, a candidate segment is determined from a video to be processed, a scene conversion time point in the video to be processed is determined, and the time range of the editing segment corresponding to the candidate segment is determined according to the scene conversion time point. Video clipping is thereby performed on the basis of scene conversion time points, so that the integrity of the video content of the editing segment clipped from the video can be guaranteed, avoiding a sense of jumping or truncation for the user.
Fig. 2 shows an exemplary flowchart of step S12 of the video clipping method according to an embodiment of the disclosure. As shown in Fig. 2, step S12 may include steps S121 to S123.
In step S121, a shot change time point in the video to be processed is determined.
In the present embodiment, the shot change time points in the video to be processed can be determined using related techniques. For example, FFmpeg can be used to determine the shot change time points in the video to be processed. In one possible implementation, the video frame corresponding to a shot change time point is a key frame.
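The FFmpeg-based determination mentioned above can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosure's implementation: it assumes FFmpeg's `select` filter with the built-in `scene` change score and the `showinfo` filter, whose log lines (on stderr) carry a `pts_time` value for each selected frame; the threshold 0.4, the command string, and the function name are illustrative.

```python
import re

# Illustrative ffmpeg invocation, run externally; frames whose scene-change
# score exceeds 0.4 are selected, and showinfo logs one line per such frame:
#   ffmpeg -i input.mp4 -vf "select='gt(scene,0.4)',showinfo" -f null -
def parse_shot_change_points(showinfo_stderr):
    """Extract pts_time values (seconds) from showinfo log lines."""
    return [float(m.group(1))
            for m in re.finditer(r"pts_time:([0-9]+\.?[0-9]*)", showinfo_stderr)]

# A shortened sample of what showinfo prints for two selected frames:
sample_log = (
    "[Parsed_showinfo_1 @ 0x55] n:0 pts:1536 pts_time:12.48 ...\n"
    "[Parsed_showinfo_1 @ 0x55] n:1 pts:4680 pts_time:37.63 ...\n"
)
print(parse_shot_change_points(sample_log))  # [12.48, 37.63]
```

The resulting list of time points (in seconds) can then serve as the shot change time points of step S121.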
In step S122, a time range without subtitles in the video to be processed is determined.
In the present embodiment, the time ranges without subtitles in the video to be processed can be determined according to the video frames that do not include subtitles.
It should be noted that the present embodiment does not limit the execution order of step S121 and step S122, as long as both are executed before step S123. For example, step S121 can be executed first and then step S122, or step S122 can be executed first and then step S121.
In step S123, a shot change time point within the time range without subtitles is taken as the scene conversion time point.
Fig. 7 shows a schematic diagram of the video clipping method according to an embodiment of the disclosure. As shown in Fig. 7, if a shot change time point is within a time range without subtitles, the shot change time point can be taken as a scene conversion time point.
Fig. 3 shows an exemplary flowchart of step S13 of the video clipping method according to an embodiment of the disclosure. As shown in Fig. 3, step S13 may include steps S131 and S132.
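Step S123 reduces to an interval-membership check: keep a shot change time point only if some subtitle-free range contains it. A minimal sketch, assuming times in seconds and ranges as (start, end) tuples; the names are illustrative, not from the disclosure:

```python
def scene_conversion_points(shot_changes, no_subtitle_ranges):
    """Keep only the shot change time points inside a subtitle-free range."""
    return [t for t in shot_changes
            if any(start <= t <= end for start, end in no_subtitle_ranges)]

shot_changes = [3.0, 12.5, 40.0, 55.2]
no_subtitle_ranges = [(10.0, 20.0), (50.0, 60.0)]
# 3.0 and 40.0 fall during subtitles, so only 12.5 and 55.2 survive:
print(scene_conversion_points(shot_changes, no_subtitle_ranges))  # [12.5, 55.2]
```

Restricting scene conversion points to subtitle-free ranges is what keeps a cut from splitting a line of dialogue mid-sentence.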
In step S131, if the duration of the candidate segment is greater than a first duration, a first time point whose distance from the start time point of the candidate segment is the first duration is determined in the candidate segment.
In step S132, the last scene conversion time point before the first time point is taken as the end time point of the editing segment corresponding to the candidate segment.
In Fig. 7, the maximum length indicates the first duration, and the minimum length indicates the second duration. The duration of the first candidate segment is greater than the first duration, so a first time point whose distance from the start time point of the first candidate segment is the first duration can be determined in the first candidate segment, and the last scene conversion time point before the first time point can be taken as the end time point of the editing segment corresponding to the candidate segment.
In this example, if the duration of the candidate segment is greater than the first duration, a first time point whose distance from the start time point of the candidate segment is the first duration is determined in the candidate segment, and the last scene conversion time point before the first time point is taken as the end time point of the editing segment corresponding to the candidate segment. This prevents the duration of the editing segment from exceeding the first duration, so that the duration of the target video obtained from the editing segments does not exceed the maximum expected duration. Moreover, because the video clipping is performed on the basis of scene conversion time points, the integrity of the video content of the editing segment can be guaranteed, avoiding a sense of jumping or truncation for the user.
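The over-length case of steps S131 and S132 can be sketched as a single lookup: with the scene conversion points sorted, the end time point is the last one strictly before the first time point. A sketch under assumed data shapes (seconds as floats, a sorted list of scene conversion points); the function name is illustrative:

```python
import bisect

def end_point_when_too_long(start, first_duration, scene_points):
    """Last scene conversion time point before start + first_duration.

    scene_points must be sorted ascending; returns None if no such point exists.
    """
    first_time_point = start + first_duration  # step S131
    i = bisect.bisect_left(scene_points, first_time_point)
    return scene_points[i - 1] if i > 0 else None  # step S132

# Candidate starts at 10 s with a 60 s first duration, so the first time
# point is at 70 s; the last scene conversion before 70 s is at 66.0 s.
print(end_point_when_too_long(10.0, 60.0, [22.0, 48.5, 66.0, 75.0]))  # 66.0
```

The `None` case (no scene conversion before the first time point) would need a fallback the disclosure does not specify, such as cutting at the first time point itself.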
Fig. 4 shows another exemplary flowchart of step S13 of the video clipping method according to an embodiment of the disclosure. As shown in Fig. 4, step S13 may include steps S133 and S134.
In step S133, if the duration of the candidate segment is greater than or equal to a second duration and less than or equal to the first duration, a second time point whose distance from the start time point of the candidate segment is the second duration, and a third time point whose distance from the start time point of the candidate segment is the first duration, are determined in the candidate segment, wherein the second duration is less than the first duration, and the second time point is earlier than the third time point.
In step S134, the end time point of the editing segment corresponding to the candidate segment is determined according to a scene conversion time point between the second time point and the third time point.
In one possible implementation, determining the end time point of the editing segment corresponding to the candidate segment according to the scene conversion time point between the second time point and the third time point comprises: taking, among the scene conversion time points between the second time point and the third time point, the scene conversion time point with the smallest distance from the end time point of the candidate segment as the end time point of the editing segment corresponding to the candidate segment. As shown in Fig. 7, for the third candidate segment, the scene conversion time point, among those between the second time point and the third time point, with the smallest distance from the end time point of the candidate segment can be taken as the end time point of the editing segment corresponding to the candidate segment.
In another possible implementation, if the duration of the candidate segment is greater than or equal to the second duration and less than or equal to the first duration, the last scene conversion time point before the end time point of the candidate segment is taken as the end time point of the editing segment corresponding to the candidate segment. As shown in Fig. 7, for the third candidate segment and the fourth candidate segment, the last scene conversion time point before the end time point of the candidate segment can be taken as the end time point of the editing segment corresponding to the candidate segment.
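The in-range case of steps S133 and S134 (first implementation) amounts to a nearest-point search inside the window [start + second duration, start + first duration]. A sketch under assumed data shapes; the names are illustrative:

```python
def end_point_within_window(start, end, second_duration, first_duration, scene_points):
    """Scene conversion point within [start + second_duration, start + first_duration]
    closest to the candidate segment's end time point, or None if none exists."""
    second_tp = start + second_duration   # second time point (step S133)
    third_tp = start + first_duration     # third time point (step S133)
    window = [t for t in scene_points if second_tp <= t <= third_tp]
    return min(window, key=lambda t: abs(t - end)) if window else None  # step S134

# Candidate 100 s..144 s, second duration 36 s, first duration 60 s:
# the window is [136 s, 160 s]; of the points 140 and 150 in it, 140 is
# closest to the candidate's end at 144 s.
print(end_point_within_window(100.0, 144.0, 36.0, 60.0, [120.0, 140.0, 150.0, 170.0]))  # 140.0
```

Because the window is bounded by the second and third time points, the resulting editing segment is guaranteed to be no shorter than the second duration and no longer than the first duration.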
Fig. 5 shows another exemplary flowchart of step S13 of the video clipping method according to an embodiment of the disclosure. As shown in Fig. 5, step S13 may include steps S135 and S136.
In step S135, if the duration of the candidate segment is less than the second duration, a fourth time point whose distance from the start time point of the candidate segment is the second duration is determined, wherein the fourth time point is later than the start time point of the candidate segment.
In step S136, the first scene conversion time point after the fourth time point is taken as the end time point of the editing segment corresponding to the candidate segment.
In this example, if the duration of the candidate segment is less than the second duration, a fourth time point whose distance from the start time point of the candidate segment is the second duration is determined, wherein the fourth time point is later than the start time point of the candidate segment, and the first scene conversion time point after the fourth time point is taken as the end time point of the editing segment corresponding to the candidate segment. This prevents the duration of the editing segment from being less than the second duration, so that the duration of the target video obtained from the editing segments is not less than the minimum expected duration. Moreover, because the video clipping is performed on the basis of scene conversion time points, the integrity of the video content of the editing segment can be guaranteed, avoiding a sense of jumping or truncation for the user.
In another possible implementation, if the duration of the candidate segment is less than the second duration, the candidate segment is taken as the editing segment corresponding to the candidate segment. As shown in Fig. 7, the duration of the second candidate segment is less than the second duration, so the candidate segment can be taken as the editing segment corresponding to the candidate segment.
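The under-length case of steps S135 and S136 mirrors the over-length case, but searches forward: the end time point is the first scene conversion point strictly after start + second duration. A sketch under the same assumed data shapes; the function name is illustrative:

```python
import bisect

def end_point_when_too_short(start, second_duration, scene_points):
    """First scene conversion time point after start + second_duration.

    scene_points must be sorted ascending; returns None if no point follows.
    """
    fourth_time_point = start + second_duration  # step S135
    i = bisect.bisect_right(scene_points, fourth_time_point)
    return scene_points[i] if i < len(scene_points) else None  # step S136

# Candidate starts at 200 s and is shorter than the 36 s second duration;
# the fourth time point is 236 s, and the first scene conversion after it
# is at 241.5 s.
print(end_point_when_too_short(200.0, 36.0, [190.0, 230.0, 241.5, 260.0]))  # 241.5
```

Extending the segment forward to the next scene conversion point, rather than keeping it short, is what keeps the merged target video from falling under the minimum expected duration.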
In one possible implementation, the method further comprises: taking the ratio of the maximum expected duration of the target video to the number of candidate segments as the first duration. For example, if the maximum expected duration of the target video is 5 minutes and the number of candidate segments is 5, the first duration is 60 seconds.
In this implementation, taking the ratio of the maximum expected duration of the target video to the number of candidate segments as the first duration prevents the duration of the generated target video from exceeding the maximum expected duration.
In one possible implementation, the method further comprises: taking the ratio of the minimum expected duration of the target video to the number of candidate segments as the second duration. For example, if the minimum expected duration of the target video is 3 minutes and the number of candidate segments is 5, the second duration is 36 seconds.
In this implementation, taking the ratio of the minimum expected duration of the target video to the number of candidate segments as the second duration prevents the duration of the generated target video from being less than the minimum expected duration.
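The two ratios above are simple arithmetic; the numbers below reproduce the worked examples from the text (5-minute maximum and 3-minute minimum expected durations, 5 candidate segments). The function name is illustrative:

```python
def clip_durations(max_expected_s, min_expected_s, num_candidates):
    """First and second durations as the ratios of the maximum and minimum
    expected target-video durations to the number of candidate segments."""
    return max_expected_s / num_candidates, min_expected_s / num_candidates

first, second = clip_durations(5 * 60, 3 * 60, 5)
print(first, second)  # 60.0 36.0
```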
In one possible implementation, determining the candidate segment from the video to be processed comprises: determining a watching focus segment and a segment comprising a specified object from the video to be processed; and determining the candidate segment according to the watching focus segment and the segment comprising the specified object.
In another possible implementation, determining the candidate segment from the video to be processed comprises: determining a watching focus segment from the video to be processed; and determining the candidate segment according to the watching focus segment.
In another possible implementation, determining the candidate segment from the video to be processed comprises: determining a segment comprising a specified object from the video to be processed; and determining the candidate segment according to the segment comprising the specified object.
Fig. 6 shows an exemplary flowchart of step S11 of the video clipping method according to an embodiment of the disclosure. As shown in Fig. 6, step S11 may include steps S111 and S112.
In step S111, a watching focus segment and/or a segment comprising a specified object are determined from the video to be processed.
In one possible implementation, the watching focus segments of the video to be processed can be the segments of splendid content marked in the video to be processed by the uploader of the video to be processed.
In another possible implementation, the M segments with the highest playback volume in the video to be processed can be taken as the watching focus segments of the video to be processed, where M is a positive integer.
It should be noted that, although the manner of determining watching focus segments from the video to be processed is described with the two implementations above, those skilled in the art will understand that the disclosure should not be limited thereto. Those skilled in the art can flexibly set the manner of determining watching focus segments from the video to be processed according to the demands of the actual application scenario and/or personal preference.
In one possible implementation, the specified object may include an object determined according to a clipping request. In this implementation, the user can determine the specified object according to the actual clipping demand.
In another possible implementation, the specified object may include a popular object. For example, popular objects can include popular stars and the like.
In step S112, the candidate segment is determined according to the watching focus segment and/or the segment comprising the specified object.
In one possible implementation, determining the candidate segment according to the watching focus segment and the segment comprising the specified object comprises: if the watching focus segment and the segment comprising the specified object have an overlapping time period, merging the overlapping watching focus segment and the segment comprising the specified object to determine the candidate segment. For example, if the time range of a watching focus segment is 5 seconds to 60 seconds and the time range of a segment comprising the specified object is 20 seconds to 80 seconds, the watching focus segment and the segment comprising the specified object have an overlapping time period. Merging the watching focus segment and the segment comprising the specified object determines that the time range of the candidate segment is 5 seconds to 80 seconds.
In one possible implementation, determining the candidate segment according to the watching focus segment and the segment comprising the specified object comprises: if a watching focus segment has no overlapping time period with any segment comprising the specified object, taking the watching focus segment as a candidate segment.
In one possible implementation, determining the candidate segment according to the watching focus segment and the segment comprising the specified object comprises: if a segment comprising the specified object has no overlapping time period with any watching focus segment, taking the segment comprising the specified object as a candidate segment.
In the present embodiment, candidate segments are determined according to the watching focus segments and the segments comprising the specified object, which can solve the problem of an insufficient number of watching focus segments.
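The three implementations above are a standard interval-merge: overlapping segments fuse into one candidate, and non-overlapping segments become candidates on their own. A sketch under assumed data shapes ((start, end) tuples in seconds); the names are illustrative:

```python
def merge_candidate_segments(watching_focus, with_object):
    """Merge overlapping (start, end) intervals from both lists into
    candidate segments; non-overlapping intervals stay candidates alone."""
    intervals = sorted(watching_focus + with_object)
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:  # overlapping time period
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# The example from the text: the 5-60 s watching focus segment overlaps the
# 20-80 s specified-object segment, yielding one 5-80 s candidate; the
# 100-110 s segment overlaps nothing and stays a candidate by itself.
print(merge_candidate_segments([(5, 60), (100, 110)], [(20, 80)]))
# [(5, 80), (100, 110)]
```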
In one possible implementation, after the time range of the editing segment corresponding to the candidate segment is determined, the method further comprises: if there are multiple candidate segments, merging the editing segments corresponding to the candidate segments to obtain the target video; if there is one candidate segment, taking the editing segment corresponding to the candidate segment as the target video.
In another possible implementation, the editing segment corresponding to each candidate segment can also be taken as a target video in its own right.
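One common way to merge the editing segments into a single target video is FFmpeg's concat demuxer. The sketch below only builds the extraction commands and the concat list; actually running them is left to the caller, and the file names, the stream-copy flags, and the assumption that ffmpeg is available are all illustrative, not part of the disclosure:

```python
def concat_plan(source, editing_segments, output):
    """Build an ffmpeg trim-and-concat plan for the editing segments.

    Returns one extraction command per (start, end) segment, the concat
    list-file contents, and the final concat command.
    """
    extract = [
        ["ffmpeg", "-ss", str(start), "-to", str(end), "-i", source,
         "-c", "copy", f"clip_{i}.mp4"]
        for i, (start, end) in enumerate(editing_segments)
    ]
    listing = "".join(f"file 'clip_{i}.mp4'\n" for i in range(len(editing_segments)))
    concat = ["ffmpeg", "-f", "concat", "-i", "list.txt", "-c", "copy", output]
    return extract, listing, concat

extract, listing, concat = concat_plan("movie.mp4", [(5.0, 80.0), (120.0, 180.0)], "target.mp4")
print(listing)
```

Stream copy (`-c copy`) avoids re-encoding, but cuts then snap to key frames, which fits the implementation above where shot change time points correspond to key frames.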
Fig. 8 shows a block diagram of the video clipping device according to an embodiment of the disclosure. As shown in Fig. 8, the device includes: a first determining module 81, configured to determine a candidate segment from a video to be processed; a second determining module 82, configured to determine a scene conversion time point in the video to be processed; and a third determining module 83, configured to determine, according to the scene conversion time point, a time range of an editing segment corresponding to the candidate segment.
Fig. 9 shows an exemplary block diagram of the video clipping device according to an embodiment of the disclosure. As shown in Fig. 9:
In one possible implementation, the first determining module 81 comprises: a first determining submodule 811, configured to determine a watching focus segment and/or a segment comprising a specified object from the video to be processed; and a second determining submodule 812, configured to determine the candidate segment according to the watching focus segment and/or the segment comprising the specified object.
In one possible implementation, the second determining submodule 812 is configured to: if the watching focus segment and the segment comprising the specified object have an overlapping time period, merge the overlapping watching focus segment and the segment comprising the specified object to determine the candidate segment.
In one possible implementation, the second determining module 82 comprises: a third determining submodule 821, configured to determine a shot change time point in the video to be processed; a fourth determining submodule 822, configured to determine a time range without subtitles in the video to be processed; and a fifth determining submodule 823, configured to take a shot change time point within the time range without subtitles as the scene conversion time point.
In one possible implementation, the third determining module 83 comprises: a sixth determining submodule 831, configured to, if the duration of the candidate segment is greater than a first duration, determine in the candidate segment a first time point whose distance from the start time point of the candidate segment is the first duration; and a seventh determining submodule 832, configured to take the last scene conversion time point before the first time point as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, third determining module 83 includes: the 8th determining submodule 833, if for waitingThe duration of selected episode is greater than or equal to the second duration and is less than or equal to the first duration, then the determining and candidate piece in candidate segmentThe second time point of the second duration of distance is put at the beginning of section, and with candidate segment at the beginning of put the first duration of distanceThird time point, wherein the second duration is less than the first duration, and the second time point is earlier than third time point;9th determines submoduleBlock 834, for determining that candidate segment is corresponding and cutting according to the scene conversion time point between the second time point and third time pointCollect the end time point of segment.
In one possible implementation, the ninth determining submodule 834 is configured to: use, among the scene conversion time points between the second time point and the third time point, the scene conversion time point closest to the end time point of the candidate segment as the end time point of the editing segment corresponding to the candidate segment.
In one possible implementation, the third determining module 83 includes: a tenth determining submodule 835, configured to, if the duration of a candidate segment is less than the second duration, determine a fourth time point that is the second duration away from the start time point of the candidate segment, wherein the fourth time point is later than the start time point of the candidate segment; and an eleventh determining submodule 836, configured to use the first scene conversion time point after the fourth time point as the end time point of the editing segment corresponding to the candidate segment.
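The three implementations above (submodules 831–836) are alternative branches of one end-point selection rule, keyed on how the candidate's duration compares to the first and second durations. A hypothetical combined sketch, where `d1` and `d2` stand for the first and second durations (`d2 < d1`) and segments are `(start, end)` pairs in seconds:

```python
def clip_end_point(candidate, scene_points, d1, d2):
    """Select the end time point of the editing segment for one candidate
    segment from the scene conversion time points (hypothetical sketch)."""
    start, end = candidate
    duration = end - start
    if duration > d1:
        # Long candidate: last scene conversion point before start + d1
        before = [t for t in scene_points if t < start + d1]
        return max(before) if before else None
    if duration >= d2:
        # Medium candidate: among scene conversion points between
        # start + d2 and start + d1, pick the one closest to the
        # candidate's own end time point
        between = [t for t in scene_points if start + d2 <= t <= start + d1]
        return min(between, key=lambda t: abs(t - end)) if between else None
    # Short candidate: first scene conversion point after start + d2
    after = [t for t in scene_points if t > start + d2]
    return min(after) if after else None
```

Returning `None` when no scene conversion point qualifies is an assumption; the disclosure does not specify a fallback for that case.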
In one possible implementation, the device further includes: a fourth determining module 84, configured to use the ratio of the maximum expected duration of the target video to the number of candidate segments as the first duration.
In one possible implementation, the device further includes: a fifth determining module 85, configured to use the ratio of the minimum expected duration of the target video to the number of candidate segments as the second duration.
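The two ratios computed by modules 84 and 85 bound each editing segment's length so that the assembled target video lands between its minimum and maximum expected duration. A minimal sketch (function name assumed):

```python
def clip_duration_bounds(max_expected, min_expected, num_candidates):
    """First and second durations: the target video's maximum/minimum
    expected duration divided evenly over the candidate segments."""
    first_duration = max_expected / num_candidates
    second_duration = min_expected / num_candidates
    return first_duration, second_duration
```

For example, a target video expected to run 60–120 s built from 4 candidate segments gives a first duration of 30 s and a second duration of 15 s per segment.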
In one possible implementation, the device further includes: a sixth determining module 86, configured to, if there are multiple candidate segments, merge the editing segments corresponding to the candidate segments to obtain the target video; and if there is a single candidate segment, use the editing segment corresponding to that candidate segment as the target video.
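The final assembly step above can be sketched as follows; ordering the merged editing segments by their start time is an assumption, as the disclosure does not specify the merge order.

```python
def assemble_target_video(editing_segments):
    """Merge the editing segments into the target video timeline
    (hypothetical sketch; segments are (start, end) pairs)."""
    if len(editing_segments) == 1:
        # A single candidate: its editing segment is the target video
        return [editing_segments[0]]
    # Multiple candidates: concatenate the editing segments in time order
    return sorted(editing_segments)
```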
In this embodiment, candidate segments are determined from the video to be processed, scene conversion time points in the video to be processed are determined, and the time ranges of the editing segments corresponding to the candidate segments are determined according to the scene conversion time points. Video clipping is thus performed based on scene conversion time points, which preserves the integrity of the video content of each editing segment clipped from the video and avoids giving the user a sense of jumping or truncation.
Figure 10 is a block diagram of a device 800 for video clipping according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Figure 10, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions, so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phone book data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power supply component 806 provides power to the various components of the device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the device 800. For example, the sensor component 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. The sensor component 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra wide band (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above methods.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example, the memory 804 including computer program instructions, which can be executed by the processor 820 of the device 800 to complete the above methods.
Figure 11 is a block diagram of a device 1900 for video clipping according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Figure 11, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. An application program stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions, so as to perform the above methods.
The device 1900 may also include a power supply component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example, the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the device 1900 to complete the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium containing computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
A computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), is personalized by utilizing state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or their technological improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.