Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a diagram of an application environment of a video generation method in one embodiment. Referring to fig. 1, the video generation method is applied to a video generation system. The video generation system includes a video editing server 110 and a video storage server 120. The video editing server 110 and the video storage server 120 are connected via a network. The video storage server 120 may receive a target video request sent by the video editing server 110, analyze the target video request, obtain start and stop time points of a target video to be generated in an original video, determine a target segment matched with the start and stop time points from video segments of the original video, generate an index file corresponding to the target video according to an index address of the target segment, allocate a video identifier to the target video, and associate the index file with the video identifier. The video editing server 110 and the video storage server 120 may be integrated together or implemented as a server cluster composed of a plurality of servers.
In some embodiments, the video generation method provided by the present application may also be executed by a terminal, where a client supporting video generation function is installed and run on the terminal, and the client may implement the video generation method provided by the present application when running. In other embodiments, the video generation method provided by the present application may also be executed by a terminal and a server together.
In one embodiment, as shown in FIG. 2, a video generation method is provided. The embodiment is mainly illustrated by applying the method to the video storage server 120 in fig. 1. Referring to fig. 2, the video generation method specifically includes the following steps:
S202, receiving a target video request.
Wherein the target video request is a request for generating a target video. The target video to be generated is a video within the original video, and the original video may be a complete piece of video data, such as a movie, a TV episode, or a recorded video. In one embodiment, according to the playing duration of the video, the original video may be referred to as a long video, and the target video generated from the original video may be referred to as a short video.
Specifically, the video storage server may receive a target video request sent by the video editing server. The video storage server stores a large amount of media resources, such as copyrighted video data, which includes original video and may also include target video generated from the original video. The video editing server can inquire some original video from the video storage server, and then newly add a target video request for generating a target video according to the inquired original video.
In a specific application scenario, an editor may pull a video asset from a video storage server through a video editing server. For example, an editor may input an original video name through the video editing server, and query and display an original video stored on the video storage server and related to the video name through the video editing server. Optionally, the video editing server may further query a target video generated according to the original video, and display a corresponding target video list according to the queried target video, where video information related to the target video may be presented in the target video list, and the video information includes a start-stop time point of the target video based on the original video, a video identifier of the target video, a title of the target video, and the like. After inquiring the original video related to the input original video name, an editor can designate a starting and stopping time point of a target video to be generated based on the original video through the video editing server, and submit a corresponding target video request to the video storage server according to the starting and stopping time point.
In one embodiment, besides specifying the start-stop time point of the target video to be generated in the original video, an editor may also specify the video identifier of the original video through the video editing server, so that the video storage server may locally find the original video corresponding to the video identifier. Certainly, in another application scenario, after querying an original video, an editor may enter an editing page corresponding to the original video, specify a start-stop time point in the editing page corresponding to the original video, and automatically submit a target video request to a video storage server after confirmation, so that the video storage server may detect that the target video request is submitted in the editing page corresponding to the original video, thereby automatically analyzing a video identifier corresponding to the original video on which the target video to be generated is based.
And S204, analyzing the target video request to obtain the starting and stopping time points of the target video to be generated in the original video.
Specifically, the target video request may carry the start-stop time point of the target video to be generated in the original video, and therefore, after receiving the target video request sent by the video editing server, the video storage server analyzes the target video request to obtain the start-stop time point of the target video to be generated in the original video.
The start-stop time points comprise a start time point and an end time point; the start time point is the time point at which the target video starts playing in the original video, and the end time point is the time point at which the target video finishes playing in the original video. It can be seen that the start and stop time points determine which segment of the original video constitutes the target video.
For example, an editor pulls an original video from a video storage server through a video editing server, and the playing duration of the original video is 02:05:20. If a target video is to be generated from the video data in the time interval from 01:01:25 to 01:05:30, the editor can designate, through the video editing server, the start time point of the target video in the original video as 01:01:25 and the end time point as 01:05:30, and generate a target video request according to the designated start and stop time points. After receiving the target video request, the video storage server can analyze the target video request to obtain the start and stop time points of the target video in the original video.
S206, determining the target fragment matched with the starting and stopping time points from the video fragments of the original video.
Where a video fragment is a unit of storage for video data. In order to reduce the time consumed to start playing a video, a video storage server generally transcodes a complete piece of video data into small video fragments and stores all the divided fragments. The video storage server also writes the playing duration and index address of each video fragment into an index file, so that when the video is played, the fragments can be requested in sequence according to the index file; playback can then start immediately while the whole video is still played through. The video fragments may be video stream files in the TS (Transport Stream) format, and the index file may be, for example, an M3U8 file.
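As an illustrative sketch (the playlist content, URLs, and function name below are invented for illustration and not taken from this application), an M3U8-style index file recording the playing duration and index address of each video fragment might be parsed into (playing duration, index address) pairs as follows:

```python
# Hedged sketch: parse a minimal M3U8-style index file. Real playlists
# carry many more tags; only #EXTINF lines and the URI line that
# follows each of them are handled here.
def parse_index_file(m3u8_text):
    entries = []
    duration = None
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # e.g. "#EXTINF:5.0," -> 5.0 seconds of playing duration
            duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#") and duration is not None:
            entries.append((duration, line))  # (duration, index address)
            duration = None
    return entries

playlist = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXTINF:5.0,
http://cdn.example.com/orig/seg0.ts
#EXTINF:3.0,
http://cdn.example.com/orig/seg1.ts
#EXT-X-ENDLIST"""
print(parse_index_file(playlist))
```

A production system would use a dedicated HLS playlist library rather than this hand-rolled parser, since real M3U8 files also carry version tags, byte ranges, and encryption attributes.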
Fig. 3 is a timing diagram illustrating the playing of video by a video player in one embodiment. Referring to fig. 3, a player on the terminal may obtain a video playing request, request an index file of a video from the scheduling server based on the video playing request, and after the scheduling server returns the index file, the video player may request video fragments from the file server according to an index address corresponding to each video fragment recorded in the index file, and sequentially play the video according to the video fragments returned by the file server, thereby playing the entire video.
In the embodiment provided by the application, a player on a terminal can acquire a video playing event triggered in a video cover, respond to the video playing event, acquire an index file corresponding to a target video linked by the video cover from a video storage server, acquire an index address corresponding to a target fragment of the target video according to the index file, request the target fragment from the video storage server according to the index address, and play the target video according to the target fragment.
The video storage server stores video fragments of an original video, the video storage server can acquire the video fragments forming the original video after determining the original video used for generating a target video, and the target fragments used for generating the target video are determined from the video fragments of the original video according to the playing time corresponding to each video fragment and the starting and stopping time points obtained by analyzing from the target video request. As can be seen, the target segment is a video segment determined from the video segments of the original video according to the start-stop time points.
The number of target slices may be 1 or more, and the number of target slices is also related to the starting and ending time points of the target video in the original video. Generally, the playing time of one video fragment is 5 to 10 seconds, and if the playing time corresponding to the starting and ending time point is far longer than the playing time of one video fragment, a plurality of target fragments for generating the target video can be determined; if the playing time corresponding to the start-stop time point is less than the playing time of one video fragment, the number of the determined target fragments for generating the target video may be only 1 to 2.
As shown in fig. 4, in one embodiment, the step of determining a target segment matching the start-stop time point from the video segments of the original video includes:
s402, obtaining an original index file corresponding to the original video.
Specifically, the video storage server may obtain an original index file corresponding to the original video according to the video identifier of the original video, where the original index file is a directory file of the original video, and the original index file includes an index address and a play duration of a video fragment of the original video.
S404, analyzing the original index file to obtain the playing time length corresponding to the video fragment of the original video.
The playing duration is the duration of the video fragment during playing, and the playing duration corresponding to the video fragment of the original video is also recorded in the original index file. And the video storage server analyzes the original index file of the original video to obtain the playing time length corresponding to the video fragment forming the original video.
And S406, sequentially splicing the playing time lengths corresponding to the video fragments according to the playing sequence of the video fragments of the original video to obtain the playing time period of each video fragment in the original video.
The playing time period is the time range within which each video fragment is played in the original video. For example, if the playing duration of the first video fragment of the original video is 5 s and the playing duration of the second video fragment is 3 s, the playing time periods of the first and second video fragments in the original video are 0 s to 5 s and 5 s to 8 s, respectively. If the playing duration of the first video fragment is 10 s, the playing time period of the first video fragment in the original video is 0 s to 10 s.
In the original index file, in order to facilitate the realization of playing the whole video by sequentially requesting the video fragments according to the playing sequence, the playing time duration corresponding to the video fragments of the original video and the corresponding index address are sequentially recorded according to the playing sequence of the video fragments, so that the video storage server can sequentially splice the playing time duration corresponding to the video fragments according to the playing sequence of the video fragments, thereby obtaining the playing time period of each video fragment in the original video.
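The splicing of playing durations into playing time periods described above is a running sum; a minimal sketch (function and variable names are illustrative, not from this application):

```python
def play_periods(durations):
    # Running sum: per-fragment playing durations become
    # [start, end) playing time periods within the original video.
    periods, t = [], 0.0
    for d in durations:
        periods.append((t, t + d))
        t += d
    return periods

# Fragments of 5 s, 3 s and 4 s occupy 0-5 s, 5-8 s and 8-12 s.
print(play_periods([5.0, 3.0, 4.0]))  # [(0.0, 5.0), (5.0, 8.0), (8.0, 12.0)]
```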
And S408, determining a target segment according to the video segment corresponding to the playing time period containing the start-stop time point.
Specifically, after determining the playing time period of each video segment of the original video in the original video, the video storage server may determine a target segment for generating the target video according to the video segment corresponding to the playing time period including the start-stop time point.
In this embodiment, the target segment for generating the target video is directly found out from the video segments of the original video at the starting and ending time points, and the target video can be generated based on the target segment.
And S208, generating an index file corresponding to the target video according to the index address of the target fragment.
Specifically, the video storage server may obtain an original index file corresponding to an original video, and analyze the original index file to obtain index addresses corresponding to video fragments of the original video, so that after a target fragment for generating a target video is determined from the video fragments of the original video, the video storage server may obtain the index address corresponding to each target fragment, and then may directly generate the index file corresponding to the target video according to the index address of the target fragment; or, in some cases, the target segment may be downloaded according to the index address of the target segment, and then the cut segment is stored after the target segment is partially cut, and the index file corresponding to the target video is generated according to the storage address.
S210, allocating a video identifier to the target video, and associating the index file with the video identifier.
The index file corresponding to the target video is used for sequentially requesting the video fragments according to the download addresses and the playing time of the video fragments forming the target video recorded in the index file when the target video is played, so that the whole target video is played. Therefore, after the index file corresponding to the target video is obtained, the video storage server can allocate a unique video identifier for the target video, and associate the generated index file with the allocated video identifier, so that the video player can sequentially download video fragments according to the index address recorded in the index file corresponding to the video identifier and play the video.
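As an in-memory sketch of allocating a unique video identifier and associating the index file with it (the registry class and the `uuid`-based identifier are assumptions for illustration; a real video storage server would persist this mapping in a database):

```python
import uuid

class VideoRegistry:
    """Illustrative: maps allocated video identifiers to index files."""

    def __init__(self):
        self._index_files = {}

    def register(self, index_file_text):
        # Allocate a unique video identifier and associate the
        # generated index file with it.
        video_id = uuid.uuid4().hex
        self._index_files[video_id] = index_file_text
        return video_id

    def lookup(self, video_id):
        # A player resolves the identifier back to the index file,
        # then requests fragments by the addresses recorded in it.
        return self._index_files[video_id]
```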
In one embodiment, the video storage server may further associate the video identifier of the target video with the video identifier of the original video, so that when the video query request for the original video sent by the video editing server is obtained, video information of the target video, which is associated with the original video and generated based on the original video, may be obtained, and the video information of the original video and the video information of the target video are returned to the video editing server together. The video information includes a video name, a video duration, a video cover, a start-stop time point of the target video in the original video, and the like.
The video generation method directly determines the target fragment of the target video to be generated from the video fragments of the original video, generates the index file corresponding to the target video according to the index address of the target fragment, associates the index file with the video identifier allocated to the target video, and can directly play the target video according to the index address of the target fragment when the video needs to be played.
In one embodiment, the start-stop time points include a start time point and an end time point; determining a target fragment according to a video fragment corresponding to a playing time period containing a start-stop time point, comprising: taking a video fragment corresponding to a playing time period containing an initial time point as an initial target fragment; and taking the video segment corresponding to the playing time period containing the ending time point as an ending target segment.
Wherein the start-stop time point includes a start time point and an end time point. After determining the playing time period of the video segment of the original video in the original video, the video storage server may use the video segment corresponding to the playing time period including the starting time point as a starting target segment for generating the target video, and use the video segment corresponding to the playing time period including the ending time point as an ending target segment for generating the target video.
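Continuing the sketch above (illustrative names; playing time periods treated as half-open intervals, which is one possible convention and an assumption here), selecting the start and end target fragments might look like:

```python
def find_target_fragments(periods, start, end):
    # The start target fragment is the one whose playing time period
    # contains the start time point; the end target fragment is the
    # one whose period contains the end time point.
    start_idx = next(i for i, (s, e) in enumerate(periods) if s <= start < e)
    end_idx = next(i for i, (s, e) in enumerate(periods) if s < end <= e)
    return start_idx, end_idx

periods = [(0.0, 5.0), (5.0, 8.0), (8.0, 12.0)]
print(find_target_fragments(periods, 1.0, 9.0))  # (0, 2)
```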
Fig. 5 is a schematic diagram illustrating the playing time periods corresponding to the video slices of an original video in an embodiment. Referring to fig. 5, the original video is divided into a sequence of video slices, where the playing time period corresponding to the 0th video slice is 0 to t1, the playing time period corresponding to the 1st video slice is t1 to t2, the playing time period corresponding to the 2nd video slice is t2 to t3, and so on; the playing time period corresponding to the (n-1)th video slice is t(n-1) to t(n), and the playing time period corresponding to the nth video slice is t(n) to t(n+1). If the start and stop time points parsed from the target video request are m to n, where m ∈ (0, t1) and n ∈ (t2, t3), then the 0th video slice of the original video is the start target slice for generating the target video, and the 2nd video slice is the end target slice for generating the target video.
In one embodiment, generating an index file corresponding to a target video according to an index address of a target fragment includes: analyzing the original index file to obtain an index address corresponding to the video fragment of the original video; when the starting time point is in the playing time period corresponding to the starting target fragment, acquiring the starting target fragment according to the index address corresponding to the starting target fragment; segmenting the video from the starting target segment according to the starting time point and the end point of the playing time period corresponding to the starting target segment to obtain a starting segment corresponding to the target video; storing the initial fragment, and generating an index file corresponding to the target video according to the storage address of the initial fragment.
Specifically, the video storage server further needs to parse the original index file corresponding to the original video to obtain the index address corresponding to each video fragment of the original video. In this embodiment, when the start time point falls within (but is not the starting point of) the playing time period corresponding to the start target fragment, the entire start target fragment cannot be used as part of the target video; instead, a portion of the video beginning at the start time point needs to be cut from the start target fragment, and the cut-out video is used as the start fragment of the target video.
For example, referring to fig. 5, if the start time point m ∈ (0, t1), the video storage server needs to cut out the video data corresponding to the time interval (m, t1) from the start target fragment, use the cut-out video data as the start fragment corresponding to the target video, store the start fragment separately, and generate the index file corresponding to the target video according to the storage address of the start fragment.
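A hedged sketch of this cutting step: the application does not specify a tool, but one common way to extract the (m, t1) interval from a downloaded start target fragment is stream-copying with the ffmpeg command line (the paths, times, and helper name here are invented; times are offsets within the downloaded fragment):

```python
def start_fragment_cut_cmd(src_path, cut_from, cut_to, out_path):
    # Build an ffmpeg command that copies the [cut_from, cut_to]
    # interval of the start target fragment into a new TS file
    # without re-encoding. Run it with subprocess.run(cmd).
    return [
        "ffmpeg",
        "-i", src_path,
        "-ss", str(cut_from),   # start time point m within the fragment
        "-to", str(cut_to),     # end of the fragment's playing period (t1)
        "-c", "copy",           # remux only, no transcode
        out_path,
    ]

cmd = start_fragment_cut_cmd("seg0.ts", 2.5, 5.0, "start_cut.ts")
print(" ".join(cmd))
```

Note that with `-c copy` ffmpeg cuts at keyframe boundaries, so frame-accurate cutting may require re-encoding instead of stream copy.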
In fact, in order to facilitate other devices or the video storage server itself to obtain the generated target video, the video storage server needs to generate a corresponding index file for the target video. And generating an index file corresponding to the target video according to the storage address of the starting fragment, namely recording the storage address of the starting fragment in the index file corresponding to the target video so as to obtain the starting fragment according to the storage address recorded in the index file.
In one embodiment, the method further comprises: when the starting time point is the starting point of the playing time period corresponding to the starting target fragment, taking the starting target fragment as the starting fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the initial fragment.
In this embodiment, when the start time point is exactly the starting point of the playing time period corresponding to the start target fragment, the entire start target fragment can be used as a part of the target video; that is, the start target fragment is directly used as the start fragment corresponding to the target video, which realizes multiplexing of the video fragments of the original video and saves storage space.
For example, referring to fig. 5, if the starting time point m is just t1 and the ending time point n ∈ (t2, t3), the video storage server needs to directly use the 1 st video fragment of the original video as the starting fragment corresponding to the target video, and generate the index file corresponding to the target video according to the index address of the starting fragment.
In one embodiment, generating an index file corresponding to a target video according to an index address of a target fragment includes: analyzing the original index file to obtain an index address corresponding to the video fragment of the original video; when the ending time point is in the playing time period corresponding to the ending target fragment, acquiring the ending target fragment according to the index address corresponding to the ending target fragment; segmenting the video from the ending target segment according to the starting point and the ending time point of the playing time period corresponding to the ending target segment to obtain an ending segment corresponding to the target video; and storing the finished fragments, and generating an index file corresponding to the target video according to the storage address of the finished fragments.
In this embodiment, when the ending time point is in the playing time period corresponding to the ending target segment, it indicates that the entire ending target segment cannot be taken as a part of the target video, but a part of the video needs to be cut out from the ending target segment according to the starting point and the ending time point of the playing time period corresponding to the ending target segment, and the cut-out video is taken as the ending segment of the target video.
For example, referring to fig. 5, if the end time point n ∈ (t2, t3), the video storage server needs to cut out the video data corresponding to the time interval (t2, n) from the end target fragment, use the cut-out video data as the end fragment corresponding to the target video, store the end fragment separately, and generate the index file corresponding to the target video according to the storage address of the end fragment.
Similarly, in order to facilitate other devices or the video storage server itself to obtain the generated target video, the video storage server needs to generate a corresponding index file for the target video. And generating an index file corresponding to the target video according to the storage address of the ending fragment, namely recording the storage address of the ending fragment in the index file corresponding to the target video so as to obtain the ending fragment according to the storage address recorded in the index file.
In one embodiment, the method further comprises: when the ending time point is the ending point of the playing time period corresponding to the ending target fragment, taking the ending target fragment as the ending fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the ending fragment.
In this embodiment, when the end time point is exactly the ending point of the playing time period corresponding to the end target fragment, the entire end target fragment can be used as a part of the target video; that is, the end target fragment is directly used as the end fragment corresponding to the target video, which realizes multiplexing of the video fragments of the original video and saves storage space.
For example, referring to fig. 5, if the ending time point n is exactly t3, the video storage server needs to directly use the 2 nd video segment of the original video as the ending segment corresponding to the target video, and generate the index file corresponding to the target video according to the index address of the ending segment.
In one embodiment, the method further comprises: taking a video fragment between the starting target fragment and the ending target fragment as an intermediate target fragment; generating an index file corresponding to the target video according to the index address of the target fragment, wherein the index file comprises: analyzing the original index file to obtain an index address corresponding to the video fragment of the original video; directly taking the intermediate target fragment as an intermediate fragment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the middle fragment.
Specifically, after determining the starting target segment and the ending target segment from the video segments of the original video, the video storage server may further use the video segment between the starting target segment and the ending target segment as an intermediate target segment, and the whole intermediate target segment may be used as a part constituting the target video, that is, the intermediate target segment is directly used as an intermediate segment corresponding to the target video, so that multiplexing of the video segments of the original video is realized, and the storage space can be saved. Therefore, in this case, the video storage server may directly record the index address of the middle fragment obtained by parsing the original index file in the index file corresponding to the target video, so that when the target video is played, the video storage server directly requests the middle fragment according to the index address of the middle fragment recorded in the corresponding index file and plays the middle fragment.
Of course, when the determined starting target segment and ending target segment are adjacent video segments of the original video, no other video segment exists between the starting target segment and the ending target segment, and in this case, the target video is generated according to the starting target segment and the ending target segment.
Fig. 6 is a schematic diagram of the video slices of a target video generated from an original video in an embodiment. Referring to fig. 6, if the start time point m ∈ (0, t1) and the end time point n ∈ (t2, t3), the start fragment of the target video needs to be regenerated from the 0th video fragment of the original video, the 1st video fragment is directly multiplexed as the intermediate fragment, and the end fragment of the target video needs to be regenerated from the 2nd video fragment. The index file corresponding to the target video needs to record the storage addresses for locating the start fragment and the end fragment, and also the index address for locating the 1st video fragment.
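Putting the pieces together, the index file for the target video mixes the storage addresses of the regenerated start and end fragments with the reused index addresses of the original video. A minimal M3U8-style sketch (addresses, durations, and the helper name are illustrative assumptions):

```python
import math

def build_target_index(entries):
    # entries: (playing duration, address) pairs, where an address is
    # either the storage address of a newly cut start/end fragment or
    # a reused index address from the original video's index file.
    target = max(math.ceil(d) for d, _ in entries)
    lines = ["#EXTM3U", "#EXT-X-TARGETDURATION:%d" % target]
    for d, addr in entries:
        lines.append("#EXTINF:%.1f," % d)
        lines.append(addr)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

index_text = build_target_index([
    (2.5, "store/start_cut.ts"),                   # regenerated start fragment
    (3.0, "http://cdn.example.com/orig/seg1.ts"),  # multiplexed 1st fragment
    (1.0, "store/end_cut.ts"),                     # regenerated end fragment
])
print(index_text)
```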
In one embodiment, the method further comprises: acquiring a starting fragment of a target video; capturing a screenshot corresponding to a starting time point in starting and stopping time points from a playing picture corresponding to the starting fragment; and taking the screenshot as a video cover of the target video, and associating the video cover with the video identification of the target video.
In this embodiment, the video storage server may further store a video cover for the generated target video, so that the target video can be displayed through the linked video cover. Specifically, the video storage server may obtain the start fragment of the target video and capture, from the playing pictures corresponding to the start fragment, the screenshot corresponding to the start time point. The video storage server may then use the screenshot as the video cover of the target video and associate the video cover with the video identifier of the target video.
In one embodiment, the method further comprises: acquiring a video query request sent by a terminal; and responding to the video query request, returning the video cover of the target video to the terminal so that the terminal displays the video cover of the target video in the video playing entry page.
The terminal may be a user terminal equipped with a video player, and the video storage server may receive a video query request sent by the terminal, extract a video keyword from the video query request, and return an original video related to the video keyword to the terminal. Optionally, the video storage server may also return a video cover page corresponding to the target video related to the original video. Therefore, the terminal can display the original video and the video cover corresponding to the target video related to the original video through the video player.
Of course, the video storage server may also independently push the video covers corresponding to the generated target videos to the terminal, so that the terminal may display the video covers corresponding to the target videos in the video playing entry page, when the user clicks the video cover corresponding to the target video, the terminal may obtain an index file corresponding to the target video linked by the video cover, obtain an index address corresponding to a target segment of the target video according to the index file, request the target segment according to the index address, and play the target video according to the target segment.
Fig. 7 is a schematic interface diagram illustrating a video cover corresponding to a target video in one embodiment. Referring to fig. 7, in a video playback entry page 700, a plurality of video covers 702 are displayed, each video cover linked to a corresponding target video, and a video title 704 is also displayed in the video cover. When a user clicks the video cover corresponding to a certain target video, the target video can be played.
In one embodiment, the video storage server may obtain the video query request sent by the video editing server, and return a video cover corresponding to the original video and a video cover corresponding to the target video related to the original video according to the video query request. Optionally, information such as start-stop time points, video titles and the like related to the target video may also be returned, so that the video editing server may present video list information so as to generate target videos corresponding to other start-stop time points according to the current original video.
Fig. 8 is a schematic flowchart of a video generation method in a specific embodiment. The video generation method is executed by a video storage server and specifically includes the following steps:
S802, receiving a target video request.
S804, analyzing the target video request to obtain the starting time point and the ending time point of the target video to be generated in the original video.
S806, obtaining an original index file corresponding to the original video.
S808, analyzing the original index file to obtain the playing time length and the index address corresponding to each video segment of the original video.
S810, splicing the playing time lengths corresponding to the video segments in sequence according to the playing order of the video segments of the original video to obtain the playing time period of each video segment in the original video.
S812, the video segment corresponding to the playing time period including the starting time point is used as the starting target segment.
S814, judging whether the starting time point is the starting point of the playing time period corresponding to the starting target segment; if yes, executing step S818; if not, executing step S816.
S816, acquiring the starting target segment according to the index address corresponding to the starting target segment; segmenting the video from the starting target segment according to the starting time point and the end point of the playing time period corresponding to the starting target segment to obtain the starting segment corresponding to the target video; and storing the starting segment and generating an index file corresponding to the target video according to the storage address of the starting segment.
S818, taking the starting target segment as the starting segment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the starting segment.
S820, taking the video segment corresponding to the playing time period containing the ending time point as the ending target segment.
S822, determining whether the ending time point is the ending point of the playing time period corresponding to the ending target segment; if yes, executing step S824; if not, executing step S826.
S826, acquiring the ending target segment according to the index address corresponding to the ending target segment; segmenting the video from the ending target segment according to the starting point of the playing time period corresponding to the ending target segment and the ending time point to obtain the ending segment corresponding to the target video; and storing the ending segment and generating an index file corresponding to the target video according to the storage address of the ending segment.
S824, taking the ending target segment as the ending segment corresponding to the target video; and generating an index file corresponding to the target video according to the index address of the ending segment.
S828, taking the video segments between the starting target segment and the ending target segment as intermediate target segments.
S830, directly taking the intermediate target segments as the intermediate segments corresponding to the target video; and generating an index file corresponding to the target video according to the index addresses of the intermediate segments.
S832, allocating a video identifier to the target video, and associating the index file with the video identifier.
S834, capturing the screenshot corresponding to the starting time point of the start-stop time points from the playing picture corresponding to the starting segment.
S836, taking the screenshot as the video cover of the target video, and associating the video cover with the video identifier of the target video.
S838, acquiring a video query request sent by the terminal.
S840, responding to the video query request and returning the video cover of the target video to the terminal, so that the terminal displays the video cover of the target video in the video playing entry page.
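The segment-selection and index-generation logic of steps S804 through S832 can be sketched in a few lines. The data shapes below, a list of (playing time length, index address) pairs for the original video and "reuse"/"cut" entries for the target index, are illustrative assumptions and not the actual index-file format:

```python
def build_time_periods(durations):
    """Splice per-segment playing time lengths into (start, end) playing
    time periods, in playing order (step S810)."""
    periods, t = [], 0.0
    for d in durations:
        periods.append((t, t + d))
        t += d
    return periods

def generate_target_index(segments, start, end):
    """segments: (playing time length, index address) pairs parsed from the
    original index file (S806-S808). Returns index entries for the target
    video: 'reuse' entries point at original segments, 'cut' entries mark
    segments that must be physically split and stored (S816/S826).
    For brevity, assumes start and end fall in different segments."""
    periods = build_time_periods([d for d, _ in segments])
    # locate the starting and ending target segments (S812, S820)
    i = next(k for k, (s, e) in enumerate(periods) if s <= start < e)
    j = next(k for k, (s, e) in enumerate(periods) if s < end <= e)
    entries = []
    s0, e0 = periods[i]
    if start == s0:                      # S814: start is a period boundary
        entries.append(("reuse", segments[i][1]))           # S818
    else:
        entries.append(("cut", segments[i][1], start, e0))  # S816
    for k in range(i + 1, j):            # S828-S830: intermediate segments
        entries.append(("reuse", segments[k][1]))
    sj, ej = periods[j]
    if end == ej:                        # S822: end is a period boundary
        entries.append(("reuse", segments[j][1]))           # S824
    else:
        entries.append(("cut", segments[j][1], sj, end))    # S826
    return entries
```

Note that only the entries marked "cut" require any new segment data to be stored; every other entry of the target index simply multiplexes an existing segment of the original video.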
It should be understood that, although the steps in the flowchart of fig. 8 are shown in order as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in fig. 8 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be performed at different moments; these sub-steps or stages are also not necessarily performed sequentially, but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
Fig. 9 is a diagram of an application environment of a video generation method in another embodiment. Referring to fig. 9, the video generation method is applied to a video processing system. The video processing system includes a video editing server 910, a video storage server 920, a video application server 930, and a terminal 940.
The video editing server 910 and the video storage server 920 are connected via a network. The video editing server 910 may be configured to send a target video request to the video storage server 920, where the target video request carries the start-stop time points of a target video to be generated in an original video. The video storage server 920 may be configured to receive and analyze the target video request to obtain the start-stop time points of the target video to be generated in the original video; determine a target segment matching the start-stop time points from the video segments of the original video, and generate an index file corresponding to the target video according to the index address of the target segment; and allocate a video identifier to the target video and associate the index file with the video identifier. The video storage server 920 may also be configured to return the video identifier to the video editing server 910.
The video editing server 910 is connected to the video application server 930 via a network, and the video editing server 910 can be configured to send the video identifier to the video application server 930.
The video application server 930 is connected to the terminal 940 via a network. The video application server 930 may be configured to receive a video query request sent by the terminal 940 and, in response to the video query request, return the video identifier corresponding to the target video to the terminal 940. The terminal 940 may be configured to obtain the corresponding index file according to the video identifier and analyze the index file to obtain the index address corresponding to the target segment of the target video; and to request the target segment according to the index address and play the target video according to the target segment.
The video editing server 910, the video storage server 920, and the video application server 930 may be integrated together or implemented as a server cluster composed of a plurality of servers. The terminal 940 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like.
In one embodiment, as shown in fig. 10, a video playback method is provided. This embodiment is mainly illustrated by applying this method to the terminal 940 in fig. 9. Referring to fig. 10, the video playing method specifically includes the following steps:
S1002, entering a video playing entry page.
The video playing entry page can be used for accommodating a plurality of video covers; the terminal can present the video playing entry page and display a large number of video covers in it. Each video cover is linked to corresponding video data. In a specific application scenario, a user may start a video player installed on the terminal and enter the video playing entry page provided by the video player.
S1004, a video cover is displayed in the video playback entry page.
S1006, a video play event is triggered in the video cover.
The video playing event is an event, triggered on a video cover, for playing the video. The video playing event may be a pressing operation, a clicking operation, or a sliding operation on a video cover triggered by the user. The video cover serves as an entry for playing the video linked to it: by triggering the video playing event, the user can enter the playing page of the video linked to the video cover and play the video in that page.
S1008, responding to the video playing event, acquiring the index file corresponding to the target video linked by the video cover, and acquiring the index addresses corresponding to the target segments of the target video according to the index file; a target segment is a video segment, determined from the video segments of the original video, that matches the start-stop time points, where the start-stop time points are the time points of the starting and ending positions of the target video in the original video.
The target video is generated based on the target segments determined from the video segments of the original video according to the start-stop time points. Specifically, the target video can be obtained according to the video generation method provided in the foregoing embodiments, and a description thereof is not repeated here. The index addresses of all the target segments of the target video are recorded in the index file of the target video; after detecting the video playing event, the terminal can parse the index file corresponding to the target video linked by the video cover to obtain the index addresses corresponding to the target segments of the target video.
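As a purely illustrative example, if the index file were organized as an HLS-style playlist (an assumption; the embodiments do not prescribe a particular index-file format), the recorded index addresses of the target segments, mixing reused original segments with a stored cut segment, might look like:

```text
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXTINF:4.0,
http://storage.example.com/target/start-cut.ts
#EXTINF:10.0,
http://storage.example.com/original/seg_0002.ts
#EXTINF:6.0,
http://storage.example.com/target/end-cut.ts
#EXT-X-ENDLIST
```

Here the middle entry points back at an unmodified segment of the original video, while the first and last entries point at the stored cut segments; the host names and file names are hypothetical.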
S1010, requesting the target segment according to the index address, and playing the target video according to the target segment.
Specifically, the terminal may sequentially obtain the target segments according to the index addresses of the target segments recorded in the index file and then decode the target segments, thereby realizing complete playing of the entire target video.
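The terminal-side loop of step S1010 can be sketched as follows; the fetch callback and the byte concatenation stand in for the real network request and decoder, and are assumptions for illustration:

```python
def play_target_video(index_addresses, fetch):
    """Request each target segment by its index address in playing order
    and hand the data to the player (modeled here as concatenation)."""
    stream = b""
    for address in index_addresses:
        stream += fetch(address)   # request the segment data (S1010)
    return stream                  # in practice: decode and render instead
```

A real player would decode each segment as it arrives rather than buffering the whole stream; the sketch only shows the ordered, address-driven requests.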
In the video playing method, the target segments of the target video to be generated are determined directly from the video segments of the original video, and the index file corresponding to the target video is generated according to the index addresses of the target segments, so that the target video can be played directly according to the index addresses of the target segments when the video needs to be played. The original video and the target video in effect multiplex a plurality of video segments. Compared with the approach of manually cutting a section of video from the original video and then additionally storing the cut result, this method not only greatly reduces the required storage space but also eliminates the need to manually cut and segment the video.
In one embodiment, as shown in fig. 11, there is provided a video generating apparatus 1100 comprising a receiving module 1102, a parsing module 1104, a target segment determining module 1106, an index file generating module 1108, and a storage module 1110, wherein:
a receiving module 1102, configured to receive a target video request;
a parsing module 1104, configured to analyze the target video request to obtain the start-stop time points of the target video to be generated in the original video;
a target segment determining module 1106, configured to determine a target segment matching the start-stop time points from the video segments of the original video;
an index file generating module 1108, configured to generate an index file corresponding to the target video according to the index address of the target segment;
a storage module 1110, configured to allocate a video identifier to the target video and associate the index file with the video identifier.
In one embodiment, the target segment determining module 1106 is further configured to obtain an original index file corresponding to the original video; analyze the original index file to obtain the playing time length corresponding to each video segment of the original video; splice the playing time lengths corresponding to the video segments in sequence according to the playing order of the video segments of the original video to obtain the playing time period of each video segment in the original video; and determine the target segment according to the video segment corresponding to the playing time period containing the start-stop time point.
In one embodiment, the start-stop time points include a starting time point and an ending time point; the target segment determining module 1106 is further configured to take the video segment corresponding to the playing time period containing the starting time point as the starting target segment, and take the video segment corresponding to the playing time period containing the ending time point as the ending target segment.
In an embodiment, the index file generating module 1108 is further configured to parse the original index file to obtain the index address corresponding to each video segment of the original video; when the starting time point falls within (but is not the starting point of) the playing time period corresponding to the starting target segment, acquire the starting target segment according to the index address corresponding to the starting target segment; segment the video from the starting target segment according to the starting time point and the end point of the playing time period corresponding to the starting target segment to obtain the starting segment corresponding to the target video; and store the starting segment and generate an index file corresponding to the target video according to the storage address of the starting segment.
In an embodiment, the index file generating module 1108 is further configured to, when the starting time point is the starting point of the playing time period corresponding to the starting target segment, take the starting target segment as the starting segment corresponding to the target video, and generate an index file corresponding to the target video according to the index address of the starting segment.
In an embodiment, the index file generating module 1108 is further configured to parse the original index file to obtain the index address corresponding to each video segment of the original video; when the ending time point falls within (but is not the ending point of) the playing time period corresponding to the ending target segment, acquire the ending target segment according to the index address corresponding to the ending target segment; segment the video from the ending target segment according to the starting point of the playing time period corresponding to the ending target segment and the ending time point to obtain the ending segment corresponding to the target video; and store the ending segment and generate an index file corresponding to the target video according to the storage address of the ending segment.
In an embodiment, the index file generating module 1108 is further configured to, when the ending time point is the ending point of the playing time period corresponding to the ending target segment, take the ending target segment as the ending segment corresponding to the target video, and generate an index file corresponding to the target video according to the index address of the ending segment.
In one embodiment, the target segment determining module 1106 is further configured to take the video segments between the starting target segment and the ending target segment as intermediate target segments; the index file generating module 1108 is further configured to parse the original index file to obtain the index address corresponding to each video segment of the original video, directly take the intermediate target segments as the intermediate segments corresponding to the target video, and generate an index file corresponding to the target video according to the index addresses of the intermediate segments.
In one embodiment, the video generating apparatus 1100 further comprises a video cover generation module configured to obtain the starting segment of the target video; capture the screenshot corresponding to the starting time point of the start-stop time points from the playing picture corresponding to the starting segment; and take the screenshot as the video cover of the target video and associate the video cover with the video identifier of the target video.
In one embodiment, the video generating apparatus 1100 further comprises a video query request obtaining module, configured to obtain a video query request; and responding to the video query request, returning the video cover of the target video so that the terminal displays the video cover of the target video in the video playing entry page.
The video generating device 1100 directly determines the target segment of the target video to be generated from the video segments of the original video, generates the index file corresponding to the target video according to the index address of the target segment, and associates the index file with the video identifier allocated to the target video, so that the target video can be directly played according to the index address of the target segment when the video needs to be played.
In one embodiment, as shown in fig. 12, a video playback apparatus 1200 is provided, which includes a presentation module 1202, an acquisition module 1204, an index address acquisition module 1206, and a playback module 1208, wherein:
a presentation module 1202, configured to enter a video playing entry page and display video covers in the video playing entry page;
an acquisition module 1204, configured to trigger a video playing event in a video cover;
an index address acquisition module 1206, configured to, in response to the video playing event, obtain the index file corresponding to the target video linked by the video cover, and obtain the index addresses corresponding to the target segments of the target video according to the index file, where a target segment is determined according to a video segment, among the video segments of the original video, that matches the start-stop time points, and the start-stop time points are the time points of the starting and ending positions of the target video in the original video;
a playback module 1208, configured to request the target segments according to the index addresses and play the target video according to the target segments.
The video playing apparatus 1200 directly determines the target segment of the target video to be generated from the video segments of the original video, and generates the index file corresponding to the target video according to the index address of the target segment, so that the target video can be directly played according to the index address of the target segment when the video needs to be played.
FIG. 13 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the video storage server 120 in fig. 1 or the video storage server 920 in fig. 9. As shown in fig. 13, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video generation method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the video generation method.
Fig. 14 is a diagram showing an internal structure of a computer device in another embodiment. The computer device may specifically be the terminal 940 in fig. 9. As shown in fig. 14, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video playback method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the video playback method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the configurations shown in fig. 13 and 14 are block diagrams of only some configurations relevant to the present disclosure and do not constitute a limitation on the computer devices to which the present disclosure may be applied, and that a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the video generation apparatus 1100 provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 13. The memory of the computer device may store the various program modules constituting the video generating apparatus, such as the receiving module 1102, the parsing module 1104, the target segment determining module 1106, the index file generating module 1108, and the storage module 1110 shown in fig. 11. The computer program constituted by these program modules causes the processor to execute the steps in the video generation method of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 13 may perform step S202 through the receiving module 1102 in the video generating apparatus shown in fig. 11. The computer device may perform step S204 through the parsing module 1104, step S206 through the target segment determining module 1106, step S208 through the index file generating module 1108, and step S210 through the storage module 1110.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video generation method described above. Here, the steps of the video generation method may be steps in the video generation methods of the above-described respective embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the video generation method described above. Here, the steps of the video generation method may be steps in the video generation methods of the above-described respective embodiments.
In one embodiment, the video playback apparatus 1200 provided in the present application may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 14. The memory of the computer device may store the various program modules constituting the video playback apparatus, such as the presentation module 1202, the acquisition module 1204, the index address acquisition module 1206, and the playback module 1208 shown in fig. 12. The computer program constituted by these program modules causes the processor to execute the steps in the video playback method of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 14 may perform steps S1002 and S1004 through the presentation module 1202 in the video playback apparatus shown in fig. 12. The computer device may perform step S1006 through the acquisition module 1204, step S1008 through the index address acquisition module 1206, and step S1010 through the playback module 1208.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video playback method described above. Here, the steps of the video playing method may be steps in the video playing methods of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, which, when executed by a processor, causes the processor to perform the steps of the above-described video playback method. Here, the steps of the video playing method may be steps in the video playing methods of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above-mentioned embodiments merely express several implementations of the present application, and the descriptions thereof are specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.