Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used herein in one or more embodiments to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a "first" can also be referred to as a "second" and, similarly, a "second" can also be referred to as a "first" without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
It should be noted that, for learning-type videos and web lessons, a user currently has no convenient way to create and record notes that are generated in synchronization with a video. The user can only record the lesson with third-party recording software and, when wanting to play back and review, drag the video progress bar to repeatedly watch the corresponding learning segments. Notes written in third-party software cannot conveniently record the playing time of the video, do not support quick jumps within the video, and lack any linkage between the notes and the video.
The embodiment of the present application provides a video note generation method that combines note taking with video playback, so that a user watching a video can record key time marks, related note content, and the like directly on the video playing interface. In addition, based on a video note with time marks created by the user, a backtracking function can be provided through the content prompt and the time marks of the video note, so that playback can quickly jump to the corresponding moment of the video, improving the video watching experience.
The embodiment of the present application realizes a many-to-many association between played videos and video notes: a user is supported in creating multiple notes under one video, and note content and time marks from multiple videos can be recorded in the same video note, which provides great flexibility.
The embodiment of the present application also provides a data structure for representing this association relationship. The data structure allows a note creator to add custom time marks, and a time mark allows a note viewer to quickly locate the marked moment. Therefore, the present application supports a note creator in generating, creating, modifying, and deleting note content, so that the content can be stored and shared conveniently without secondary modification or editing of the video.
In the present application, a video note generating method is provided, and the present application relates to a video note generating apparatus, a computing device, and a computer readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 shows a flowchart of a video note generation method according to an embodiment of the present application, which specifically includes the following steps:
Step 102: displaying at least one video note on a playing interface of a target video.
Specifically, the target video may be the currently played video and may be any type of video, such as a learning video or an entertainment video. A video note may be a document recording content related to a video; during playback of the target video, it may record multimedia information such as text and images (an image may be a screenshot of the target video or an image uploaded by the user, and a screenshot may carry the video time point at which it was taken). In addition, the video notes may include at least one of video notes created by the currently logged-in user and video notes created by other users. In practical application, a region can be set aside in the playing interface of the target video, and the at least one acquired video note is displayed in that region.
In an optional implementation manner of this embodiment, the corresponding video note may be obtained and displayed according to the user identifier of the login user, so that at least one video note is displayed on the play interface of the target video, and the specific implementation process may be as follows:
acquiring at least one first video note created by a login user according to a user identifier of the login user, wherein the at least one first video note comprises video notes associated and/or not associated with the target video;
and displaying the at least one first video note on a playing interface of the target video.
Specifically, the user identifier may refer to a symbol, a number, or a word that can uniquely identify a user, for example, the user identifier may be an account ID (identity) of the user; the first video note is a video note created by the logged-in user.
In addition, the video note associated with the target video may mean that the note related to the target video has been recorded, that is, at least one note entry exists in each note entry recorded by the video note associated with the target video, which is created based on the target video; the video note not associated with the target video may mean that the note related to the target video has not been recorded, i.e., each note entry recorded by the video note not associated with the target video is not related to the target video and is created based on other videos.
In practical application, each video note can carry the user identifier of the note creation user when being created, so that in the playing process of the target video, after a certain user logs in, each video note corresponding to the user identifier of the logged-in user can be obtained from a video note library of a video playing platform, and the obtained video notes are displayed in a playing interface of the target video. The video note library is a database for storing all video notes corresponding to the video playing platform.
In addition, since the playing interface of the target video may display a plurality of video notes, the detailed content of each video note need not be displayed when multiple notes are shown; only the note identifier of each video note may be displayed. Thus each of the obtained video notes may include a note identifier, and when the playing interface of the target video displays the at least one first video note, the note identifier of the at least one first video note may be displayed. The note identifier may be a preset identifier for indicating the corresponding video note; for example, the note identifier includes, but is not limited to, a note title, a note number, and the like.
For example, assume that user A has created 5 video notes in total, video note 1 was created for video A, video note 2 was created for video B, video note 3 was created for video C, video note 4 was created for video D, and video note 5 was created for video E. Assuming that the user a logs in during the process of watching the video C, at this time, the 5 video notes created by the user a may be acquired from the video note library according to the user identifier of the user a, and displayed in the playing interface of the video C, as shown in fig. 2.
During the playing process of any video, all video notes created by the login user can be acquired, and the acquired video notes can include video notes associated with the played video as well as video notes not associated with it. That is, every video note created by the login user can be displayed during the playing of a given video, so that the user can subsequently add to, modify, or delete any video note and can record notes related to the currently played video flexibly and efficiently.
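For illustration only, the following is a minimal TypeScript sketch of how a player page might fetch and display the first video notes of the logged-in user. The endpoint path, the `fetchNotesByUser` helper, and the field names are assumptions used for the sketch, not part of any specific platform API.

```typescript
// Minimal sketch (hypothetical API): fetch all notes created by the logged-in
// user from the note library and render their note identifiers (e.g. titles)
// in a dedicated area of the target video's playing interface.

interface VideoNoteSummary {
  noteId: string;               // note identifier
  noteTitle: string;            // shown in the note list instead of the full content
  associatedVideoIds: string[]; // associated video list
}

// Hypothetical note-library endpoint returning every note of one user.
async function fetchNotesByUser(userId: string): Promise<VideoNoteSummary[]> {
  const res = await fetch(`/api/video-notes?userId=${encodeURIComponent(userId)}`);
  return res.json();
}

// Display only the note identifiers (titles) in the note area of the player page.
async function showFirstVideoNotes(userId: string, noteArea: HTMLElement): Promise<void> {
  const notes = await fetchNotesByUser(userId); // may include notes associated or not with the target video
  noteArea.replaceChildren(
    ...notes.map((note) => {
      const item = document.createElement("div");
      item.textContent = note.noteTitle; // note identifier, e.g. the note title
      item.dataset.noteId = note.noteId;
      return item;
    }),
  );
}
```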
In an optional implementation manner of this embodiment, in addition to obtaining each video note created by the logged-in user and displaying the video note on the video playing interface of the target video, the method may also obtain video notes created by other users and also display the video notes in the video playing interface of the target video, that is, after obtaining at least one first video note created by the logged-in user according to the user identifier of the logged-in user, further including:
acquiring a corresponding second video note according to the video identifier and the video type of the target video, wherein the second video note is created by a user different from the login user, and the second video note comprises video notes associated and/or not associated with the target video;
correspondingly, displaying a note title of the at least one first video note on a playing interface of the target video, including:
displaying note titles of the at least one first video note and the second video note on a playing interface of the target video.
Specifically, the video identifier may refer to a symbol, a number, or a text that can uniquely identify a video, for example, the video identifier may be a video ID; the video type may refer to a type to which the video belongs, such as education, business, entertainment, and the like, and certainly, in practical applications, the video may also be a more subdivided type, such as a mathematical examination education type video, an english examination education type video, and the like, which is not limited in this application. In addition, the second video note refers to a video note created by a user other than the logged-in user, and thus the second video note may also include video notes associated and/or unassociated with the target video.
In practical application, when second video notes created by users other than the login user are obtained, there may be too many of them to display completely. Therefore, the second video notes associated with the target video can be obtained based on the video identifier of the target video, that is, selected from the large number of video notes. In addition, since a user may want to jump to other videos of the same type based on video notes created by other users while watching a video, and each video note may also carry a video type, video notes corresponding to videos of the same type as the target video can be screened out from the second video notes created by other users according to the video type of the target video.
It should be noted that the first video note created by the login user is a record of the corresponding video, so that the login user can perform editing operations such as addition, deletion, modification and the like on the first video note created by the login user; and the second video notes created by other users are records of the corresponding videos by other users, so that the login user can only check and click without obtaining editing authorization, and cannot execute editing operations such as addition, deletion, modification and the like, namely the second video notes are only used for displaying included note items and adjusting the playing progress.
In specific implementation, note titles of the first video note and the second video note can be displayed on a playing interface of the target video, and the note titles of the first video note and the second video note can be displayed in different areas.
In addition, because the number of acquired first video notes and second video notes may be large, a user may not be able to quickly find the required video note among those displayed. The video note display area may therefore further include a note search box, in which the user can enter a keyword or an identifier of the required video note to search for it, quickly find it, and perform subsequent viewing or editing operations.
Following the above example, video notes 1-5 are the video notes created by user A. In addition, video note 6 and video note 7, which are associated with video C and were created by user B, are obtained according to the video identifier of video C; and video note 8, created by user B for video F (also an education-type video), and video note 9, created by user C for video G (also an education-type video), are obtained according to the video type "education" of video C. At this time, video notes 1-5 are displayed in the self-created note area and video notes 6-9 are displayed in the non-self-created note area, as shown in FIG. 3.
According to the embodiment of the present application, not only the first video notes created by the login user but also the second video notes created by other users can be obtained and displayed simultaneously in the playing interface of the target video. This makes it convenient for the user to view and edit the video notes they created, to view note content created by other users, and to flexibly jump to the corresponding playing moment in the corresponding video.
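For illustration, the selection of second video notes described above can be sketched as a simple filter. The names below (such as `selectSecondVideoNotes` and the per-note `videoType` field) are assumptions chosen to mirror the description.

```typescript
// Sketch (hypothetical names): obtain second video notes, i.e. notes created by
// other users, either because they are associated with the target video or
// because they were created for videos of the same type as the target video.

interface SecondVideoNote {
  noteId: string;
  noteTitle: string;
  creatorId: string;
  associatedVideoIds: string[];
  videoType: string; // e.g. "education"
}

function selectSecondVideoNotes(
  allNotes: SecondVideoNote[],
  loginUserId: string,
  targetVideoId: string,
  targetVideoType: string,
): SecondVideoNote[] {
  return allNotes.filter(
    (note) =>
      note.creatorId !== loginUserId &&                      // created by a user other than the login user
      (note.associatedVideoIds.includes(targetVideoId) ||    // associated with the target video
        note.videoType === targetVideoType),                 // or created for a video of the same type
  );
}
```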
Step 104: in the case that a first target video note is a video note which is not associated with the target video, in response to an association operation for the first target video note, adding the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note.
Specifically, the first target video note may be any one of video notes created by the login user, that is, the first target video note may be any one of the first video notes. It should be noted that, after the user selects the first target video note, if the selected first target video note is not associated with the target video, the user may associate the first target video note with the target video through a preset association operation.
The association operation may be an operation of clicking an association control, or an operation of adding note content related to the target video to the first target video note. That is, the first target video note may be associated with the target video by adding note content related to the target video to it, or it may be associated with the target video directly through a preset association control without adding such note content, which is not limited in the present application.
In practical application, when the association operation for the first target video note is detected, it is described that the user wants to associate the first target video note with the target video, and at this time, the video identifier of the target video may be added to the association relationship between the note identifier and the video identifier of the first target video note, so as to associate the first target video note with the target video.
In an optional implementation manner of this embodiment, associating the first target video note with the target video by adding the note content related to the target video to the first target video note, that is, in response to the association operation for the first target video note, before adding the video identifier of the target video in the association relationship between the note identifier and the video identifier of the first target video note, the method further includes:
in the case that an adding operation aiming at a first target video note is detected, generating a note entry corresponding to the adding operation in the first target video note, or adding note content of the target video in the note entry corresponding to the adding operation;
determining that the association operation is detected.
When the first target video note is associated with the target video by adding the note content related to the target video to the first target video note, the association operation for the first target video note is the addition operation. The adding operation refers to an operation of adding a time node and note content related to a target video in a first target video note after a login user selects the first target video note from at least one displayed video note, for example, the adding operation may be an operation triggered by inserting a time stamp control in an editing area after entering the first target video note.
In practical application, a user can randomly select one video note from at least one displayed video note, namely a first target video note, in the process of watching a target video; after the first target video note is selected, the playing interface of the target video may display detailed information of the first target video note, where the detailed information may include an editing control, each note entry, and the like, each note entry may include a timestamp and note data, and the editing control may include an adding control, a modifying control, a deleting control, and the like. Then, a login user can generate a new note entry in the first target video note by triggering the adding control, and the related note content of the target video is added into the newly generated note entry; or, the login user can select a note entry in the first target video note, and the note entry is modified through the modification control, so that the related note content of the target video is added into the note entry.
One video note in the present application can include one or more note entries, and different note entries can correspond to different videos. The user can generate a new note entry in the selected first target video note and add note content related to the target video to it, or edit an existing note entry of the first target video note and add the related content there; in either case, the note content related to the target video is added to the selected video note. The operation modes for associating the first target video note with the target video are therefore flexible and diverse, can meet the requirements of different application scenarios, and have high adaptability and flexibility.
In an optional implementation manner of this embodiment, a special format of the video note may be set, so that the video note may record content such as video information and note information, that is, a note entry corresponding to the adding operation is generated in the first target video note, and a specific implementation process may be as follows:
determining a first video marking progress corresponding to the adding operation in the target video, acquiring a video title and a video identifier of the target video, and taking the first video marking progress, the video title and the video identifier as time marks;
acquiring note content input by a login user, determining note format parameters of the note content, and taking the note content and the note format parameters as note data;
and adding the time stamp and the note data into the first target video note, and generating a note entry corresponding to the adding operation.
Specifically, the first video mark progress may refer to a playing progress time in the target video; the first video mark progress corresponding to the adding operation in the target video may be the playing progress time of the target video at the moment the adding operation is performed, such as 90 seconds or 120 seconds. A time stamp is data describing an arbitrary playing moment of the video and, for ease of reading, is usually displayed in an "hour:minute:second + video title" format, for example 90 seconds is displayed as "01:30". Additionally, the note content may include text and/or pictures.
In practical applications, the data structure of the video note may be an array structure, each element in the array represents a time stamp recorded by a user or note data with style information, and the time stamp may include a first video stamp progress, a video title, and a video identifier, where the first video stamp progress is used to describe a playing progress time when adding the record, the video title is used to show a video to which a current time stamp belongs in the multi-video note content, and the video identifier is used to obtain the video to which the time stamp belongs. The note data may include note specific content entered by the user, as well as note format parameters describing font size, color, and background color.
In addition, the login user can input the note content in the text form, and adds corresponding playing progress time (first video marking progress) for the note content in the text form through a control for adding a time mark; or the login user can upload the picture as note content and add corresponding playing progress time for the picture through a control added with a time mark; or the login user can directly capture the target video, the capture is used as the note content, and the playing progress moment corresponding to the capture is obtained and used as the first video marking progress.
In an optional implementation manner of this embodiment, the timestamp may further include a status identifier of the timestamp in addition to the first video marker progress, the video title, and the video identifier, where the status identifier is used to indicate whether the video to which the timestamp belongs is valid, that is, the timestamp further includes the status identifier; after the video title and the video identifier of the target video are obtained, the method further comprises the following steps:
and setting the state flag of the time mark as valid.
It should be noted that, during the playing process of the target video, when a new note entry is added to the first target video note based on the target video, the target video can be normally played, and thus the status flag of the added timestamp should be valid. In the subsequent process, if the target video is deleted, that is, the target video cannot be normally played through the video identifier of the target video, the state identifier of the time mark of the note entry related to the target video may be modified to be invalid. Therefore, whether the video to which the time mark belongs is still valid, namely whether the video can be played normally can be determined simply and quickly through the state identification of the time mark.
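For illustration, the note body array and the time mark fields described above can be sketched in TypeScript as follows. The field names are illustrative choices that mirror the description, not a mandated schema.

```typescript
// Sketch of the note body as an array whose elements are either a time mark or
// note data with style information (field names are illustrative).

interface TimeMark {
  second: number;      // first video mark progress: playing progress time when the record was added
  videoTitle: string;  // shows which video the time mark belongs to in a multi-video note
  videoId: string;     // used to fetch the video the time mark belongs to
  valid: boolean;      // status identifier: whether the video can still be played
}

interface NoteData {
  content: string;     // note content entered by the user (text, or a picture reference)
  format: { size?: string; color?: string; background?: string; bold?: boolean }; // note format parameters
}

type NoteEntryElement = { timeMark: TimeMark } | { noteData: NoteData };

// A note entry added in step 104 pairs a time mark with its note data.
const exampleEntry: NoteEntryElement[] = [
  { timeMark: { second: 90, videoTitle: "title XX", videoId: "video_id_01", valid: true } },
  { noteData: { content: "content XX", format: { size: "16px", bold: true } } },
];
```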
In an optional implementation manner of this embodiment, the data structure of a video note may be a JSON (JavaScript Object Notation, a lightweight data interchange format) structure. The JSON structure may include an associated video list, a note title, a note text, and the like; that is, each video note includes a list of the videos associated with it, and whether the first target video note is associated with the target video can be determined through this list. In other words, the first target video note includes an associated video list. After the playing interface of the target video displays the at least one video note, the method further includes:
determining whether the target video is a video in the associated video list or not according to the video identifier of the target video;
if so, determining that the first target video note is a video note associated with the target video;
if not, determining that the first target video note is a video note which is not associated with the target video.
It should be noted that, if the video identifier of the target video is in the associated video list, it indicates that the first target video note is a video note associated with the target video; and if the video identifier of the target video is not in the associated video list, it indicates that the first target video note is a video note which is not associated with the target video.
In practical applications, the video note may include an associated video identifier list, a note title, and a note body content, the note body content may include note data and a time stamp, the note data may include the note content input by the user and note format parameters describing font size, color, and background, and the time stamp may include a corresponding video identifier, a first video stamp progress, a corresponding video title, and a status identifier.
For example, fig. 4 is a schematic data structure diagram of a video note provided in an embodiment of the present application. As shown in fig. 4, note identifiers (Note IDs) of a plurality of video notes may be included under one video identifier (video ID, e.g., video_id_01). Each note identifier corresponds to a list of associated video identifiers (e.g., video_id_01, video_id_02, video_id_03, and so on), a note title (Note Title), and note body content (Content). The note body content includes note data and a time stamp: the note data includes the note content entered by the user (e.g., "insert": "content XX") and the note format parameters describing font size, color, and background (e.g., "attributes": {"size": "16px", "bold": true}); the time stamp includes the corresponding video identifier (e.g., video_id_01), the first video mark progress (playing progress time, e.g., 60), the corresponding video title (e.g., title XX), and the status identifier (whether the video it belongs to is valid, e.g., true).
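For illustration, a single video note in the JSON-style structure of fig. 4 might look like the following object literal. The values are taken from the example above; the note identifier and the field names are illustrative assumptions rather than a fixed schema.

```typescript
// Illustrative instance of one video note mirroring the structure of fig. 4.
const videoNoteExample = {
  noteId: "note_id_01",                                        // hypothetical note identifier
  associatedVideoIds: ["video_id_01", "video_id_02", "video_id_03"], // associated video list
  noteTitle: "Note Title",
  content: [
    {
      insert: "content XX",                                    // note content entered by the user
      attributes: { size: "16px", bold: true },                // note format parameters
    },
    {
      timeMark: {
        videoId: "video_id_01", // video the time stamp belongs to
        second: 60,             // first video mark progress (playing progress time)
        videoTitle: "title XX",
        valid: true,            // status identifier
      },
    },
  ],
};
```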
In practical applications, the video note editing function can be provided through a rich text editor, which is a text editor that can be embedded in a browser and can support setting of various text formats, such as font size, color, and the like.
In specific implementation, an add-time-mark button can be provided on the toolbar at the top of the rich text editor. A login user (that is, the creator) can click it at any point during playback of the target video; when clicked, the second count of the current playing moment of the player (that is, the first video mark progress) is obtained and stored in the "second" field of the time mark data structure. Then, the unique source ID of the currently played target video is obtained and stored in the "video source unique ID" field of the time mark data structure, the status identifier of the time mark is set to valid, and the video title of the currently played target video is obtained and stored in the "video title" field of the time mark data structure. Next, the obtained second count can be formatted into the "hour:minute:second" form, a <div> tag can be created, each item contained in the time mark data structure can be added to the attributes of the <div> tag, and the formatted time can be inserted into the display content of the <div> tag. Finally, the cursor position of the login user in the rich text editor is obtained, the created <div> node carrying the time mark information is inserted at the cursor position, and the <div> node is displayed in the rich text editor.
For example, fig. 5 is a schematic diagram of a generation process of a video note provided in an embodiment of the present application. As shown in fig. 5, after the obtain-time-mark button is clicked, the title of the playing video, the source ID of the playing video, and the current playing second count (for example, 90 seconds, which can be formatted as "01:30") are obtained. Thereafter, a time mark <div> is generated and inserted into the rich text editor.
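The <div>-based insertion described above can be sketched as follows. The way the playing second, video source ID, and title are obtained from the player is assumed (any player API exposing these values would do); the cursor insertion uses the standard browser Selection and Range APIs.

```typescript
// Sketch of inserting a time mark <div> into a rich text editor at the cursor.

function formatSeconds(totalSeconds: number): string {
  const h = Math.floor(totalSeconds / 3600);
  const m = Math.floor((totalSeconds % 3600) / 60);
  const s = Math.floor(totalSeconds % 60);
  const pad = (n: number) => String(n).padStart(2, "0");
  // e.g. 90 seconds -> "00:01:30"; a player may drop the hour part and show "01:30"
  return `${pad(h)}:${pad(m)}:${pad(s)}`;
}

function insertTimeMarkDiv(second: number, videoId: string, videoTitle: string): void {
  const div = document.createElement("div");
  // Store every field of the time mark data structure as attributes of the <div>.
  div.dataset.second = String(second);
  div.dataset.videoId = videoId;
  div.dataset.videoTitle = videoTitle;
  div.dataset.valid = "true"; // status identifier set to valid on creation
  div.textContent = `${formatSeconds(second)} ${videoTitle}`; // readable display content

  // Insert the node at the creator's current cursor position in the editor.
  const selection = window.getSelection();
  if (selection && selection.rangeCount > 0) {
    selection.getRangeAt(0).insertNode(div);
  }
}
```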
In an optional implementation manner of this embodiment, after the playing interface of the target video displays at least one video note, the user may also create a new video note for the target video without selecting an existing video note, that is, after the playing interface of the target video displays at least one video note, the method further includes:
under the condition that a note creating operation is detected, creating a second target video note and generating a note identifier of the second target video note;
under the condition that a time mark adding operation is detected, determining a corresponding second video mark progress of the time mark adding operation in the target video, and adding the second video mark progress to the second target video note;
and storing the association relationship between the note identifier of the second target video note and the video identifier of the target video.
Specifically, the note creating operation is an operation triggered by a preset note creating control, and when the note creating operation is detected, it indicates that the login user wants to create a new video note, and records note content related to the target video, so that a second target video note can be created at this time, and a note identifier of the second target video note is generated, where the second target video note is a blank and new video note.
In addition, the timestamp adding operation may refer to an operation of inserting content into a newly created second target video note, and in a case that the timestamp adding operation is detected, it indicates that the user wants to insert a corresponding playing time into the newly created second target video note for recording, so that a corresponding second video tag progress of the timestamp adding operation in the target video may be determined at this time, and the second video tag progress is added into the second target video note.
In practical applications, after the second target video note is created for the target video, the association relationship between the second target video note and the target video needs to be stored correspondingly, so as to associate the second target video note with the target video.
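As an illustration of the note creation and time mark adding operations above, the following is a small sketch. The identifier generation and the in-memory note shape are assumptions made for the example.

```typescript
// Sketch (hypothetical shapes): create a second target video note for the target
// video and record the association between its note identifier and the target
// video's identifier, then add second video mark progresses as entries.

interface NewVideoNote {
  noteId: string;
  noteTitle: string;
  associatedVideoIds: string[];
  entries: { second: number; videoId: string }[];
}

function createSecondTargetNote(targetVideoId: string, title: string): NewVideoNote {
  return {
    noteId: crypto.randomUUID(),         // generated note identifier
    noteTitle: title,
    associatedVideoIds: [targetVideoId], // association stored on creation
    entries: [],
  };
}

// On a time-mark adding operation, record the second video mark progress
// (current playing time of the target video) into the new note.
function addTimeMark(note: NewVideoNote, targetVideoId: string, currentSecond: number): void {
  note.entries.push({ second: currentSecond, videoId: targetVideoId });
}
```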
In an optional implementation manner of this embodiment, after the playing interface of the target video displays the at least one video note, besides creating a new video note for the target video or adding a new note entry to an existing video note, the user may also delete a note entry included in a displayed video note. That is, after the playing interface of the target video displays the at least one video note, the method further includes:
receiving a deletion operation for a third target video note, and deleting a note entry indicated by the deletion operation in the third target video note;
judging whether a first video identifier is included in a second video identifier, wherein the first video identifier is a video identifier corresponding to the deleted note entry, and the second video identifier is a video identifier corresponding to the remaining note entries of the third target video note;
and if not, deleting the first video identifier in the association relationship between the note identifier and the video identifier of the third target video note.
Specifically, the third target video note is any one of the video notes created by the login user, that is, the third target video note is any one of the first video notes. The deletion operation refers to an operation of deleting a note entry in a third target video note after the login user selects the third target video note from the displayed at least one video note, for example, the deletion operation may be an operation triggered by a deletion control of a note entry after entering the third target video note.
It should be noted that, after deleting the note entry indicated by the deletion operation, it is necessary to determine the first video identifier corresponding to the deleted note entry, and determine whether the third target video note further includes the note entry corresponding to the first video identifier, so as to determine whether the third target video note is further associated with the video corresponding to the first video identifier after deleting the note entry. If the second video identifications corresponding to the remaining note entries of the third target video note do not include the first video identification corresponding to the deleted note entry, it is indicated that the third target video note is unrelated to the video corresponding to the first video identification, and at this time, the first video identification can be deleted in the original association relationship between the note identification and the video identification of the third target video note, so that the third target video note is unrelated to the video corresponding to the first video identification.
For example, the third target video note is a video note 1, which includes 3 note entries, where the note entry 1 corresponds to a video a, the note entry 2 corresponds to a video B, and the note entry 3 corresponds to a video C, and at this time, the association relationship between the note identifier and the video identifier of the video note 1 is shown in table 1 below. Assuming that the user deletes the note entry 2 after selecting the video note 1, since the remaining note entries 1 and 3 are unrelated to the video B after deleting the note entry 2, the video B in the following table 1 is deleted, and at this time, the association relationship between the note identifier and the video identifier of the video note 1 is shown in the following table 2.
Table 1 Association between the note identifier and the video identifiers of video note 1
Note identifier: video note 1; associated video identifiers: video A, video B, video C
Table 2 Updated association between the note identifier and the video identifiers of video note 1
Note identifier: video note 1; associated video identifiers: video A, video C
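A sketch of the deletion logic illustrated by Tables 1 and 2 is given below, assuming the in-memory note shape used in the earlier sketches.

```typescript
// Sketch: delete a note entry from a note and, if no remaining entry still
// references the deleted entry's video, remove that video from the note's
// associated video list (as in the video B example of Tables 1 and 2).

interface NoteEntryRef {
  entryId: string;
  videoId: string; // video identifier the entry was created for
}

interface EditableNote {
  noteId: string;
  associatedVideoIds: string[];
  entries: NoteEntryRef[];
}

function deleteNoteEntry(note: EditableNote, entryId: string): void {
  const deleted = note.entries.find((e) => e.entryId === entryId);
  if (!deleted) return;

  note.entries = note.entries.filter((e) => e.entryId !== entryId);

  // First video identifier: video of the deleted entry.
  // Second video identifiers: videos of the remaining entries.
  const stillReferenced = note.entries.some((e) => e.videoId === deleted.videoId);
  if (!stillReferenced) {
    note.associatedVideoIds = note.associatedVideoIds.filter((id) => id !== deleted.videoId);
  }
}
```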
The embodiment of the present application provides a data structure for the video note. The data structure allows a note creator to add custom time marks and note data, and a time mark allows a note viewer to quickly locate the marked moment. Therefore, the present application supports a note creator in generating, creating, modifying, and deleting note content, so that the content can be stored and shared conveniently without secondary modification or editing of the video.
In practical application, adding the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note may be implemented by adding a correspondence between the note identifier of the first target video note and the video identifier of the target video to the original association relationship between note identifiers and video identifiers.
In an optional implementation manner of this embodiment, if the first target video note stores each video associated with the first target video note in the form of an associated video list, the associated video list may be updated, that is, the video identifier of the target video is added to the association relationship between the note identifier and the video identifier of the first target video note, and a specific implementation process may be as follows:
adding the video identification of the target video to the associated video list of the first target video note.
For example, assuming that the first target video note is a video note 3 in which note entries related to video a, video B and video C are stored, that is, the associated video list of the video note 3 is shown in table 3 below, and assuming that the target video is a video D, after a note entry corresponding to the video D is generated in the video note 3, the video D may be added in the association relationship shown in table 3 below, so as to obtain an updated associated video list of the video note 3 shown in table 4 below.
Table 3 Associated video list of video note 3
Associated videos: video A, video B, video C
Table 4 Updated associated video list of video note 3
Associated videos: video A, video B, video C, video D
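For illustration, the list update of step 104 (cf. Tables 3 and 4) can be sketched as follows; the function name and the note shape are assumptions.

```typescript
// Sketch of step 104's association update: add the target video's identifier to
// the selected note's associated video list if it is not already present.

function associateNoteWithVideo(note: { associatedVideoIds: string[] }, targetVideoId: string): void {
  if (!note.associatedVideoIds.includes(targetVideoId)) {
    note.associatedVideoIds.push(targetVideoId);
  }
}

// Example matching Tables 3 and 4:
const videoNote3 = { associatedVideoIds: ["video A", "video B", "video C"] };
associateNoteWithVideo(videoNote3, "video D");
// videoNote3.associatedVideoIds is now ["video A", "video B", "video C", "video D"]
```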
In the embodiment of the present application, any one of the video notes displayed in the playing interface of the target video can be selected, and the selected video note need not already be associated with the currently played target video. The note content to be recorded is then added to the selected video note, and the video identifier of the target video is added to the original association relationship between the note identifier and the video identifier of the first target video note. As a result, the first target video note can correspond to more than one video identifier, that is, it can include note content for different videos, so the note contents of different videos can be recorded in the same video note. This makes it convenient for the user to view and operate the video note later and greatly improves flexibility.
In an optional implementation manner of this embodiment, in the process of watching the target video, a user may view each video note displayed on the play interface of the target video, and the user may skip to the corresponding video and play at the time mark by clicking a certain time mark in a certain video note, that is, after at least one video note is displayed on the play interface of the target video in this embodiment, the method may further include:
under the condition that the selection operation of a target note item aiming at a fourth target video note is detected, analyzing the target note item, and obtaining a target time mark included by the target note item;
determining whether the video corresponding to the target note entry is the target video or not according to the target time mark;
under the condition that the video corresponding to the target note item is not the target video, switching the currently played target video to the video corresponding to the target note item;
and adjusting the playing progress of the video corresponding to the target note item according to the target time mark.
In practical applications, the fourth target video note may be a video note created by the login user or any one of video notes created by other users, that is, the fourth target video note may be any one of the first video note or the second video note. The target note entry is a note entry selected from the displayed detailed information after the user clicks into the fourth target video note.
It should be noted that, after a user selects a certain note entry, the note entry may be parsed to obtain a corresponding target time stamp, and since the note entry selected by the user is not necessarily a note corresponding to a currently played video, it may also be determined whether the note entry selected by the user is a note corresponding to the currently played video based on the obtained target time stamp, and if not, a video corresponding to the note entry selected by the user is skipped to, and then the playing progress of the skipped video is adjusted.
In an optional implementation manner of this embodiment, when a note entry is generated, a corresponding video identifier is added to a time stamp of a video note, that is, the obtained target time stamp should include a corresponding video identifier, and at this time, according to the target time stamp, it is determined whether a video corresponding to the target note entry is the target video, where a specific implementation process may be as follows:
determining a video identification included by the target timestamp;
and under the condition that the video identification included by the target time mark is different from the video identification of the target video, determining that the video corresponding to the target note entry is not the target video.
It should be noted that, if the video identifier included in the target time stamp is different from the video identifier of the target video, it is indicated that the video corresponding to the target note entry is not the target video, and at this time, the video corresponding to the video identifier included in the target time stamp should be skipped to first, and then the playing progress is adjusted.
In an optional implementation manner of this embodiment, when a note entry is generated, a corresponding video tagging progress is recorded in a time tag of a video note, that is, a playing progress time is recorded, that is, an obtained target time tag should include a target video tagging progress, and at this time, according to the target time tag, a playing progress of a video corresponding to the target note entry is adjusted, where a specific implementation process may be as follows:
determining a target video marking progress included by the target time mark;
and adjusting the playing progress of the video corresponding to the target note entry to the target video marking progress.
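For illustration, the handling of a click on a time mark, as described in the steps above, can be sketched with a hypothetical player interface; `loadVideo` and `seekTo` stand for whatever the actual player exposes for switching videos and adjusting the playing progress.

```typescript
// Sketch of handling a click on a note entry's time mark: compare the time mark's
// video identifier with the currently played video, switch videos if they differ,
// then adjust the playing progress to the marked second.

interface PlayerLike {
  currentVideoId: string;
  loadVideo(videoId: string): void; // hypothetical: switch the played video
  seekTo(second: number): void;     // hypothetical: adjust the playing progress
}

interface ClickedTimeMark {
  videoId: string; // video identifier included in the target time mark
  second: number;  // target video mark progress
  valid: boolean;  // status identifier
}

function onTimeMarkClicked(player: PlayerLike, mark: ClickedTimeMark): void {
  if (!mark.valid) return; // only valid marks trigger the lookup and jump

  if (mark.videoId !== player.currentVideoId) {
    // The note entry belongs to another video: switch first, then seek.
    player.loadVideo(mark.videoId);
  }
  player.seekTo(mark.second); // e.g. 90 -> jump to the 01:30 position
}
```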
For example, assuming that the currently played video is video C, the user selects note entry 1 of video note 4 from the video notes shown on the playing interface of video C. Assuming that note entry 1 is a note corresponding to video A and the time stamp of note entry 1 includes a target video mark progress of "01:30", the currently played video C can be switched to video A, and the playing progress of video A can then be adjusted to the 1 minute 30 second position.
For example, fig. 6 is a schematic diagram of a single video note mapped to multiple videos provided by an embodiment of the present application. As shown in fig. 6, a note creator watches video A, video B, and video C, and adds text content (i.e., note content) and time stamps in the rich text editor, for example time stamp A for a playing moment of video A, and time stamps B1 and B2 for playing moments of video B. A note reader can read the text content and click a time stamp; when time stamp A is clicked, playback jumps to the corresponding moment of video A, and when time stamp B1 or B2 is clicked, playback jumps to the corresponding moment of video B.
In an optional implementation manner of this embodiment, a status identifier is set in the time stamp when the note entry is generated, that is, the obtained target time stamp should include the status identifier. Some videos may be deleted over time or because of their content, and after a video is deleted it can no longer be found and played via its video identifier. If the corresponding video cannot be found according to the video identifier, the status identifier in the time stamp can be modified, which avoids useless search operations next time. That is, switching the currently played target video to the video corresponding to the target note entry includes:
searching a corresponding video to be jumped according to the video identification included by the target time mark;
and under the condition that the video to be skipped is found, switching the currently played target video into the video to be skipped.
It should be noted that, when the video to be skipped is found, the video to be skipped can be played normally without modifying the state identifier in the time stamp, at this time, the currently played target video can be directly switched to the video to be skipped, and the playing progress is subsequently adjusted.
In addition, in the case that the video to be jumped to cannot be found, the status identifier in the target time mark can be updated to invalid and an error prompt can be displayed. That is, when the video to be jumped to is not found, it indicates that the video is abnormal and cannot be played; the status identifier in the target time mark is therefore updated to invalid and an abnormality prompt is displayed. The target note entry can subsequently be set to a display-only state that does not trigger a jump, which avoids useless clicks and useless parsing and processing operations.
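The lookup-and-invalidate fallback described above can be sketched as follows; `findVideoById`, `showErrorPrompt`, and the player methods are hypothetical names standing in for the platform's actual lookup, prompt, and player APIs.

```typescript
// Sketch of the fallback: if the video referenced by the time mark can no longer
// be found, mark the time mark invalid and show an error prompt instead of jumping.

async function jumpOrInvalidate(
  mark: { videoId: string; second: number; valid: boolean },
  player: { loadVideo(videoId: string): void; seekTo(second: number): void },
  findVideoById: (videoId: string) => Promise<boolean>,  // hypothetical resolver
  showErrorPrompt: (message: string) => void,            // hypothetical UI prompt
): Promise<void> {
  const found = await findVideoById(mark.videoId);
  if (found) {
    player.loadVideo(mark.videoId); // switch to the video to be jumped to
    player.seekTo(mark.second);     // then adjust the playing progress
  } else {
    mark.valid = false; // update the status identifier to invalid
    showErrorPrompt("The video for this note entry is no longer available.");
    // The entry can subsequently be rendered as display-only so it no longer triggers jumps.
  }
}
```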
In an optional implementation manner of this embodiment, the time mark may include a status identifier, through which it can be determined whether the video to which the time mark belongs is still valid and, accordingly, whether to perform the subsequent video search and jump operations. That is, before determining, according to the target time mark, whether the video corresponding to the target note entry is the target video, the method may further include:
acquiring a state identifier included in the target time mark;
and executing the operation step of determining whether the video corresponding to the target note entry is the target video or not under the condition that the state mark included in the target time mark is valid.
It should be noted that the operation step of determining whether the video corresponding to the target note entry is the target video is executed only when the status identifier included in the target time mark is valid, which avoids executing useless redundant operations and saves processing resources.
According to the video note generation method described above, the corresponding note content can be viewed and/or recorded directly in the playing interface of the target video through the video note, without recording video-related notes in a separate notebook or office software, which greatly improves note-taking efficiency during video playback. In addition, any one of the video notes displayed in the playing interface of the target video can be selected, even one not yet associated with the currently played target video, and the video identifier of the target video is then added to the original association relationship between the note identifier and the video identifier of the selected video note. The selected video note can therefore correspond to more than one video identifier, that is, it can include note content for different videos, so the note contents of different videos can be recorded in the same video note, which is convenient for subsequent viewing and operation by the user and greatly improves flexibility.
Moreover, in the embodiment of the application, based on the video note with the time mark created by the user, a backtracking function can be provided through content prompt and the time mark of the video note, so that the video can be quickly jumped to the corresponding moment of the video, and the experience of watching the video is improved.
In the following, with reference to fig. 7, the video note generating method provided by the present application is further described by taking an application of the video note generating method in a cooking teaching video as an example. Fig. 7 shows a processing flow chart of a video note generation method applied to a cooking teaching video according to an embodiment of the present application, which specifically includes the following steps:
step 702: and displaying at least one video note on a playing interface of the family dish teaching video.
The video notes may include at least one of video notes created by the currently logged-in user and video notes created by other users, and a video note may be associated and/or not associated with the family dish teaching video.
Step 704: in the case that an adding operation for a first target video note is detected, a note entry corresponding to the adding operation is generated in the first target video note.
In practical application, a first video marking progress corresponding to the adding operation in the family dish teaching video can be determined, a video title and a video identifier of the family dish teaching video are obtained, and the first video marking progress, the video title and the video identifier are used as time marks; then, acquiring note content input by a login user, determining note format parameters of the note content, and taking the note content and the note format parameters as note data, wherein the note content comprises texts and/or pictures; and then, adding the time stamp and the note data into the first target video note, and generating a note entry corresponding to the adding operation.
Step 706: in the case that the first target video note is a video note which is not associated with the family dish teaching video, adding the video identifier of the family dish teaching video to the association relationship between the note identifier and the video identifier of the first target video note.
Step 708: in a case that a note creation operation is detected, a second target video note is created, and a note identification of the second target video note is generated.
Step 710: in the case that a time mark adding operation is detected, determining a second video mark progress corresponding to the time mark adding operation in the family dish teaching video, and adding the second video mark progress to the second target video note.
Step 712: storing the association relationship between the note identifier of the second target video note and the video identifier of the family dish teaching video.
Step 714: receiving a deletion operation for a third target video note, and deleting a note entry indicated by the deletion operation in the third target video note.
Step 716: judging whether a second video identifier comprises a first video identifier, wherein the first video identifier is a video identifier corresponding to a deleted note entry, and the second video identifier is a video identifier corresponding to a remaining note entry of the third target video note;
step 718: and if not, deleting the first video identifier in the association relationship between the note identifier and the video identifier of the third target video note.
Step 720: in the case that a selection operation of a target note entry for a fourth target video note is detected, the target note entry is parsed, and a target time stamp included in the target note entry is obtained.
Step 722: determining, according to the target time mark, whether the video corresponding to the target note entry is the family dish teaching video, and, in the case that it is not, switching the currently played family dish teaching video to the video corresponding to the target note entry.
Step 724: adjusting the playing progress of the video corresponding to the target note entry according to the target time mark.
The video note generation method provided by the present application can record corresponding note content directly in the playing interface of the family dish teaching video, without recording video-related notes in a separate notebook or office software, which greatly improves note-taking efficiency during video playback. In addition, any one of the video notes displayed in the playing interface of the family dish teaching video can be selected, even one not yet associated with the currently played family dish teaching video. The note content to be recorded is then added to the selected video note, and the video identifier of the family dish teaching video is added to the original association relationship between the note identifier and the video identifier of the selected video note, so that the selected video note can correspond to more than one video identifier, that is, it can include note content for different videos. The note contents of different videos can therefore be recorded in the same video note, which is convenient for subsequent viewing and operation by the user and greatly improves flexibility.
Moreover, in the embodiment of the application, based on the video note with the time mark created by the user, a backtracking function can be provided through content prompt and the time mark of the video note, so that the video can be quickly jumped to the corresponding moment of the video, and the experience of watching the video is improved.
Corresponding to the above method embodiment, the present application further provides a video note generating apparatus embodiment, and fig. 8 shows a schematic structural diagram of a video note generating apparatus provided in an embodiment of the present application. As shown in fig. 8, the apparatus includes:
a display module 802 configured to display at least one video note on a play interface of a target video;
an adding module 804 configured to, in a case that a first target video note is a video note which is not associated with the target video, add a video identifier of the target video in an association relationship between a note identifier and a video identifier of the first target video note in response to an association operation for the first target video note.
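For illustration only, the two core modules of the apparatus could be organized as in the following sketch; the class and method names are illustrative assumptions, not a prescribed implementation.

```typescript
// Minimal sketch of the apparatus structure: a display module that shows video
// notes on the playing interface and an adding module that updates a note's
// associated video list in response to an association operation.

class DisplayModule {
  constructor(private noteArea: HTMLElement) {}

  // Display the note identifiers (e.g. titles) of the acquired video notes.
  display(noteTitles: string[]): void {
    this.noteArea.replaceChildren(
      ...noteTitles.map((title) => {
        const item = document.createElement("div");
        item.textContent = title;
        return item;
      }),
    );
  }
}

class AddingModule {
  // Called when an association operation is detected for a first target video
  // note that is not yet associated with the target video.
  associate(note: { associatedVideoIds: string[] }, targetVideoId: string): void {
    if (!note.associatedVideoIds.includes(targetVideoId)) {
      note.associatedVideoIds.push(targetVideoId);
    }
  }
}
```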
Optionally, the display module 802 is further configured to:
acquiring at least one first video note created by a login user according to a user identifier of the login user, wherein the at least one first video note comprises video notes associated and/or not associated with the target video;
and displaying the at least one first video note on a playing interface of the target video.
Optionally, the display module 802 is further configured to:
acquiring a corresponding second video note according to the video identifier and the video type of the target video, wherein the second video note is created by a user different from the login user, and the second video note comprises video notes associated and/or not associated with the target video;
displaying the at least one first video note and the second video note on a playing interface of the target video.
Optionally, the apparatus further comprises a generating module configured to:
in the case that an adding operation aiming at a first target video note is detected, generating a note entry corresponding to the adding operation in the first target video note, or adding note content of the target video in the note entry corresponding to the adding operation;
determining that the association operation is detected.
Optionally, the generation module is further configured to:
determining a first video marking progress corresponding to the adding operation in the target video, acquiring a video title and a video identifier of the target video, and taking the first video marking progress, the video title and the video identifier as time marks;
acquiring note content input by a login user, determining note format parameters of the note content, and taking the note content and the note format parameters as note data;
and adding the time stamp and the note data into the first target video note, and generating a note entry corresponding to the adding operation.
Optionally, the time stamp further comprises a status identifier; the generation module is further configured to:
and setting the state flag of the time mark as valid.
Optionally, the apparatus further comprises a creating module configured to:
under the condition that a note creating operation is detected, creating a second target video note and generating a note identifier of the second target video note;
under the condition that a time mark adding operation is detected, determining a second video mark progress corresponding to the time mark adding operation in the target video, and adding the second video mark progress to a second target video note;
and storing the association relationship between the note identification of the second target video note and the video identification of the target video.
Optionally, the first target video note includes an associated video list; the add module 804 is further configured to:
determining whether the target video is a video in the associated video list or not according to the video identifier of the target video;
if so, determining that the first target video note is a video note associated with the target video;
if not, determining that the first target video note is a video note not associated with the target video.
Optionally, the adding module 804 is further configured to:
adding the video identifier of the target video to the associated video list of the first target video note.
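The membership check against the associated video list and the subsequent addition of the video identifier might look roughly like the following; the field name associated_videos is an illustrative assumption.

```python
def is_associated(note: dict, target_video_id: str) -> bool:
    """Check the associated video list of the first target video note
    against the video identifier of the target video."""
    return target_video_id in note["associated_videos"]


def associate(note: dict, target_video_id: str) -> None:
    """Add the video identifier of the target video to the associated
    video list when the note is not yet associated (adding module 804)."""
    if not is_associated(note, target_video_id):
        note["associated_videos"].append(target_video_id)


note = {"note_id": "n1", "associated_videos": ["v1"]}
associate(note, "v2")
print(note["associated_videos"])  # ['v1', 'v2']
```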
Optionally, the apparatus further comprises a deletion module configured to:
receiving a deletion operation for a third target video note, and deleting a note entry indicated by the deletion operation in the third target video note;
judging whether a first video identifier is included in second video identifiers, wherein the first video identifier is the video identifier corresponding to the deleted note entry, and the second video identifiers are the video identifiers corresponding to the remaining note entries of the third target video note;
and if not, deleting the first video identifier in the association relationship between the note identifier and the video identifier of the third target video note.
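A sketch of the deletion module's check, assuming each note entry carries the video identifier of its time mark: the first video identifier is removed from the association relationship only when no remaining note entry still references it. The function and field names are hypothetical.

```python
def delete_note_entry(note: dict, entry_index: int, association: dict) -> None:
    """Delete the note entry indicated by the deletion operation, then drop
    the first video identifier from the association relationship if no
    remaining note entry still references it (deletion module)."""
    removed = note["entries"].pop(entry_index)
    first_video_id = removed["video_id"]
    remaining_ids = {e["video_id"] for e in note["entries"]}
    if first_video_id not in remaining_ids:
        association[note["note_id"]].discard(first_video_id)


note = {"note_id": "n1",
        "entries": [{"video_id": "v1"}, {"video_id": "v2"}, {"video_id": "v2"}]}
association = {"n1": {"v1", "v2"}}
delete_note_entry(note, 0, association)
print(association)  # {'n1': {'v2'}} -- 'v1' is no longer referenced by any entry
```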
Optionally, the apparatus further comprises a switching module configured to:
under the condition that a selection operation for a target note entry of a fourth target video note is detected, parsing the target note entry and obtaining a target time mark included in the target note entry;
determining whether the video corresponding to the target note entry is the target video or not according to the target time mark;
under the condition that the video corresponding to the target note entry is not the target video, switching the currently played target video to the video corresponding to the target note entry;
and adjusting the playing progress of the video corresponding to the target note entry according to the target time mark.
Optionally, the switching module is further configured to:
determining a video identifier included in the target time mark;
and under the condition that the video identifier included in the target time mark is different from the video identifier of the target video, determining that the video corresponding to the target note entry is not the target video.
Optionally, the switching module is further configured to:
determining a target video marking progress included in the target time mark;
and adjusting the playing progress of the video corresponding to the target note entry to the target video marking progress.
Optionally, the time mark further comprises a status identifier; the switching module is further configured to:
acquiring the status identifier included in the target time mark;
and executing the operation step of determining whether the video corresponding to the target note entry is the target video or not, under the condition that the status identifier included in the target time mark is valid.
Optionally, the switching module is further configured to:
searching for a corresponding video to be jumped to according to the video identifier included in the target time mark;
and under the condition that the video to be jumped to is found, switching the currently played target video to the video to be jumped to.
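Bringing the above refinements of the switching module together, the following non-authoritative sketch assumes a stub Player class and a hypothetical VIDEO_LIBRARY lookup table: it checks the status identifier of the target time mark, searches for the video to be jumped to, switches when the video identifiers differ, and then adjusts the playing progress to the target video marking progress.

```python
class Player:
    """Minimal stub standing in for a real video player."""

    def __init__(self, video_id: str):
        self.video_id = video_id
        self.position = 0.0

    def switch_to(self, video_id: str) -> None:
        self.video_id, self.position = video_id, 0.0

    def seek(self, seconds: float) -> None:
        self.position = seconds


VIDEO_LIBRARY = {"v1": "Lesson 3", "v2": "Lesson 4"}  # hypothetical lookup table


def on_note_entry_selected(entry: dict, player: Player) -> None:
    """Switching module: parse the target time mark, check its status
    identifier, switch videos when necessary, then adjust the progress."""
    mark = entry["time_mark"]
    if mark.get("status") != "valid":       # only valid time marks trigger a jump
        return
    target_video_id = mark["video_id"]
    if target_video_id != player.video_id:
        if target_video_id not in VIDEO_LIBRARY:
            return                          # video to be jumped to was not found
        player.switch_to(target_video_id)
    player.seek(mark["video_mark_progress"])  # target video marking progress


player = Player("v1")
on_note_entry_selected(
    {"time_mark": {"video_id": "v2", "status": "valid",
                   "video_mark_progress": 210.0}},
    player)
print(player.video_id, player.position)  # v2 210.0
```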
According to the video note generating apparatus described above, corresponding note content can be viewed and/or recorded directly in the playing interface of the target video through a video note, without recording video-related notes in a separate notebook or office software, which greatly improves note recording efficiency during video playing. In addition, a video note may be selected from the video notes displayed on the playing interface of the target video, and the selected video note may be one that is not yet associated with the currently played target video; the video identifier of the target video is then added to the existing association relationship between the note identifier and the video identifier of the selected video note, so that the selected video note can correspond to more than one video identifier. In other words, the selected video note can include note contents of different videos, so that the note contents of different videos can be recorded in the same video note, which facilitates subsequent viewing and operation by the user and provides great flexibility.
Moreover, in the embodiment of the present application, based on a video note with a time mark created by the user, a backtracking function can be provided through the content prompt and the time mark of the video note, so that playback can quickly jump to the corresponding moment of the video, improving the video watching experience.
The foregoing is an illustrative scheme of a video note generating apparatus of this embodiment. It should be noted that the technical solution of the video note generating apparatus and the technical solution of the video note generating method belong to the same concept, and details that are not described in detail in the technical solution of the video note generating apparatus can be referred to the description of the technical solution of the video note generating method.
Fig. 9 illustrates a block diagram of a computing device 900 provided in accordance with an embodiment of the present application. Components of the computing device 900 include, but are not limited to, a memory 910 and a processor 920. The processor 920 is coupled to the memory 910 via a bus 930, and a database 950 is used to store data.
Computing device 900 also includes an access device 940 that enables the computing device 900 to communicate via one or more networks 960. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 940 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-described components of computing device 900 and other components not shown in FIG. 9 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 9 is for purposes of example only and is not intended to limit the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 900 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 900 may also be a mobile or stationary server.
The processor 920 is configured to execute computer-executable instructions to implement the operation steps of the video note generation method.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video note generation method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video note generation method.
An embodiment of the present application also provides a computer-readable storage medium, which stores computer-executable instructions, which are executed by a processor to implement the operation steps of the video note generating method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the video note generation method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the video note generation method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the above method embodiments are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the acts and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the present application. The alternative embodiments are not described exhaustively, and the present application is not limited to the precise implementations disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present application and its practical applications, so as to enable others skilled in the art to best understand and utilize the present application. The present application is limited only by the claims and their full scope and equivalents.