
Video playing method, device, equipment, medium and product

Info

Publication number
CN118890513B
CN118890513B
Authority
CN
China
Prior art keywords
video
character
role
playing
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411149517.7A
Other languages
Chinese (zh)
Other versions
CN118890513A (en)
Inventor
林婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202411149517.7A
Publication of CN118890513A
Application granted
Publication of CN118890513B
Legal status: Active (Current)
Anticipated expiration

Abstract

Translated from Chinese


This application is a divisional application of Chinese application 202210225472.1. This application discloses a video playing method, apparatus, device, medium and product, relating to the field of multimedia technology. The method includes: in response to a search operation for a target video, displaying a character relationship graph corresponding to the target video in a video content introduction interface of the target video, the character relationship graph including a first character control corresponding to a first character; in response to receiving a trigger operation on the first character control, playing a first video clip associated with the first character; in response to a second character appearing in the playback picture of the first video clip, displaying a second character control corresponding to the second character, where a character association relationship exists between the second character and the first character; and in response to receiving a trigger operation on the second character control, playing at least one second video clip associated with the second character. This makes it easy for users to quickly understand the relationships between the characters in the target video and improves the efficiency of understanding its plot.

Description

Video playing method, device, equipment, medium and product
This application is a divisional application of the patent application with application number 202210225472.1, entitled "Video playing method and device", filed on March 9, 2022.
Technical Field
The embodiments of this application relate to the field of multimedia technology, and in particular to a video playing method, apparatus, device, medium and product.
Background
Video applications are typically provided with a variety of functions to facilitate efficient viewing of video, such as a double-speed playback function, a skip-opening-and-ending function, and a designated-character viewing function.
In the related art, after the designated-character viewing function is enabled, only the clips of the designated character in the video are played. For example, when watching video A, if character b in video A is selected for separate viewing, only the video clips corresponding to character b are played during playback of video A.
However, with the designated-character viewing function, the video content of the viewed character easily becomes disconnected from that of the other characters in the video, so the user cannot understand the relationships between the characters in the film or television work in time and has to replay the video multiple times for confirmation, which results in low human-computer interaction efficiency and a large amount of data exchanged between the server and the terminal.
Disclosure of Invention
The embodiments of this application provide a video playing method, apparatus, device, medium and product, which can improve the efficiency of understanding the story lines of video clips to a certain extent. The technical solution is as follows.
In one aspect, a video playing method is provided, and the method includes:
in response to a search operation for a target video, displaying a character relationship graph corresponding to the target video in a video content introduction interface of the target video, where the character relationship graph is used to indicate the association relationships between characters appearing in the target video and includes a first character control corresponding to a first character;
in response to receiving a trigger operation on the first character control, playing a first video clip associated with the first character;
in response to a second character appearing in the playback picture of the first video clip, displaying a second character control corresponding to the second character, where a character association relationship exists between the second character and the first character, and the second character control is used to play a second video clip associated with the second character in the target video;
and in response to receiving a trigger operation on the second character control, playing at least one second video clip associated with the second character.
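For illustration only (not part of the claims), the following minimal Python sketch models the four response steps above as terminal-side handlers; all class names, function names and data values are hypothetical assumptions introduced here.

```python
from dataclasses import dataclass, field


@dataclass
class Character:
    name: str
    related: list = field(default_factory=list)  # names of associated characters


@dataclass
class Player:
    """Hypothetical terminal-side player modelling the four claimed response steps."""
    graph: dict   # character name -> Character
    clips: dict   # character name -> list of clip identifiers
    now_playing: list = field(default_factory=list)

    def on_search(self, target_video: str) -> dict:
        # Step 1: show the character relationship graph (with character controls)
        # in the video content introduction interface.
        print(f"Showing relationship graph for {target_video}")
        return self.graph

    def on_character_control(self, name: str) -> None:
        # Steps 2 and 4: a trigger on a character control plays the clips
        # associated with that character.
        self.now_playing = self.clips.get(name, [])
        print(f"Playing clips of {name}: {self.now_playing}")

    def on_character_appears(self, first: str, second: str) -> bool:
        # Step 3: show a control for the second character only if it has an
        # association relationship with the first character.
        return second in self.graph.get(first, Character(first)).related


# Usage example with made-up data.
graph = {"A": Character("A", related=["B"]), "B": Character("B", related=["A"])}
clips = {"A": ["clip_a1", "clip_a2"], "B": ["clip_b1"]}
player = Player(graph, clips)
player.on_search("target_video")
player.on_character_control("A")
if player.on_character_appears("A", "B"):
    player.on_character_control("B")
```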
In another aspect, a video playing apparatus is provided, the apparatus including:
a display module, configured to display, in response to a search operation for a target video, a character relationship graph corresponding to the target video in a video content introduction interface of the target video, where the character relationship graph is used to indicate the association relationships between characters appearing in the target video and includes a first character control corresponding to a first character;
a playing module, configured to play, in response to receiving a trigger operation on the first character control, a first video clip associated with the first character;
the display module being further configured to display, in response to a second character appearing in the playback picture of the first video clip, a second character control corresponding to the second character, where a character association relationship exists between the second character and the first character, and the second character control is used to play a second video clip associated with the second character in the target video;
and the playing module being further configured to play, in response to receiving a trigger operation on the second character control, at least one second video clip associated with the second character.
In another aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, where the at least one instruction, the at least one program, the set of codes, or the set of instructions are loaded and executed by the processor to implement a video playing method according to any one of the embodiments of the present application.
In another aspect, a computer readable storage medium is provided, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored, where the at least one instruction, the at least one program, the set of codes, or the set of instructions are loaded and executed by a processor to implement a video playing method according to any one of the embodiments of the present application.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the video playback method according to any one of the above embodiments.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
When a first video clip corresponding to a first character is being watched, a second character in the first video clip is identified and an entry control for jumping to play the second video clip corresponding to the second character is provided in the playback picture, so that the user can quickly understand the relationships between the characters in the target video, which improves the efficiency with which the user understands the plot of the target video; in addition, other content associated with the target video can be further expanded by automatically analyzing the relationships between the characters.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an interface schematic diagram of a character relationship graph provided by an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a video playing method according to an exemplary embodiment of the present application;
FIG. 4 is an interface schematic diagram corresponding to a video playing method according to an exemplary embodiment of the present application;
FIG. 5 is a flowchart of a video playing method according to another exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of an interface for displaying a first character control provided based on the embodiment shown in FIG. 5;
FIG. 7 is a schematic diagram of an interface for displaying a first character control provided based on another exemplary embodiment shown in FIG. 5;
FIG. 8 is a schematic illustration of an interface display provided based on another exemplary embodiment shown in FIG. 5;
FIG. 9 is a flowchart of a video playing method according to another exemplary embodiment of the present application;
FIG. 10 is an interface schematic diagram of a return control displayed in the interface, provided based on an exemplary embodiment shown in FIG. 9;
FIG. 11 is a flowchart of a video playing method according to another exemplary embodiment of the present application;
FIG. 12 is a block diagram of a video playback apparatus according to an exemplary embodiment of the present application;
FIG. 13 is a block diagram of a video playback apparatus according to another exemplary embodiment of the present application;
FIG. 14 is a block diagram of a server according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
First, a brief description will be given of terms involved in the embodiments of the present application and applied technologies.
The target video is used to indicate the video whose clips are to be played. The target video includes, but is not limited to, at least one of a short video, a film or television drama work, and a documentary. The target video contains at least two characters, each character corresponds to a story line of the target video, and character association relationships exist between the characters. Multiple relationships may exist between characters; in the embodiments of this application, only the direct relationships between characters are analyzed. Illustratively, a certain target video contains a lead character a, a lead character b, and a supporting character c; any two of the three characters may have a bidirectional character association, a unidirectional character association, or no character association. For example, lead character a and lead character b are lovers and supporting character c secretly loves lead character a; then lead character a and lead character b have a bidirectional character association, supporting character c and lead character a have a unidirectional character association, and lead character b and supporting character c have no character association. In the embodiments of this application, the important character associations in the target video can be determined, and deeper analysis of the video clips can be provided on this basis.
The character relationship graph is used to indicate the character relationships between the characters in the target video. As shown in FIG. 1, a character relationship graph 100 corresponding to the target video is displayed in the terminal interface; the character relationship graph 100 includes characters 101 to 104 and also displays the association relationships between the characters, which can be seen from the arrowed line segments shown in FIG. 1. Optionally, character information corresponding to each character is displayed in the character relationship graph 100, the character information including but not limited to the name of the character played in the target video, the identity information of the character in the target video, information about the voice actor dubbing the character, and information about the actor playing the character. The details of the character relationship graph are described below.
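For illustration only, a character relationship graph of the kind described above could be modeled as a small directed-graph structure; the following Python sketch uses hypothetical field names and the lead character a / lead character b / supporting character c example from the previous term description.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CharacterInfo:
    character_name: str          # name of the character in the target video
    identity: str = ""           # identity of the character in the target video
    actor: str = ""              # person playing the character
    voice_actor: str = ""        # person dubbing the character


@dataclass
class RelationshipGraph:
    """Directed graph: an edge (a -> b) means a has an association toward b."""
    characters: Dict[str, CharacterInfo] = field(default_factory=dict)
    edges: Dict[str, List[str]] = field(default_factory=dict)

    def add_relation(self, src: str, dst: str, mutual: bool = False) -> None:
        self.edges.setdefault(src, []).append(dst)
        if mutual:
            self.edges.setdefault(dst, []).append(src)

    def related(self, name: str) -> List[str]:
        return self.edges.get(name, [])


# Example matching the lead/lead/supporting-character example in the text.
g = RelationshipGraph()
for n in ("a", "b", "c"):
    g.characters[n] = CharacterInfo(character_name=n)
g.add_relation("a", "b", mutual=True)   # lovers: bidirectional association
g.add_relation("c", "a")                # secret admirer: unidirectional association
print(g.related("a"))  # ['b']
print(g.related("c"))  # ['a']
```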
The character control is triggered to play the video clips associated with the character, where the video clips can be arranged and aggregated according to the development order of the story lines; that is, a video clip list corresponding to the character is displayed in the playback interface, the list is ordered according to the development of the story lines, and introduction content is displayed for each clip. The introduction content is used to indicate and summarize the events that occur in the corresponding video clip. For example, character a is associated with video clip 1, video clip 2 and video clip 3; in the interface for playing the video clips corresponding to character a, the list of these clips is displayed, including the titles corresponding to the clips, the names of newly appearing characters and the like, and the introduction content is displayed beside the corresponding clip, e.g., introduction content 1 beside video clip 1, or an introduction displayed beside newly appearing character b.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes directions such as computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
Computer Vision (CV) is the science of how to make machines "see". As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
With the research and advancement of artificial intelligence technology, it has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, robots, smart healthcare and smart customer service. It is believed that with the development of technology, artificial intelligence technology will be applied in more fields and show increasingly important value.
Next, the application scenarios involved in the embodiments of this application are described in combination with the above terms:
First, application to video viewing scenarios.
When a user watches a target video in a terminal through a browser or a player, a character relationship graph corresponding to the target video is obtained, where the target video may be at least one of an independently created short video, a short video obtained through secondary editing, a film or television episode, and a documentary;
In the embodiments of this application, the avatars corresponding to the characters displayed in the character relationship graph are used to play video clips that have a preset relationship with those characters. When a click operation on the avatar corresponding to the first character in the character relationship graph is received, the terminal obtains and aggregates at least one first video clip corresponding to that character and displays the aggregated first video clips in the current display interface, and the user selects a first video clip from them for playback. When another character appears in the playback picture of the first video clip, the current frame is extracted for character recognition, the character relationship between the newly appearing second character and the first character is determined, and a second character control corresponding to the second character is displayed in the current interface based on the determined relationship. By clicking the second character control, the user quickly switches from the first video clip corresponding to the first character to the second video clip corresponding to the second character. The character relationships between the characters in the target video and the story clips corresponding to the characters (the first video clip and the second video clip) are thus automatically sorted out, helping the user quickly understand the target video and further improving the efficiency of following the story lines of the target video. The character relationship graph may be obtained after analyzing the target video, or may be preset.
Second, application to game analysis scenarios.
In some games, there are multiple virtual characters, and preset character association relationships exist between the virtual characters. When a user controls a master virtual character to play the game, the character association information between a newly appearing virtual character and the master virtual character is determined from the related information displayed in the game. As the game goes on, the user may no longer clearly remember the character association relationships between the individual virtual characters and the master virtual character and has to review the game history again, which reduces the efficiency with which the user understands the game story line;
In the embodiments of this application, when a virtual object newly appears during the game, the terminal stores the character association relationship between the newly appearing virtual object and the master virtual object, and displays an avatar entry control for the newly appearing virtual object in the current interface, where the avatar entry control is used to display the character description content between the newly appearing virtual object and the master virtual object, thereby effectively improving the efficiency with which the user understands the game story line and improving human-computer interaction efficiency.
In the embodiments of this application, the video playing method provided by the embodiments can also be applied to automatic analysis in the field of artificial intelligence, which is not limited in this application.
The foregoing are merely illustrative examples; the video playing method provided in the embodiments of this application may also be applied to other application scenarios, which is not limited in this disclosure.
Finally, the implementation environment of the embodiments of this application is described in combination with the above terms and application scenarios.
Fig. 2 is a schematic diagram of an implementation environment provided by an embodiment of the present application, as shown in fig. 2, where the implementation environment includes a terminal 210 and a server 220, where the terminal 210 and the server 220 are connected through a communication network 230, and the communication network 230 may be a wireless network or a wired network, which is not limited in this application.
Optionally, the user searches for the target video in a designated application program in the terminal 210, and a video content introduction page corresponding to the target video is displayed in the current interface, where the designated application program includes, but is not limited to, a browser, a player, and image recognition software; a character relationship graph corresponding to the target video is displayed in the video content introduction page, and the character controls of the characters in the target video are displayed in the character relationship graph;
The user clicks the first character control in the character relationship graph, the terminal 210 sends a play request to the server 220, and the server 220 aggregates at least one first video clip corresponding to the first character in the target video according to the play request and feeds the aggregated first video clips back to the terminal 210.
The terminal 210 displays at least one first video clip corresponding to the first character in the current interface according to the feedback information and plays the at least one first video clip according to the playing time sequence of the target video.
Optionally, when a second character appears in the playing screen of the first video clip, and there is a character association relationship between the second character and the first character, a second character control is displayed in the playing interface of the terminal 210.
In response to receiving the triggering operation (triggering the video switching request) on the second character control, the terminal 210 sends the video switching request to the server 220, and the server 220 obtains at least one second video segment corresponding to the second character in the target video according to the video switching request and feeds back the at least one second video segment to the terminal 210.
Optionally, the terminal 210 automatically plays at least one second video clip, and a return control is further displayed in a play frame for playing the second video clip, where the return control is used to switch the currently played second video clip to play the first video clip.
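For illustration only, the exchange between the terminal 210 and the server 220 described above might look like the following Python sketch; the request fields, function names and clip identifiers are assumptions and do not represent an actual protocol.

```python
import json

# Hypothetical server-side clip index: video -> character -> ordered clip list.
CLIP_INDEX = {
    "target_video": {
        "first_character": ["clip_1", "clip_2"],
        "second_character": ["clip_3"],
    }
}


def handle_play_request(request_json: str) -> str:
    """Server 220 side: aggregate the clips of the requested character and feed them back."""
    req = json.loads(request_json)
    clips = CLIP_INDEX.get(req["video"], {}).get(req["character"], [])
    return json.dumps({"video": req["video"], "character": req["character"], "clips": clips})


def terminal_play(video: str, character: str) -> list:
    """Terminal 210 side: send a play request (or video switching request) and play the result."""
    request = json.dumps({"video": video, "character": character})
    response = json.loads(handle_play_request(request))
    for clip in response["clips"]:
        print(f"playing {clip} of {character}")
    return response["clips"]


# Play request for the first character, then a switching request for the second character.
terminal_play("target_video", "first_character")
terminal_play("target_video", "second_character")
```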
It should be noted that, the above-mentioned terminal 210 may be implemented as a mobile terminal such as a mobile phone, a tablet computer, a vehicle-mounted terminal, an intelligent home device, a wearable device, a portable laptop computer, or a desktop computer, which is not limited in this embodiment of the present application.
The server 220 may be an independent physical server, a server cluster or a distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
Cloud technology refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data. Cloud technology is the general term for the network technology, information technology, integration technology, management platform technology, application technology, and so on applied in the cloud computing business model; it can form a resource pool and be used on demand, flexibly and conveniently. Cloud computing technology will become an important support. The background services of technical network systems require a large amount of computing and storage resources, for example video websites, picture websites and more portal websites. With the rapid development and application of the Internet industry, every item may have its own identification mark in the future, which needs to be transmitted to the background system for logical processing; data at different levels will be processed separately, and all kinds of industry data require strong system backing support, which can only be realized through cloud computing.
In some embodiments, the server 220 described above may also be implemented as a node in a blockchain system.
The above-mentioned implementation environment is only an illustrative example. The video playing method provided in the embodiments of this application may be applied to the terminal 210 alone or to the server 220 alone, which is not limited here; in the embodiments of this application, the method is described as applied to the combined implementation environment of the server 220 and the terminal 210.
In combination with the above implementation environment, the video playing method according to the embodiments of this application is described below. FIG. 3 is a flowchart of a video playing method according to an exemplary embodiment of the present application; the method is described as applied to a terminal. As shown in FIG. 3, the method includes:
Step 301, a first character control is displayed.
In the embodiments of this application, the user searches for a target video using a designated application program, a character relationship graph corresponding to the characters in the target video is displayed in the video content introduction interface of the target video, and the character controls corresponding to the characters in the target video and the relationship description content between the characters are displayed in the character relationship graph, where the designated application program may be a browser application, a video playing application, a live streaming application, a cloud storage application, etc. The relevant content is described in step 501 and FIG. 6 and is not repeated here.
In other embodiments, the user selects the target video to watch, and a character relationship graph is displayed in the playback interface of the target video, with the character controls corresponding to a plurality of characters displayed in the graph; or the character controls corresponding to the characters are displayed directly in the playback interface of the target video, where the target video includes but is not limited to at least one of a short video, a film or television episode, and a documentary.
In an optional embodiment, the character relationship graph is used to indicate the relationships between the characters in the target video, and may also be used to indicate the relationships between the core characters of the target video; the character relationship graph also displays the character names, the character identities, and the direct association relationships between the characters, which is not limited in this application.
The character relationship graph includes a character control corresponding to each character, where the first character control is used to indicate the character information of the first character in the target video, including but not limited to information about the actor playing the character, the name of the character in the target video, the identity of the character in the target video, and information about the voice actor dubbing the character.
In another optional embodiment, in an associated interface for displaying the target video, a character control list corresponding to the characters included in the target video is displayed, where the character control list includes the first character control.
In the embodiments of this application, the scenarios for displaying the first character control include, but are not limited to, the following:
First, the first character control is displayed in a content introduction interface for the target video, where the first character control is used to play a first video clip associated with the first character in the target video. The content introduction interface is an interface for introducing the target video and includes a content introduction of the target video, an image corresponding to the target video, and the character relationship graph corresponding to the characters in the target video; the character relationship graph displays the character controls corresponding to the characters, including the first character control corresponding to the first character.
Second, the first character control is displayed in a playback interface for playing the target video; optionally, the character controls corresponding to the characters are displayed superimposed on the playback picture of the target video.
Third, a function control is displayed in the playback interface for playing the target video, and when a trigger operation on the function control is received, a character control list is displayed superimposed on the playback interface, where the character control list includes the first character control.
It should be noted that the display manner of the first character control is merely an illustrative example, which is not limited in this embodiment.
In the embodiment of the application, the first character control can be understood as a shortcut entry for playing the video clip associated with the first character.
In step 302, in response to receiving a triggering operation on the first character control, a first video clip associated with the first character is played.
Optionally, in response to receiving a trigger operation on the first character control, where the trigger operation includes but is not limited to a click operation, a long-press operation, a drag operation, a trigger operation input through an external device, and a voice control operation, the first video clip associated with the first character is obtained. The association relationship is used to indicate the relationship between a video clip and the first character: the video clip may contain the first character, i.e., the first video clip is a clip in which the first character appears; it may be a foreshadowing clip that leads to the first character, i.e., the first video clip guides the appearance of the first character; or it may be a clip in which other characters share a common event with the first character, i.e., the first character does not appear in the first video clip, but the other characters that appear perform the same event as the first character. In this embodiment, the video clips associated with a character may also be referred to as the video clips corresponding to the character.
In the embodiment of the present application, the manner of obtaining the first video clip includes, but is not limited to:
First, in response to the trigger operation on the first character control, a video acquisition request is sent to the server, where the video acquisition request includes related information of the first character, the character relationship graph corresponding to the target video, the target video, and so on. The server obtains, according to the received video acquisition request, the first video clips containing the first character in the target video (the number of first video clips is greater than or equal to 1), compresses and packages the first video clips, and feeds them back to the terminal; the terminal plays the first video clips associated with the first character in sequence in the terminal interface according to the server's feedback.
Second, the terminal stores in advance the video clips associated with each character in the target video; when the terminal receives the trigger operation on the first character control, it directly obtains the first video clip associated with the first character from local storage according to the association between the first character control, the target video and the first character, and plays the first video clip.
Third, the video clips associated with each character in the target video are stored in the server in advance; when the terminal receives the trigger operation on the first character control, the terminal sends a video acquisition request to the server, the first video clips associated with the first character are obtained directly from the server and fed back to the terminal, and the first video clips are played.
Fourth, a cloud storage system stores the video clips associated with each character in the target video. Specifically, see Table 1 below, which shows the storage relationship among target videos, characters, and video clips stored in the cloud storage system: video A includes character a and character b, character a corresponds to video clip 1, and character b corresponds to video clip 2. When the user watches video A and triggers the character control of character a, video clip 1 corresponding to character a is obtained directly from the cloud storage system, fed back to the terminal, and played. Optionally, video clips 1 to 7 may each be implemented as a set of video sub-clips; for example, video clip 1 may be implemented as a set of video sub-clips corresponding to character a, i.e., character a corresponds to one or more video clips.
Table 1:
Optionally, the cloud storage system further stores the character relationships among the characters in the target video, and the character relationship graph corresponding to the characters in the target video is drawn based on these character relationships.
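Because the body of Table 1 is not reproduced here, the following Python sketch only illustrates, under stated assumptions, the kind of mapping the table describes (target video to character to video clips), using the video A / character a / character b example from the text.

```python
# Hypothetical cloud-storage mapping inferred from the description of Table 1:
# each target video maps characters to one or more clips (or sub-clip sets).
CLOUD_STORE = {
    "video_A": {
        "character_a": ["video_clip_1"],   # may also be a set of video sub-clips
        "character_b": ["video_clip_2"],
    }
}


def fetch_clips(video: str, character: str) -> list:
    """Return the clips stored for a character of a video, as Table 1 is used above."""
    return CLOUD_STORE.get(video, {}).get(character, [])


print(fetch_clips("video_A", "character_a"))  # ['video_clip_1']
```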
Step 303, in response to a second character appearing in the playback picture of the first video clip, a second character control corresponding to the second character is displayed.
Optionally, the first character corresponds to at least one first video clip, and the first video clips are played in sequence. During playback, the terminal performs character recognition on the current frame of the first video clip to obtain the candidate characters appearing in the playback picture of the first video clip.
Optionally, a character relationship graph corresponding to the target video is obtained, where the character relationship graph is used to indicate the association relationships between the characters appearing in the target video. The generation process of the character relationship graph is described in detail below and specifically includes the following steps:
Image-text data related to the target video are obtained, and it is determined, based on the image-text data, whether the candidate person corresponding to the candidate person information appears in the target video. The image-text data are used to describe the association relationships between the characters in the target video and include, but are not limited to, at least one piece of image data, at least one piece of text data, and at least one piece of audio/video data. The image data may be stills, behind-the-scenes footage, promotional posters, and the like of the candidate person acting in the target video. The text data may be text analyzing the story line and/or script of the target video, text recommending the actors corresponding to the characters in the target video, analysis text about the persons playing the characters in the target video, and so on, such as an article recommending "abc", an article giving an in-depth analysis of the relationships between the characters in "abc", or an article analyzing the story line of character xx in "abc". The audio/video data may be used to indicate the characters in the target video and the story lines corresponding to the characters, and may be presented in MP3, MP4, or other audio/video formats, which is not limited here.
Optionally, the character entity words corresponding to the characters in the target video are extracted from the image-text data, and the character relationship graph corresponding to the target video is generated based on the association relationships between the characters described in the image-text data. In the embodiments of this application, the image-text data carry labels corresponding to the target video, the characters in the target video, and the candidate persons, and pieces of image-text data with the same or overlapping labels are associated with each other. Referring specifically to FIG. 4, when the first video clip a corresponding to the first character 101 is played and the second character 102 appears in the playback picture, the terminal performs face recognition on the current frame to obtain the candidate person "Li" for the candidate character 102, obtains the image-text data carrying the label of the candidate person "Li" and the label corresponding to the target video, and determines, based on the image-text data, that the candidate person "Li" is the person playing the second character 102. It should be noted that in FIG. 4 the candidate person 102 is the object playing the second character 102, so the reference numeral 102 may refer to either the candidate person or the second character.
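For illustration only, the following Python sketch shows one assumed way that character entity words and shared labels in the image-text data could be turned into relationship edges; the data, names and the trivial label-matching step are hypothetical.

```python
from itertools import combinations

# Hypothetical image-text data: each item carries labels (video, persons/characters).
IMAGE_TEXT_DATA = [
    {"text": "In-depth analysis of the relationship between Zhang and Li in abc",
     "labels": {"abc", "Zhang", "Li"}},
    {"text": "Article recommending abc", "labels": {"abc", "Zhang"}},
]

KNOWN_CHARACTERS = {"Zhang", "Li", "Wang"}


def build_relation_edges(video_label: str) -> set:
    """Link characters whose entity words co-occur in data labelled with the video."""
    edges = set()
    for item in IMAGE_TEXT_DATA:
        if video_label not in item["labels"]:
            continue
        characters = sorted(item["labels"] & KNOWN_CHARACTERS)
        for a, b in combinations(characters, 2):
            edges.add((a, b))
    return edges


print(build_relation_edges("abc"))  # {('Li', 'Zhang')}
```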
Optionally, the second character appearing in the playback picture of the first video clip is determined based on the candidate character and the character relationship graph; in the embodiments of this application, a character association relationship exists between the second character and the first character.
In an optional embodiment, the manner of determining the second character includes, but is not limited to, the following:
First, in response to a candidate character appearing in the first video clip, a screenshot corresponding to the candidate character is captured, the image data corresponding to the target video are obtained, the screenshot is compared with the image data of the target video, and it is determined whether the screenshot appears in the image data corresponding to the target video; if so, the candidate character is determined to be the second character according to the character relationship graph.
Second, in response to a candidate character appearing in the first video clip, face recognition is performed on the current frame of the first video clip to determine the person object corresponding to the candidate character, where the person object includes the object name corresponding to the candidate character; it is then determined whether the person object acts in the target video, in other words, whether the person object corresponding to the candidate character belongs to a character in the target video, and if so, the candidate character is determined to be the second character.
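For illustration only, the second determination manner (face recognition on the current frame, then checking whether the recognized person acts in the target video and is associated with the first character) might be sketched as follows; recognize_face is a stand-in for a real recognition model, and all names and data are assumptions.

```python
# Hypothetical cast list and relationship edges for the target video.
CAST = {"person_li": "second_character", "person_wang": "third_character"}
RELATED = {("first_character", "second_character")}


def recognize_face(frame: dict) -> str:
    """Stand-in for a real face-recognition model; returns a person identifier."""
    return frame.get("person", "")


def determine_second_character(frame: dict, first_character: str):
    person = recognize_face(frame)
    candidate = CAST.get(person)                   # does this person act in the target video?
    if candidate is None:
        return None
    if (first_character, candidate) in RELATED:    # association with the first character?
        return candidate
    return None


frame = {"person": "person_li"}
print(determine_second_character(frame, "first_character"))  # second_character
```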
In the embodiments of this application, the manner of determining at least one second video clip corresponding to the second character includes, but is not limited to, the following:
First, a first candidate video clip having an association relationship with the second character is identified from a preset video library, where the association relationship is used to indicate the relationship between the candidate video clip and the second character: the first candidate video clip may contain the second character, i.e., the second video clip is a clip in which the second character appears; it may be a foreshadowing clip that leads to the second character, i.e., the second video clip guides the appearance of the second character; or it may be a clip in which other characters share a common event with the second character, i.e., the second character does not appear in the second video clip, but the other characters that appear perform the same event as the second character. The first candidate video clip is matched against the target video to obtain at least one second video clip corresponding to the target video. The specific matching process includes comparing the first candidate video clip with the target video for de-duplication, determining the segments of the target video that coincide with the first candidate video clip, and taking these segments as the at least one second video clip. The videos contained in the preset video library may carry labels, so that the label of the first candidate video clip can be extracted and compared with the label corresponding to the target video, and the second video clips corresponding to the target video are determined from the first candidate video clips accordingly.
Second, the terminal locally stores in advance the video clips associated with each character in the target video, and when the second character appears in the playback picture of the first video clip, the terminal directly obtains the second video clips associated with the second character from local storage.
Third, the video clips associated with each character in the target video are packaged and stored in a remote storage system, and the server updates in real time the video clips associated with each character in the target video stored in the remote storage system according to the label information in uploaded videos; when the second character appears in the playback picture of the terminal, the terminal sends the server a request for obtaining at least one second video clip corresponding to the second character, and the server obtains, according to the request, at least one second video clip associated with the second character in the target video from the remote storage system.
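For illustration only, the label comparison mentioned in the first manner could be read as keeping the library clips whose labels cover both the target video and the second character, as in the following assumed Python sketch.

```python
# Hypothetical preset video library: each candidate clip carries labels.
VIDEO_LIBRARY = [
    {"clip": "lib_clip_1", "labels": {"target_video", "second_character"}},
    {"clip": "lib_clip_2", "labels": {"other_video", "second_character"}},
    {"clip": "lib_clip_3", "labels": {"target_video", "first_character"}},
]


def match_second_clips(video_label: str, character_label: str) -> list:
    """Keep candidate clips whose labels cover both the target video and the character."""
    wanted = {video_label, character_label}
    return [item["clip"] for item in VIDEO_LIBRARY if wanted <= item["labels"]]


print(match_second_clips("target_video", "second_character"))  # ['lib_clip_1']
```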
Optionally, after determining at least one second video segment associated with the second character, displaying a second character control in a playing picture of the first video segment, where the second character control is used to play the second video segment associated with the second character in the target video.
In the embodiments of this application, the second video clip may indicate a video clip containing the second character, a foreshadowing clip that leads to the second character, or a video clip in which other characters share a common story (event) with the second character, in which case the second character does not necessarily appear in the second video clip.
Step 304, in response to receiving the triggering operation of the second character control, playing at least one second video clip associated with the second character.
Optionally, the second character control may be understood as a shortcut entry for quickly switching from playing the video clip associated with the first character to playing the video clip associated with the second character, and a character association relationship exists between the second character and the first character.
Optionally, in response to receiving a triggering operation on the second character control, where the triggering operation includes, but is not limited to, a clicking operation, a long-press operation, a dragging operation, a triggering operation input through an external device, a voice control operation, and the like, at least one second video clip associated with the second character is acquired and automatically played. Referring specifically to fig. 4, fig. 4 is an interface schematic diagram of a video playing method according to an embodiment of the present application.
As shown in FIG. 4, content introduction information corresponding to the target video is displayed in the terminal interface 10, where the content introduction information includes, but is not limited to, the introduction content of the target video, a promotional poster corresponding to the target video, a control for playing the target video, the character relationship graph corresponding to the target video, and so on. In the embodiments of this application, the identity information (character name and/or character identity) corresponding to each character in the target video and the character association relationships between the characters (the arrowed line segments in FIG. 4) are displayed in the character relationship graph; optionally, character association description content between characters may be displayed at the arrowed line segments, which makes it easier to understand the character relationships in the target video. The character relationship graph displays a character avatar control corresponding to each core character of the target video, where a core character is an important character that drives the development of the story line of the target video; FIG. 4 shows that the target video contains four core characters. In response to receiving a trigger operation on the first character control 401, the currently displayed interface 40 is switched to a video playback interface 41 for playing the first video clip corresponding to the first character 401; at least one first video clip (first video clips a to c) arranged in a preset order is displayed in the playback interface 41, and the description content corresponding to the first video clip a being played is highlighted. When a second character appears in the playback picture 42 of the first video clip a, the terminal determines that the second character 102 is a character having a character association relationship with the first character 401, and the second character control 103 is displayed superimposed on the playback picture of the first video clip. In response to receiving a trigger operation on the second character control 403, the character description information of the second character 402 is displayed together with an entry control for viewing the video clips corresponding to the second character, and the first video clip a corresponding to the first character 401 played in the current interface 42 is switched to a playback interface 43 for playing the second video clip b corresponding to the second character 402; at least one second video clip (second video clips a to c) arranged in a preset order is displayed in the playback interface 43, and the description content corresponding to the second video clip b being played is highlighted. The first video clip a and the second video clip b may be the same video clip or different video clips, which is not limited in this application.
In an optional embodiment, the second video clip is played continuing from the playback content corresponding to the playback progress of the first video clip, where the playback progress indicates the playback content indicated by the timestamp at which the first video clip is switched; that is, the playback progress can be understood as the position of the first video clip's playback within the target video. For example, when the first video clip has been played to a certain time (a designated timestamp) and the second character control displayed in the playback interface is triggered, the playback progress of the first video clip corresponding to the designated timestamp within the target video is obtained, the starting playback timestamp of the second video clip is determined according to the designated timestamp, and the second video clip is played from that starting timestamp. Optionally, the second video clip continues from the timestamp a preset time period before the playback progress of the first video clip, from the timestamp a preset time period after the playback progress, or from the timestamp corresponding to the playback progress. For example, if the playback progress of the first video clip within the target video is 12:00, then after switching to the second video clip, its starting playback time (here, the playback time within the target video) may be the timestamp corresponding to 11:55, the timestamp corresponding to 12:00, or the timestamp corresponding to 12:05, where the preset time period is 5 seconds; the preset time period may also be set according to actual requirements, and this is merely an illustrative example.
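For illustration only, the start-timestamp selection described above (continue exactly at the switch point, or a preset time period before or after it) reduces to simple arithmetic; the following Python sketch uses the 12:00 example with a 5-second preset time period, and the function names are assumptions.

```python
def second_clip_start(progress_seconds: int, offset_seconds: int = 5, mode: str = "exact") -> int:
    """Pick the start timestamp of the second clip relative to the switch point."""
    if mode == "before":
        return max(0, progress_seconds - offset_seconds)
    if mode == "after":
        return progress_seconds + offset_seconds
    return progress_seconds  # "exact": continue right at the playback progress


def fmt(seconds: int) -> str:
    return f"{seconds // 60}:{seconds % 60:02d}"


progress = 12 * 60  # first clip switched at 12:00 within the target video
for mode in ("before", "exact", "after"):
    print(mode, fmt(second_clip_start(progress, 5, mode)))  # 11:55, 12:00, 12:05
```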
In an optional embodiment, in response to receiving a trigger operation on the second character control, a first story line list corresponding to the second character is displayed, where the first story line list includes at least one first story node associated with the second character, the i-th first story node corresponds to the i-th second video clip, and i is a positive integer. In the embodiments of this application, the at least one first story node is displayed in the first story line list according to the development order of the story line of the target video, and node introduction content is displayed at each first story node; the node introduction content is used to summarize the content of the second video clip corresponding to the first story node, for example, the title information corresponding to the second video clip is displayed at the first story node. Illustratively, after the second character control is triggered, the first story line list is displayed on the interface (video 1: xx encounters xx; video 2: xx successfully obtains a clue; video 3: a misunderstanding arises between xx and xx); the three story nodes correspond to video 1, video 2 and video 3 respectively, and the node introduction content corresponding to each video clip is displayed at its story node. The user can directly view the node introduction content in the story line corresponding to a character and quickly learn that character's story line, which improves the efficiency with which the user understands the whole story line of the target video.
In the embodiments of this application, the display order of the at least one second video clip in the first story line list includes, but is not limited to, the following:
First, the at least one first story node is displayed in the first story line list according to the playback order of the at least one second video clip in the target video. For example, three second video clips are stored for the second character, and these may come from the same episode and/or different episodes of the target video. Illustratively, second video clip 1 (story node 1) corresponds to the frames of the second character from 16 minutes 30 seconds to 19 minutes 30 seconds of season 1 episode 2 of the target video, second video clip 3 (story node 3) corresponds to the frames of the second character from 20 minutes 15 seconds to 45 minutes 00 seconds of season 1 episode 2 of the target video, and second video clip 2 (story node 2) corresponds to the frames of the second character from 1 minute 15 seconds to 12 minutes 00 seconds of season 2 episode 1 of the target video; the clips are aggregated in the temporal order of the target video, that is, in the order second video clip 1, second video clip 3, second video clip 2, and displayed at the corresponding story nodes; or
Second, the at least one first story node is displayed in the first story line list according to the development order of the story line of the target video, that is, according to the development order of the second character's story line; or
Third, the at least one first story node is displayed in the first story line list according to a custom arrangement order, where the custom arrangement order indicates an arrangement order personalized for the account. When the user searches for or plays the target video with the designated application program, the user logs in with the corresponding account information and can personalize the aggregation rule for the video clips corresponding to each character in the target video (that is, personalize the arrangement order of the story nodes corresponding to each character). The personalized settings include arranging the video clips according to their importance to the development of the story, according to the degree of foreshadowing in the clips, according to the popularity of the clips, and so on. After a trigger operation on a character control is received, that character's video clips (story nodes) are arranged and aggregated based on the personalized settings, and the arranged and aggregated video clips (story nodes) are displayed in the current playback interface, that is, the story line list corresponding to the character is displayed, as illustrated in the sketch below.
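For illustration only, the three ordering options above (playback order within the target video, story-line order, or a custom account-level order) can be expressed as sorting story nodes by different keys, as in the following Python sketch; the node fields and values are hypothetical.

```python
# Hypothetical story nodes for the second character; fields are illustrative only.
NODES = [
    {"node": "story_node_1", "play_pos": (1, 2, 16 * 60 + 30), "story_pos": 1, "custom": 3},
    {"node": "story_node_3", "play_pos": (1, 2, 20 * 60 + 15), "story_pos": 3, "custom": 1},
    {"node": "story_node_2", "play_pos": (2, 1, 1 * 60 + 15), "story_pos": 2, "custom": 2},
]


def order_nodes(nodes: list, mode: str = "play") -> list:
    """Return node names in playback order, story-line order, or a custom order."""
    keys = {"play": "play_pos", "story": "story_pos", "custom": "custom"}
    return [n["node"] for n in sorted(nodes, key=lambda n: n[keys[mode]])]


print(order_nodes(NODES, "play"))    # ['story_node_1', 'story_node_3', 'story_node_2']
print(order_nodes(NODES, "story"))   # ['story_node_1', 'story_node_2', 'story_node_3']
print(order_nodes(NODES, "custom"))  # ['story_node_3', 'story_node_2', 'story_node_1']
```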
In another alternative embodiment, in the interface for playing the second video segment corresponding to the second role, a story context list of the second video segment is displayed, at least one first story node corresponding to the second role is displayed in the story context list according to the display sequence, as shown in fig. 4, in the playing interface 43, a story context list corresponding to the second role is displayed, where the story context list includes the second video segments a-c, and in the play list, node introduction contents of the second video segment are displayed, where the node introduction contents may be a title summary corresponding to the second video segment, newly appearing role information, and so on.
In summary, in the video playing method provided by the embodiment of the application, when the first video clip corresponding to the first character is watched, the second character in the first video clip is identified and an entry control for jumping to play the second video clip corresponding to the second character is provided in the playing picture, so that the user can quickly understand the character relationships among the characters in the target video and the efficiency of understanding the scenario of the target video is improved; in addition, by automatically analyzing the character relationships among the characters, other content associated with the target video can be further expanded.
In an alternative embodiment, referring to fig. 5, fig. 5 is a flowchart of a video playing method according to another exemplary embodiment of the present application. The method is described as applied to a terminal, and includes the following steps:
Step 501, a first character control is displayed.
In the embodiment of the application, the first character control is used for playing the first video clip associated with the first character in the target video. The first video clip may be a video clip containing the first character, may be a prior-plot recap clip that leads out the first character, or may be a video clip in which other characters experience a common story (event) with the first character, that is, the first character may not appear in the first video clip in this case.
Optionally, the method of displaying the first character control includes, but is not limited to, the following:
First, at least one character control corresponding to the target video is displayed on a video content introduction interface of the target video, where the character control is used for playing video clips corresponding to at least one character contained in the target video, and the character controls include the first character control. In an optional embodiment, the video content introduction interface is an interface obtained by searching for the target video through a designated application program, and the designated application program may be a browser application program, a video playing application program, a live broadcast application program, a cloud disk storage application program, or the like.
In another optional embodiment, the video content introduction interface may also be an introduction interface of an image corresponding to the target video, where the image may be a still of the target video, behind-the-scenes footage, a publicity poster, a hand-painted work, or the like. The terminal identifies the target video corresponding to the image by using an image identification technology, and displays, in the introduction interface, an entry control for entering and playing the target video.
As shown in fig. 6, "Romance of the Three Kingdoms" is input in a search area 601 of a browser display interface 600 to perform a search, and a video introduction area 602 and a character relationship display area 606 corresponding to "Romance of the Three Kingdoms" are displayed in the browser display interface 600. The video introduction area 602 includes a scenario introduction area 603 of "Romance of the Three Kingdoms", a poster display area 604, and an entry control 605 for playing "Romance of the Three Kingdoms". The user can learn the general scenario of "Romance of the Three Kingdoms" by reading the related text description in the scenario introduction area 603, and can trigger the entry control 605; in response to receiving a trigger operation on the entry control 605, the currently displayed browser display interface 600 is switched to a playing interface for playing "Romance of the Three Kingdoms". Character controls corresponding to the core characters of "Romance of the Three Kingdoms" are displayed in the character relationship display area 606, and introduction information of the characters and association information between the characters are also displayed in the character relationship display area 606, where the introduction information includes the names and identities of the characters, and the association information is used for indicating the association relationships existing between the characters. The user can trigger a first character control 607 in the character relationship display area 606 to watch the first video clip corresponding to the first character.
Second, in response to receiving a trigger operation on a character viewing control, a floating layer window containing at least one character control of the target video is displayed in a superimposed manner on the video playing interface of the target video, where the transparency of the floating layer window can be set according to the actual requirement of the user. As shown in fig. 7, the character viewing control (not shown in fig. 7) in the video playing interface is triggered, a floating layer window 701 is displayed in a superimposed manner in the video playing interface 700 of the target video, at least one character control is displayed in the floating layer window 701, and description information of all characters or of part of the characters corresponding to the target video is displayed in the floating layer window 701, where the description information includes head portrait information, name information and identity information of the characters. Illustratively, a first character control 702 is displayed in the floating layer window 701, and the first character control 702 includes the head portrait information and name information corresponding to the first character "character A".
Step 502, in response to receiving a triggering operation on a first character control, acquiring a first video segment associated with a first character.
In an embodiment of the present application, the manner of determining at least one first video clip associated with a first character includes, but is not limited to, the following manners:
First, a second candidate video clip having an association relationship with the first character is identified from a preset video library, where the association relationship is used for indicating the relationship between the second candidate video clip and the first character. The association relationship may be that the candidate video clip contains the first character, that is, the first video clip is a video clip in which the first character appears; or that the candidate video clip is a prior-plot recap clip that leads out the first character, that is, the first video clip plays the role of guiding the appearance of the first character; or that the candidate video clip is a video clip in which other characters experience a common event with the first character, that is, the first character does not appear in the first video clip, but the other characters appearing in the first video clip experience a common event with the first character. The second candidate video clip is matched with the target video to obtain the at least one first video clip corresponding to the target video. The specific video matching process includes comparing the second candidate video clip with the target video and performing de-duplication, and determining, from the target video, the clip that coincides with the second candidate video clip as one of the at least one first video clip (a sketch of this matching is given after the third manner below). The preset video library contains candidate video clips carrying video tags, the video tags are used for indicating the character and/or the target video to which a candidate video clip relates, and the second candidate video clip is extracted from the preset video library according to the tag of the first character and/or the tag of the target video. Optionally, the preset video library is the video library of a preset short video program, that is, the video library of the preset short video program includes short videos published on the short video platform of the preset short video program, and the short videos are attached with video tags.
Second, the terminal locally stores, in advance, the video clips associated with each character in the target video, and after receiving the trigger operation on the first character control, the terminal directly acquires the at least one first video clip associated with the first character from the local storage.
Third, the video clips associated with each character in the target video are packed and stored in a remote storage system, and the server updates, in real time, the video clips associated with each character in the target video stored in the remote storage system according to the tag information in uploaded videos. After the terminal receives the trigger operation on the first character control, the terminal sends, to the server, a request for acquiring the at least one first video clip associated with the first character, and the server acquires, from the remote storage system according to the request, the at least one first video clip associated with the first character in the target video.
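As a rough, non-authoritative sketch of the first manner above (tag-based retrieval from the preset video library followed by comparison and de-duplication against the target video), the following assumes clips are described by simple dictionaries with hypothetical `tags` and `span` fields:

```python
def find_character_clips(video_library, character_tag, target_video_tag, target_segments):
    """Return library clips that carry the wanted tags and coincide with the target video.

    video_library:   list of dicts like {"id": ..., "tags": {...}, "span": (start, end)}
    target_segments: list of (start, end) intervals of the target video, in seconds
    """
    def overlaps(a, b):
        return max(a[0], b[0]) < min(a[1], b[1])   # non-empty intersection of two spans

    # 1. keep candidates whose tags mention the character and/or the target video
    candidates = [c for c in video_library
                  if character_tag in c["tags"] or target_video_tag in c["tags"]]

    # 2. compare against the target video and de-duplicate: keep one clip per overlapping span
    matched, seen_spans = [], []
    for clip in candidates:
        if any(overlaps(clip["span"], seg) for seg in target_segments):
            if not any(overlaps(clip["span"], s) for s in seen_spans):   # de-duplication
                matched.append(clip)
                seen_spans.append(clip["span"])
    return matched
```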
In the embodiment of the application, in response to receiving the trigger operation on the first character control, a second story context list corresponding to the first character is displayed, where the second story context list includes at least one second story node associated with the first character, the kth second story node corresponds to the kth first video clip, and k is a positive integer.
In the embodiment of the application, the at least one second story node is displayed in the second story context list according to a display order, and node introduction content is correspondingly displayed on the at least one second story node, where the node introduction content is used for summarizing the content of the first video clip corresponding to the second story node; for example, title information corresponding to the first video clip, a summary of the characters appearing in the first video clip, and the like are displayed at the second story node.
In the embodiment of the present application, the display order of the at least one first video clip in the second story context list includes, but is not limited to, the following manners:
First, the at least one second story node is displayed in the second story context list according to the play order of the at least one first video clip in the target video; or
Second, the at least one second story node is displayed in the second story context list according to the story line development order of the target video, that is, the first video clips are sequentially played in the order in which the first character's story line develops; or
Third, the at least one second story node in the second story context list causes the at least one first video clip to be sequentially played according to a custom arrangement order, where the custom arrangement order is used for indicating an arrangement order personalized for an account. When the user searches for or plays the target video through a designated application program, corresponding account information is logged in, and the user can personalize the aggregation rule of the video clips corresponding to each character in the target video (that is, personalize the arrangement order of the story nodes corresponding to each character). The personalized setting includes arranging the plurality of video clips according to their importance in the story development order, arranging the plurality of video clips according to the funnel degree of the video clips, arranging the plurality of video clips according to the popularity of the video clips, and the like. After a trigger operation on the character control is received, the plurality of video clips (story nodes) of the character are arranged and aggregated based on the personalized setting, and the arranged and aggregated set of video clips (story nodes) is displayed in the current playing interface, that is, the story context list corresponding to the character is displayed.
In another alternative embodiment, in the interface for playing the first video clips corresponding to the first character, a play list of the first video clips is displayed, in which introduction contents of the first video clips are displayed according to the play order of the at least one first video clip. As shown in fig. 4, the story context list corresponding to the first character is displayed in the playing interface 41, the story context list includes first video clips a to c, and introduction contents of the first video clips are displayed in the story context list, where the introduction contents may be a title summary corresponding to the first video clip, newly appearing character information, and so on.
In the embodiment of the application, the at least one first video clip in the second story context list is sequentially played according to the order of the second story context list. Optionally, the user can actively trigger a target story node in the second story context list for playing according to the viewing requirement.
In step 503, in response to the second character appearing in the play frame of the first video clip, a second character control corresponding to the second character is displayed.
Optionally, in the process of playing the at least one first video clip, the terminal performs character recognition on the current frame of the first video clip to obtain candidate characters appearing in the playing picture of the first video clip.
Optionally, based on the candidate characters and the character relationship map corresponding to the target video, a second character appearing in the playing picture of the first video clip is determined, and a second character control corresponding to the second character is displayed, where the second character control can be understood as a shortcut entry for quickly switching from playing the video clips corresponding to the first character to playing the video clips corresponding to the second character, and a character association relationship exists between the second character and the first character.
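The character recognition and filtering described in this step might be organized as follows; `recognize_faces` stands in for whatever recognition model the terminal actually uses, and the relationship map layout is an assumption for illustration only:

```python
def detect_second_characters(frame, first_character, relationship_map, recognize_faces):
    """Return characters in the current frame that have an association with the first character.

    relationship_map: dict mapping character name -> set of related character names (assumed)
    recognize_faces:  callable(frame) -> list of candidate character names (assumed)
    """
    candidates = recognize_faces(frame)
    second_characters = []
    for name in candidates:
        if name == first_character:
            continue
        # only characters present in the map (i.e. core characters) that are related to
        # the first character are kept; a character control is then displayed for each
        if name in relationship_map and first_character in relationship_map.get(name, set()):
            second_characters.append(name)
    return second_characters
```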
The specific implementation of this step is the same as that of step 303, and will not be described here again.
And step 504, in response to receiving the triggering operation of the second character control, displaying a character introduction interface.
Optionally, relationship description content between the first character and the second character is also displayed in the second character control, where the relationship description content is used for indicating the character association relationship existing between the first character and the second character. Illustratively, the second character is a university classmate of the first character; after the terminal identifies the second character, the second character control is displayed in the interface, the second character control displays "university classmate of the first character, click to view the detailed introduction", and optionally, the second character control displays head portrait information corresponding to the second character.
Optionally, in response to receiving the trigger operation on the second character control, a character introduction interface corresponding to the second character is displayed in a superimposed manner on the current playing picture, where the character introduction interface includes introduction information of the second character, an entry control for viewing the at least one second video clip corresponding to the second character, and a relationship network of the first character in the target video, the relationship network includes relationship descriptions between the first character and at least one character in the target video, and optionally, the relationship description between the first character and the second character is highlighted in the relationship network.
Optionally, the character introduction interface further includes a play control, where the play control is used to play at least one second video segment corresponding to the second character, and in response to receiving a trigger operation of the play control, the current interface is switched to a play interface of at least one second video segment corresponding to the second character.
In step 505, the at least one second video clip associated with the second character is played in response to receiving a trigger operation on the play control in the character introduction interface.
Optionally, at least one video clip corresponding to the second role is obtained and played.
Referring specifically to fig. 8, fig. 8 shows an interface schematic diagram of displaying a relationship description provided by an exemplary embodiment of the present application. The terminal plays the first video clip corresponding to the first character in the target video, where the first video clip contains the first character 801. In response to the occurrence of a second character 802 in the playing picture of the first video clip, a second character control 803 is displayed in a superimposed manner in the current playing picture, where the second character control 803 displays the relationship description between the first character and the second character, for example, the second character is a subordinate of the first character. In response to receiving a trigger operation on the second character control, an introduction interface 804 is displayed in a superimposed manner on the current interface, where the introduction interface 804 includes introduction contents 805 of the second character, a play control 807 for playing the second video clips corresponding to the second character, and a relationship network 806 of the first character in the target video, and the relationship network 806 of the first character further includes a control 808 for viewing the complete character relationships of the target video. Optionally, a further element 809 is displayed in the introduction interface 804.
The specific implementation of this step is the same as that of step 304, and will not be described here again.
In summary, in the video playing method provided by the embodiment of the application, when the first video clip corresponding to the first character is watched, the second character in the first video clip is identified and an entry control for jumping to play the second video clip corresponding to the second character is provided in the playing picture, so that the user can quickly understand the character relationships among the characters in the target video and the efficiency of understanding the scenario of the target video is improved; in addition, by automatically analyzing the character relationships among the characters, other content associated with the target video can be further expanded.
In this embodiment, the video clips are aggregated based on the story development order corresponding to the characters or the story development order of the target video, and the story development among the characters is linked through the character controls, displayed in the playing picture, for switching video clips, so that the efficiency of the user in understanding the target video scenario is improved.
In this embodiment, by displaying the character introduction interface corresponding to the second character in a superimposed manner on the playing interface, the user can directly obtain, from the character introduction interface, the association relationship between the second character and the first character, the identity information corresponding to the second character, and the relationship network of the first character in the target video, so that unnecessary search steps are avoided and the efficiency with which the user obtains information about the target video is improved; in addition, controls for jumping to play the second video clips and for viewing the complete character relationship map are provided in the character introduction interface, which further improves the efficiency with which the user understands the target video story line.
In an alternative embodiment, referring to fig. 9, fig. 9 is a flowchart of a video playing method according to another exemplary embodiment of the present application. The method is described as applied to a terminal, and includes the following steps:
step 901, a first character control is displayed.
Optionally, the user selects a target video for watching, where the target video is used for indicating the video content to be played, and the target video includes at least one of a short video, a movie, a series episode, and a documentary, but is not limited thereto.
Optionally, in the related interface for displaying the target video, a character relationship map corresponding to the target video is displayed, where the character relationship map is used for indicating the relationships between the characters in the target video, and may also be used for indicating the relationships between the principal characters in the target video, which is not limited in the present application.
Optionally, the character relationship map includes a character control corresponding to each character, where the character control is used for indicating character information of the character in the target video, including but not limited to information of the performer who plays the character, the name of the character portrayed in the target video, the identity of the character in the target video, and information of the performer who dubs the character.
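One possible (assumed) shape for an entry of such a character relationship map, carrying the character information listed above, is sketched below; all field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class CharacterNode:
    name: str                      # name of the character in the target video
    identity: str                  # identity of the character in the target video
    actor: str                     # performer who plays the character
    voice_actor: str = ""          # performer who dubs the character, if any
    avatar_url: str = ""           # head portrait shown on the character control
    relations: dict = field(default_factory=dict)  # related character name -> relation text

# A tiny example map with one core character and one relation:
role_map = {
    "character A": CharacterNode("character A", "general", "actor X",
                                 relations={"character B": "subordinate of character A"}),
}
```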
The execution of this step is the same as that of step 301, and will not be described here again.
In step 902, a first video clip associated with a first persona is acquired in response to receiving a trigger operation for the first persona control.
Alternatively, the first character control may be understood as a shortcut entry for playing a video clip corresponding to the first character.
Optionally, in response to receiving a trigger operation on the first character control, where the trigger operation includes but is not limited to a click operation, a long-press operation, a drag operation, a trigger operation input through an external device, and a voice control operation, the first video clip corresponding to the first character is obtained.
The execution of this step is the same as that of step 302 and will not be described here again.
And step 903, in response to the second character appearing in the play screen of the first video clip, displaying a second character control corresponding to the second character.
Optionally, the first character corresponds to at least one first video segment, the at least one first video segment is played in sequence, and in the process of playing the at least one video segment, the terminal performs character recognition on the current frame of the first video segment to obtain candidate characters appearing in the playing picture of the first video segment.
Optionally, acquiring a role relation graph corresponding to the target video, wherein the role relation graph is used for indicating the association relation among the roles appearing in the target video, determining a second role appearing in a playing picture of the first video segment based on the candidate roles and the role relation graph, and displaying a second role control corresponding to the second role in the playing picture.
The execution of this step is the same as that of step 303, and will not be described here again.
Step 904, in response to receiving a trigger operation for the second character control, playing at least one second video clip associated with the second character.
Optionally, the second character control may be understood as a shortcut entry for quickly switching from playing the video clip associated with the first character to playing the video clip associated with the second character, and a character association relationship exists between the second character and the first character.
Optionally, in response to receiving a trigger operation on the second character control, where the trigger operation includes but is not limited to a click operation, a long-press operation, a drag operation, a trigger operation input through an external device, a voice control operation, and the like, the at least one second video clip corresponding to the second character is obtained and automatically played.
The execution of this step is the same as that of step 304, and will not be described here again.
In step 905, a return control is displayed in the play screen of the second video clip.
Optionally, a return control is displayed at a preset position of the playing picture, where the return control is used for indicating that the second video segment currently played is switched to the first video segment corresponding to the first character.
In some embodiments, when switching from playing the first video clip to playing the second video clip, or from playing the second video clip to playing the first video clip, after the switch succeeds, prompt information is displayed in a superimposed manner in the playing interface, where the prompt information is used for prompting that the video has been switched successfully; when the display time of the prompt information reaches a preset time, the display of the prompt information in the interface is cancelled, or a close control is provided in the prompt information, and the user actively cancels the display of the prompt information by triggering the close control.
Optionally, in the process in which the display time of the prompt information approaches the preset time, as the display time increases, the display transparency of the prompt information also increases, so that the prompt gradually fades out.
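A minimal sketch of the prompt behaviour described above, in which the prompt's transparency increases with display time so that it fades out after a preset duration; the timing values and the `render` callback are assumptions:

```python
import time

def show_switch_prompt(render, message="Switched successfully", duration=2.0, steps=10):
    """Display a prompt that gradually fades out over `duration` seconds.

    render: callable(message, opacity) drawing the prompt; opacity 1.0 = opaque, 0.0 = hidden.
    """
    for i in range(steps + 1):
        opacity = 1.0 - i / steps          # transparency rises as the display time increases
        render(message, opacity)
        time.sleep(duration / steps)
    render(message, 0.0)                   # cancel the display once the preset time is reached

# Example usage with a console "renderer":
# show_switch_prompt(lambda msg, a: print(f"{msg} (opacity={a:.1f})"))
```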
Step 906, in response to receiving the trigger operation of the return control, switching the currently played second video clip to play the first video clip associated with the first character.
Optionally, after the second video clip is switched from playing to playing the first video clip, the playing rule of the first video clip includes, but is not limited to, the following ways:
firstly, switching the currently played second video clip to play the first video clip according to the aggregation sequence of the first video clips corresponding to the first character, that is, replaying at least one first video clip corresponding to the first character after receiving the triggering operation of the return control.
And secondly, switching the second video clip which is currently played to the first video clip corresponding to a historical playing time stamp, wherein the historical playing time stamp is used for indicating the time stamp of the first video clip which is played from the last time.
Third, the first video clip is switched to and played according to the playing progress of the currently played second video clip. This includes: in response to the currently played second video clip having finished playing, sequentially playing the next first video clip after the first video clip that coincides with the second video clip, which prevents the user from repeatedly watching the same content before and after the video switch and improves the efficiency of watching the target video; and in response to the currently played second video clip not having finished playing, switching to the first video clip according to the playing progress of the currently played second video clip, so that the plot links up smoothly before and after switching between video clips corresponding to different characters, and the efficiency with which the user understands the scenario content of the target video is improved.
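The switching-back rules above can be sketched as follows; the clip dictionaries, field names and mode labels are illustrative assumptions rather than the claimed implementation:

```python
def resume_first_clip(first_clips, mode, history=None, second_clip=None, second_pos=None):
    """Decide which first video clip to resume, and at what offset, after the return control fires.

    first_clips: first character's clips in aggregation order, each {"index": i, "span": (start, end)}
                 with spans given on the target video timeline, in seconds
    history:     (clip, offset) recorded when the first clip was left (second manner)
    second_clip: the second video clip being left, {"span": (start, end)}
    second_pos:  seconds already played inside the second clip
    """
    if mode == "replay":                        # first manner: replay in aggregation order
        return first_clips[0], 0
    if mode == "history":                       # second manner: resume at the historical timestamp
        return history
    if mode == "progress":                      # third manner: follow the second clip's progress
        overlapping = next((c for c in first_clips
                            if c["span"][0] <= second_clip["span"][0] < c["span"][1]),
                           first_clips[0])      # the first clip that coincides with the second clip
        duration = second_clip["span"][1] - second_clip["span"][0]
        if second_pos >= duration:              # second clip finished: play the next first clip
            nxt = min(overlapping["index"] + 1, len(first_clips) - 1)
            return first_clips[nxt], 0
        # not finished: continue the first clip at the moment matching the current progress
        moment = second_clip["span"][0] + second_pos
        return overlapping, moment - overlapping["span"][0]
    raise ValueError(f"unknown mode: {mode}")
```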
Referring specifically to fig. 10, fig. 10 shows an interface schematic diagram of switching from the second video clip to the first video clip in an embodiment of the present application. After the second video clip corresponding to the second character is successfully played in response to the trigger operation on the second character control, first prompt information 1002 is displayed in the playing picture, where the first prompt information 1002 is used for prompting the user that the video switch has been completed successfully (here, the switch from the first video clip to the second video clip succeeded). In the interface for playing the at least one second video clip corresponding to the second character, the clip name and clip source corresponding to each second video clip are also displayed; for example, the clip title "the second character meets the first character" and the source "target video, episode 1, 06 minutes 30 seconds" are displayed for a second video clip, and the user can autonomously select, in the current interface, the second video clip to be watched. In response to receiving a trigger operation on a return control 1001, the currently played second video clip is switched back to playing the first video clip, and prompt information indicating that the switch back to the first video clip succeeded is displayed in the playing picture. It should be noted that, in fig. 10, returning to the original playing moment of the first video clip when switching back is merely an exemplary example; the playing pictures of the first video clip and the second video clip before and after the switch may be the same or different, which is not limited in the present application.
In the above embodiment, the second video clip may be in a playing state or a pause state during the process of switching from the first video clip to the second video clip, and the first video clip may be in a playing state or a pause state during the process of switching from the second video clip to the first video clip.
In summary, in the video playing method provided by the embodiment of the application, when the first video clip corresponding to the first character is watched, the second character in the first video clip is identified and an entry control for jumping to play the second video clip corresponding to the second character is provided in the playing picture, so that the user can quickly understand the character relationships among the characters in the target video and the efficiency of understanding the scenario of the target video is improved; in addition, by automatically analyzing the character relationships among the characters, other content associated with the target video can be further expanded.
According to the embodiment of the application, the video clips are aggregated based on the story development order corresponding to the characters or the story development order of the target video, and the story development among the characters is linked through the character controls, displayed in the playing picture, for switching video clips, so that the efficiency of the user in understanding the target video scenario is improved; by providing controls for switching video clips in the playing picture, the user can freely switch between the video clips corresponding to each character of the target video, which further improves the efficiency with which the user understands the target video story line.
In an alternative embodiment, referring specifically to fig. 11, fig. 11 is a flowchart of a video playing method provided in another exemplary embodiment of the present application. The method is described as applied to a terminal, and includes the following steps:
Step 1100, click the character A head portrait to enter the character A story line.
In some embodiments, a character relationship map is displayed in a content introduction interface or a playing interface of the target video, and a head portrait corresponding to at least one character in the target video is displayed in the character relationship map, so that the head portrait can be triggered to enter a story line corresponding to a specific role for watching.
Illustratively, a head portrait corresponding to character A is displayed in the character relationship map corresponding to the target video, and the user clicks the head portrait of character A to enter and watch the story line corresponding to character A, where the story line corresponding to character A is used for indicating the story experiences of character A, or the story experiences shared by character A and other characters; for example, referring to fig. 6, in the interface shown in fig. 6, the user clicks the head portrait 607 corresponding to character A to enter the story line corresponding to character A.
Step 1101, identifying at least one first video segment corresponding to character a in the target video.
In some embodiments, after the trigger operation on the head portrait of character A is received, the back end obtains at least one video clip corresponding to character A.
In other embodiments, candidate video clips carrying the character A tag and/or the target video tag are identified from a preset video library, the candidate video clips are compared with the target video and de-duplicated, the at least one first video clip corresponding to character A is determined, and the at least one first video clip is named by manual labeling, that is, each first video clip corresponds to a title; the back end then feeds the identified at least one first video clip back to the front end for display.
In some embodiments, the at least one first video clip is played according to the play order of the target video, and a story line history corresponding to character A is displayed in a designated area, where the story line history includes a shortcut entry for the at least one first video clip, the title corresponding to the at least one first video clip, and the appearance time/source of the at least one first video clip. Optionally, a prompt about newly appearing characters is displayed beside the display area of the title corresponding to the at least one first video clip; for example, character B newly appears in first video clip a, and the prompt information "character B appears" is displayed beside first video clip a.
Step 1102, automatically playing at least one first video clip according to the playing sequence of the target video.
In some embodiments, the first video clips are automatically played according to the play order of the target video, or a story line list corresponding to character A is displayed in the current interface and the user selects a target first video clip from the story line list to watch; after one first video clip has been watched, the next first video clip is automatically played according to the aggregation order of the at least one first video clip.
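A small sketch of this automatic sequential playback, assuming a blocking `play` callback and optionally a user-selected starting node:

```python
def autoplay(clips, play, selected_index=None):
    """Play aggregated clips in order, starting from a user-selected story node if any.

    clips: aggregated first video clips in their aggregation order
    play:  callable(clip) that blocks until the clip has finished playing (assumed)
    """
    start = selected_index if selected_index is not None else 0
    for clip in clips[start:]:
        play(clip)   # when one clip has finished, the next one plays automatically
```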
Step 1103, view character a storyline.
In some embodiments, after clicking the head image of the character a to obtain the story line segment corresponding to the character a, at least one first video segment corresponding to the character a is automatically played, and the user may select a target first video segment from the at least one first video segment displayed in the interface for viewing.
Step 1104, identify character B by comparing the current frame picture with the character relationship map.
In some embodiments, while the first video clip corresponding to character A is playing, the terminal background automatically extracts the playing picture of the current frame for face recognition, compares it against the character relationship map of the target video, identifies character B, and determines whether character B has a direct association relationship with character A; the determination process is described in step 1105 and step 1106 and is not detailed here.
Step 1105, it is determined whether character B is the core character of the target video.
Optionally, the core characters are used for indicating the primary characters that push forward the target video story line, for example, the first male lead, the second male lead, the female lead, and so on.
In some embodiments, determining whether the character B appears in the character relationship graph according to the face recognition result, if so, determining that the character B is a core character of the target video, and if not, continuing to identify the character appearing in the first video segment until the newly appearing character is the core character of the target video.
Step 1106, it is determined whether the number of second video clips corresponding to the character B exceeds a preset number.
Optionally, when the character B belongs to a core character of the target video, acquiring a second video segment corresponding to the character B;
When the number of second video clips corresponding to character B exceeds the preset number, the second video clips are aggregated, and the preset number can be set independently by the user.
Step 1107, aggregating role B storylines corresponding to role B.
Optionally, for the second video clips corresponding to character B, the story line described by each second video clip is labeled manually, and the manually labeled second video clips are aggregated into the character B story line corresponding to character B.
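Steps 1105 to 1107 can be sketched as the following filter-and-aggregate routine; the preset number, the clip fields and the ordering key are illustrative assumptions:

```python
def maybe_aggregate_storyline(character, relationship_map, clip_index, preset_number=3):
    """Aggregate a character's story line only when it is a core character with enough clips.

    relationship_map: the character relationship map of the target video (core characters only)
    clip_index:       dict mapping character name -> that character's video clips
    """
    if character not in relationship_map:        # step 1105: not a core character, keep identifying
        return None
    clips = clip_index.get(character, [])
    if len(clips) <= preset_number:              # step 1106: clip count does not exceed the threshold
        return None
    # step 1107: aggregate the (manually labelled) clips into the character's story line
    return sorted(clips, key=lambda c: (c["episode"], c["start_sec"]))
```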
Step 1108, displaying the head portrait corresponding to the role B, and showing the relationship description of the role B and the role A.
In some embodiments, after the aggregation of at least one second video clip corresponding to the character B is completed, displaying an avatar corresponding to the character B in a designated area in the playing interface of the first video clip, and displaying a description of the relationship between the character B and the character a.
Step 1109, click the character B head portrait.
In some embodiments, the user clicks the head portrait corresponding to character B to trigger it and enters the interface for viewing the story line corresponding to character B.
In other embodiments, in response to receiving the trigger operation on the character B head portrait, the play time/play progress of the story line clip corresponding to character A is recorded and stored.
Step 1110, click character B storyline.
In some embodiments, after clicking the head portrait of the character B, prompt information is displayed in the interface, wherein the prompt information is used for prompting a user whether to enter a story line of the character B, and in response to receiving the determining operation of the prompt information, the story line corresponding to the character B is displayed in the interface.
Step 1111, enter the character B storyline and play from the same moment as the character a storyline.
In response to receiving the trigger operation on the character B story line, the picture in the character B story line that belongs to the same clip and the same moment as the character A story line is automatically played, so that the story lines are linked before and after the switch and the smoothness of the pictures is ensured; this avoids the situation in which the pictures differ greatly before and after switching between different videos, and improves the video playing efficiency.
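The same-moment hand-off of step 1111 might look like the following sketch, assuming clip spans are expressed on the target video timeline:

```python
def enter_b_storyline_at_same_moment(b_clips, a_clip_span, a_position):
    """Locate the clip and offset in character B's story line matching character A's current moment.

    b_clips:     character B's clips, each {"span": (start, end)} on the target video timeline
    a_clip_span: (start, end) of the character A clip being watched, on the same timeline
    a_position:  seconds already played inside that character A clip
    """
    moment = a_clip_span[0] + a_position         # current moment on the target video timeline
    for clip in b_clips:
        start, end = clip["span"]
        if start <= moment < end:
            return clip, moment - start          # same segment, same moment
    return b_clips[0], 0                         # fallback: start of character B's story line
```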
Step 1112, view role B storyline.
In some embodiments, after the character B head portrait is clicked to obtain the story line clips corresponding to character B, the second video clip corresponding to character B is automatically played starting from the timestamp at which the playing of the character A story line ended.
In some embodiments, during the playing of the second video clip, the user may select a target second video clip from the displayed list of second videos for viewing.
Step 1113, click the return control in the character B story line to return to viewing character A.
In some embodiments, in the playing frame of the second video segment, a return control is displayed in a designated area of the interface, where the return control is used to instruct switching the storyline segment corresponding to the character B back to the storyline segment corresponding to the character a.
Step 1114, return to the original progress moment of the character a storyline original segment.
In some embodiments, in response to receiving a trigger operation on the return control, switching the storyline segment corresponding to the currently played character B back to the storyline segment corresponding to the character A, acquiring the playing time/playing progress of the storyline segment corresponding to the character A, and continuing to play the storyline segment of the character A according to the playing time/playing progress.
Step 1115, continue watching the video clip corresponding to the character a storyline.
In some embodiments, the user continues to view the plurality of first video clips included in the character a storyline.
In some optional embodiments, when other character avatars appear in the first video segment, the story line segments corresponding to other characters can be checked by clicking the other character avatars, and then free switching is realized by triggering a return control, so that a user can conveniently and quickly understand the relationship and the plot trend between the characters.
In summary, in the video playing method provided by the embodiment of the application, when the first video clip corresponding to the first character is watched, the second character in the first video clip is identified and an entry control for jumping to play the second video clip corresponding to the second character is provided in the playing picture, so that the user can quickly understand the character relationships among the characters in the target video and the efficiency of understanding the scenario of the target video is improved; in addition, by automatically analyzing the character relationships among the characters, other content associated with the target video can be further expanded.
In the embodiment of the application, the video clips are aggregated based on the story development order corresponding to the characters or the story development order of the target video, and the story development among the characters is linked through the character controls, displayed in the playing picture, for switching video clips, so that the efficiency of the user in understanding the target video scenario is improved.
According to the embodiment of the application, when the user watches the first video clip corresponding to the first character in the target video, the character relationships among the characters appearing in the first video clip are intelligently identified, unimportant characters in the target video are filtered out by setting character identification conditions, which effectively saves the computing resources of the terminal, and entry controls for jumping between the video clips corresponding to different characters/different story lines are provided in the playing picture, so that free switching is realized and the efficiency of information acquisition when the user watches the target video is effectively improved.
Fig. 12 is a block diagram of a video playing device according to an exemplary embodiment of the present application, and as shown in fig. 12, the device includes a display module 1201 and a playing module 1202;
The display module 1201 is configured to display a first character control, where the first character control is configured to play a first video segment associated with a first character in a target video;
A playing module 1202, configured to play the first video clip associated with the first character in response to receiving a triggering operation on the first character control;
The display module 1201 is further configured to display a second role control corresponding to a second role in response to occurrence of the second role in a play screen of the first video segment, where a role association relationship exists between the second role and the first role, and the second role control is configured to play the second video segment associated with the second role in the target video;
The playing module 1202 is further configured to play at least one second video clip associated with the second character in response to receiving a trigger operation of the second character control.
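The division into a display module and a playing module could be organized roughly as below; the class and method names are illustrative assumptions only:

```python
class VideoPlaybackApparatus:
    """Sketch of the device of fig. 12: a display module plus a playing module."""

    def __init__(self, display_module, playing_module):
        self.display = display_module    # draws character controls, lists and prompts
        self.player = playing_module     # fetches and plays video clips

    def on_first_character_triggered(self, first_character):
        clips = self.player.get_clips(first_character)   # at least one first video clip
        self.player.play(clips)

    def on_second_character_detected(self, second_character):
        self.display.show_character_control(second_character)

    def on_second_character_triggered(self, second_character):
        clips = self.player.get_clips(second_character)  # at least one second video clip
        self.player.play(clips)
```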
In an alternative embodiment, as shown in fig. 13, the playing module 1202 is further configured to continue playing the second video clip according to the playing content corresponding to the playing progress of the first video clip, where the playing progress is used to indicate the playing content indicated by the corresponding timestamp when the first video clip is switched.
In an optional embodiment, the display module 1201 is further configured to display, in response to receiving a trigger operation of the second character control, a first story context list corresponding to the second character, where the first story context list includes at least one first story node associated with the second character, where the i-th first story node corresponds to the i-th second video segment, i is a positive integer, and the at least one first story node corresponds to a node introduction content displayed, where the node introduction content is used to summarize the second video segment associated with the first story node.
In an optional embodiment, the display module 1201 is further configured to display the at least one first story node in the first story line list according to a story line development order of the target video.
In an alternative embodiment, the display module 1201 is further configured to display the at least one first story node in the first story context list according to a playing order of the at least one second video segment in the target video.
In an alternative embodiment, the playing module 1202 is further configured to sequentially play the at least one second video clip according to an arrangement order of the at least one first story node in the first story context list.
In an alternative embodiment, as shown in fig. 13, the apparatus further includes an identification module 1203, an acquisition module 1204, and a determination module 1205;
the identifying module 1203 is configured to perform role identification on the play image to obtain candidate roles appearing in the play image;
the obtaining module 1204 is configured to obtain a role relationship graph corresponding to the target video, where the role relationship graph is used to indicate an association relationship between roles that occur in the target video;
A determining module 1205, configured to determine the second character appearing in the play screen based on the candidate character and the character relationship map, and display the second character control.
In an alternative embodiment, as shown in fig. 13, the determining module 1205 is further configured to determine the second character appearing in the play screen;
the identifying module 1203 is further configured to identify, from a preset video library, a first candidate video segment having an association relationship with the second character;
The determining module 1205 is further configured to match the first candidate video segment with the target video, and determine, from the target video, a segment that coincides with the first candidate video segment as the at least one second video segment.
In an optional embodiment, as shown in fig. 13, the display module 1201 is further configured to display, on a video content introduction interface of the target video, at least one role control corresponding to the target video, where the role control is used to play a video segment associated with at least one role included in the target video, and the role control includes the first role control, or superimpose, on a video play interface of the target video, a floating layer window including the at least one role control.
In an alternative embodiment, as shown in fig. 13, the identifying module 1203 is further configured to identify, from a preset video library, a second candidate video segment having an association relationship with the first character;
The determining module 1205 is further configured to match the second candidate video segment with the target video, and determine, from the target video, a segment that coincides with the second candidate video segment as the at least one first video segment.

In an alternative embodiment, as shown in fig. 13, the display module 1201 is further configured to display, in response to receiving a trigger operation on the first character control, a second story context list corresponding to the first character, where the second story context list includes at least one second story node associated with the first character, the nth second story node corresponds to the nth first video segment, n is a positive integer, and the at least one second story node correspondingly displays node introduction content, where the node introduction content is used to summarize the first video segment corresponding to the second story node.
In an alternative embodiment, as shown in fig. 13, the display module 1201 is further configured to display the at least one second story node in the second story line list according to a story line development order of the target video, or display the at least one second story node in the second story line list according to a play order of the at least one first video clip in the target video.
In an alternative embodiment, as shown in fig. 13, the playing module 1202 is further configured to sequentially play the at least one first video clip according to an arrangement order of the at least one second story node in the second story context list.
In an optional embodiment, the display module 1201 is further configured to display a playlist of the at least one first video clip, where introduction contents of the first video clip are displayed in a play order of the at least one first video clip.
The display module 1201 is further configured to display a relationship description between the first character and the second character while displaying a second character control corresponding to the second character.
In an alternative embodiment, as shown in fig. 13, the display module 1201 is further configured to display a relationship network of the first character in the target video, where the relationship network includes a relationship description between the first character and at least one character in the target video, and highlight the relationship description between the first character and the second character in the relationship network.
In an alternative embodiment, as shown in fig. 13, the display module 1201 is further configured to display a return control during playing of the second video segment, where the return control is used to instruct to switch from playing the second video segment that is currently playing to playing the first video segment associated with the first character.
The playing module 1202 is further configured to switch, in response to receiving a trigger operation on the return control, a currently played second video clip to play the first video clip associated with the first character, where the switched first video clip is in a playing state or a pause state.
In an alternative embodiment, as shown in fig. 13, the playing module 1202 is further configured to switch the currently playing second video segment to play the first video segment according to the aggregation order of the first video segments associated with the first role, or switch the currently playing second video segment to the first video segment corresponding to a historical playing timestamp, where the historical playing timestamp is used to indicate a timestamp of playing the first video segment last time, or switch to play the first video segment according to a playing progress of the currently playing second video segment in response to the currently playing second video segment overlapping with the content of the first video.
In an alternative embodiment, as shown in fig. 13, the playing module 1202 is further configured to sequentially play a next first video clip corresponding to a first video clip overlapped with the second video clip in response to the playing of the currently played second video clip being completed;
and to switch to play the first video clip according to the playing progress of the currently played second video clip in response to the currently played second video clip not having finished playing.
In summary, in the video playing device provided by the embodiment of the application, when the first video clip associated with the first character is watched, the second character in the first video clip is identified, the character relationship between the first character and the second character is automatically analyzed, and an entry control for jumping to play the second video clips associated with the second character is provided in the playing picture, so that the user can quickly understand the character relationships among the characters in the target video and the efficiency of understanding the scenario of the target video is improved; in addition, by automatically analyzing the character relationships among the characters, the video clips are aggregated based on the story development order corresponding to the characters or the story development order of the target video, the story development among the characters is linked through the character controls, displayed in the playing picture, for switching video clips, the efficiency of understanding the target video scenario is improved, and other content associated with the target video can be further expanded.
It should be noted that, in the video playing device provided in the above embodiment, only the division of the above functional modules is used as an example, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the video playing device and the video playing method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the video playing device and the video playing method are detailed in the method embodiments and are not repeated herein.
Fig. 14 is a schematic diagram showing the structure of a server according to an exemplary embodiment of the present application. The server may be the server 220 shown in fig. 2. Specifically:
The server 220 includes a central processing unit (CPU, Central Processing Unit) 1401, a system memory 1404 including a random access memory (RAM, Random Access Memory) 1402 and a read only memory (ROM, Read Only Memory) 1403, and a system bus 1405 connecting the system memory 1404 and the central processing unit 1401. The server 220 also includes a basic input/output system (I/O system, Input Output System) 1406 to facilitate the transfer of information between various devices within the computer, and a mass storage device 1407 for storing an operating system 1413, application programs 1414, and other program modules 1415.
The basic input/output system 1406 includes a display 1408 for displaying information and an input device 1409, such as a mouse, keyboard, etc., for a user to input information. Wherein a display 1408 and an input device 1409 are connected to the central processing unit 1401 via an input output controller 1410 connected to the system bus 1405. The basic input/output system 1406 may also include an input/output controller 1410 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 1410 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1407 is connected to the central processing unit 1401 through a mass storage controller (not shown) connected to the system bus 1405. The mass storage device 1407 and its associated computer-readable media provide non-volatile storage for the server 220. That is, the mass storage device 1407 may include a computer readable medium (not shown) such as a hard disk or a compact disc read only memory (CD-ROM, Compact Disc Read Only Memory) drive.
Computer readable media may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, erasable programmable read-only memory (EPROM, Erasable Programmable Read Only Memory), electrically erasable programmable read-only memory (EEPROM, Electrically Erasable Programmable Read Only Memory), flash memory or other solid state memory devices, CD-ROM, digital versatile disks (DVD, Digital Versatile Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the ones described above. The system memory 1404 and mass storage device 1407 described above may be collectively referred to as memory.
According to various embodiments of the application, server 220 may also operate by being connected to remote computers on a network, such as the Internet. That is, the server 220 may be connected to the network 1412 through a network interface unit 1411 coupled to the system bus 1405, or the network interface unit 1411 may be used to connect to other types of networks or remote computer systems (not shown).
The memory also stores one or more programs, the one or more programs being configured to be executed by the CPU. It will be appreciated by those skilled in the art that the structure shown in fig. 14 does not constitute a limitation on the server, which may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
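The hardware description above leaves the content of the program modules 1415 open; in this embodiment they would carry the server-side logic that supplies the character relationship graph of a target video and the video segments associated with each character. The following is a minimal sketch of such a module, assuming hypothetical names (CharacterGraphService, CharacterNode, VideoClip); none of these identifiers come from the embodiment itself.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VideoClip:
    clip_id: str
    start_ms: int
    end_ms: int


@dataclass
class CharacterNode:
    character_id: str
    name: str
    related_ids: List[str] = field(default_factory=list)  # characters having an association relationship with this one
    clips: List[VideoClip] = field(default_factory=list)  # video segments associated with this character


class CharacterGraphService:
    """Hypothetical server-side module: keeps, per target video, the character
    relationship graph and the video segments associated with each character."""

    def __init__(self) -> None:
        self._graphs: Dict[str, Dict[str, CharacterNode]] = {}

    def register(self, video_id: str, nodes: List[CharacterNode]) -> None:
        self._graphs[video_id] = {n.character_id: n for n in nodes}

    def relationship_graph(self, video_id: str) -> List[CharacterNode]:
        # Sent to the terminal for rendering in the video content introduction interface.
        return list(self._graphs.get(video_id, {}).values())

    def clips_for(self, video_id: str, character_id: str) -> List[VideoClip]:
        # Looked up when a character control is triggered on the terminal.
        node = self._graphs.get(video_id, {}).get(character_id)
        return node.clips if node else []
```

In this sketch, relationship_graph supplies the data the terminal would render as the character relationship map, and clips_for resolves a triggered character control to its associated segments.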
The embodiment of the application also provides a computer device, which may be implemented as the terminal shown in fig. 2. The computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the video playing method provided by the above method embodiments.
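On the terminal side, the method embodiments reduce to an event-driven flow: display the character relationship graph, play the clips of whichever character control is triggered, and surface a second character's control when an associated character appears in the playback picture. The sketch below illustrates that flow using the hypothetical CharacterGraphService from the previous example; the UI and player hooks (show_control, play_clip) are placeholders rather than an actual terminal API.

```python
class VideoPlayerFlow:
    """Hypothetical terminal-side flow for the video playing method."""

    def __init__(self, graph_service: "CharacterGraphService", video_id: str) -> None:
        self.graph_service = graph_service
        self.video_id = video_id

    def on_search(self) -> None:
        # Display the character relationship graph in the video content introduction interface.
        for node in self.graph_service.relationship_graph(self.video_id):
            self.show_control(node.character_id, node.name)

    def on_control_triggered(self, character_id: str) -> None:
        # Play the video segment(s) associated with the triggered character control.
        for clip in self.graph_service.clips_for(self.video_id, character_id):
            self.play_clip(clip)

    def on_character_appears(self, playing_character_id: str, appearing_character_id: str) -> None:
        # While a first character's clip is playing, show a control for an associated second character.
        graph = {n.character_id: n for n in self.graph_service.relationship_graph(self.video_id)}
        first = graph.get(playing_character_id)
        if first and appearing_character_id in first.related_ids:
            second = graph[appearing_character_id]
            self.show_control(second.character_id, second.name)

    # --- placeholder UI/player hooks; a real terminal would wire these to its widgets and player ---
    def show_control(self, character_id: str, name: str) -> None:
        print(f"show control for {name} ({character_id})")

    def play_clip(self, clip: "VideoClip") -> None:
        print(f"play clip {clip.clip_id} [{clip.start_ms}-{clip.end_ms} ms]")
```

A caller would construct VideoPlayerFlow with a populated CharacterGraphService, invoke on_search when the target video is searched, and route control triggers and character-appearance events to the corresponding handlers.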
Embodiments of the present application further provide a computer readable storage medium having at least one instruction, at least one program, a code set, or an instruction set stored thereon, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the video playing method provided by the foregoing method embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the video playback method according to any one of the above embodiments.
Optionally, the computer readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a solid state drive (SSD), an optical disc, or the like. The random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM), among others. The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit the application; the scope of protection of the application is defined by the appended claims.

Claims (20)

CN202411149517.7A | 2022-03-09 | 2022-03-09 | Video playing method, device, equipment, medium and product | Active | CN118890513B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411149517.7A | CN118890513B (en) | 2022-03-09 | 2022-03-09 | Video playing method, device, equipment, medium and product

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN202210225472.1A | CN116781971B (en) | 2022-03-09 | 2022-03-09 | Video playback method and device
CN202411149517.7A | CN118890513B (en) | 2022-03-09 | 2022-03-09 | Video playing method, device, equipment, medium and product

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210225472.1A | Division | CN116781971B (en) | 2022-03-09 | 2022-03-09 | Video playback method and device

Publications (2)

Publication Number | Publication Date
CN118890513A (en) | 2024-11-01
CN118890513B (en) | 2025-09-16

Family

ID=88006818

Family Applications (3)

Application Number | Title | Priority Date | Filing Date
CN202210225472.1A | Active | CN116781971B (en) | 2022-03-09 | 2022-03-09 | Video playback method and device
CN202411148546.1A | Pending | CN118803350A (en) | 2022-03-09 | 2022-03-09 | Video playback method, device, equipment, medium and product
CN202411149517.7A | Active | CN118890513B (en) | 2022-03-09 | 2022-03-09 | Video playing method, device, equipment, medium and product

Family Applications Before (2)

Application Number | Title | Priority Date | Filing Date
CN202210225472.1A | Active | CN116781971B (en) | 2022-03-09 | 2022-03-09 | Video playback method and device
CN202411148546.1A | Pending | CN118803350A (en) | 2022-03-09 | 2022-03-09 | Video playback method, device, equipment, medium and product

Country Status (1)

Country | Link
CN (3) | CN116781971B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110225369A (en) * | 2019-07-16 | 2019-09-10 | Baidu Online Network Technology (Beijing) Co., Ltd. | Video selection playback method, device, equipment and readable storage medium
CN113891157A (en) * | 2021-11-11 | 2022-01-04 | Baidu Online Network Technology (Beijing) Co., Ltd. | Video playing method, video playing device, electronic equipment, storage medium and program product

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR100347710B1 (en) * | 1998-12-05 | 2002-10-25 | LG Electronics Inc. | Method and data structure for video browsing based on relation graph of characters
CN106021496A (en) * | 2016-05-19 | 2016-10-12 | Hisense Group Co., Ltd. | Video search method and video search device
US11328012B2 (en) * | 2018-12-03 | 2022-05-10 | International Business Machines Corporation | Visualization of dynamic relationships in a storyline
CN110337009A (en) * | 2019-07-01 | 2019-10-15 | Baidu Online Network Technology (Beijing) Co., Ltd. | Control method, device, equipment and storage medium for video playing
CN111314784B (en) * | 2020-02-28 | 2021-08-31 | Vivo Mobile Communication Co., Ltd. | A video playback method and electronic device
JP7051941B6 (en) * | 2020-06-30 | 2022-05-06 | GREE, Inc. | Terminal device control program, terminal device control method, terminal device, server device control method, method executed by one or more processors, and distribution system


Also Published As

Publication number | Publication date
CN118890513A (en) | 2024-11-01
CN116781971A (en) | 2023-09-19
CN118803350A (en) | 2024-10-18
CN116781971B (en) | 2024-08-16

Similar Documents

Publication | Publication Date | Title
JP7123122B2 | Navigating Video Scenes Using Cognitive Insights
CN109803180B (en) | Video preview generation method and device, computer equipment and storage medium
WO2021082668A1 (en) | Bullet screen editing method, smart terminal, and storage medium
US10299010B2 (en) | Method of displaying advertising during a video pause
US9342596B2 (en) | System and method for generating media bookmarks
US9244923B2 (en) | Hypervideo browsing using links generated based on user-specified content features
CN111368141A (en) | Video tag expansion method and device, computer equipment and storage medium
JP2019185738A (en) | System and method for associating textual summary with content media, program, and computer device
CN113824972A (en) | Live video processing method, device and equipment and computer readable storage medium
US11582522B1 (en) | Interactive entertainment content
CN114372172B (en) | Method, device, computer equipment and storage medium for generating video cover image
CN111800668A (en) | Bullet screen processing method, device, equipment and storage medium
CN116980718A (en) | Scenario recomposition method and device for video, electronic equipment and storage medium
CN113992973A (en) | Video summary generation method, device, electronic device and storage medium
KR101947553B1 (en) | Apparatus and Method for video edit based on object
CN114339360A (en) | Video processing method, related device and equipment
CN113688260B (en) | Video recommendation method and device
CN113891157A (en) | Video playing method, video playing device, electronic equipment, storage medium and program product
CN118890513B (en) | Video playing method, device, equipment, medium and product
JP2021012466A (en) | Metadata generation system, video content management system and programs
CN115052196A (en) | Video processing method and related equipment
CN116980646A (en) | Video data processing method, device, equipment and readable storage medium
US10990456B2 (en) | Methods and systems for facilitating application programming interface communications
CN115134648B (en) | A video playback method, device, equipment and computer readable storage medium
Nixon et al. | Video Shot Discovery Through Text2Video Embeddings in a News Analytics Dashboard

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
