CN114401425B - Video playing method and device - Google Patents

Video playing method and device

Info

Publication number
CN114401425B
CN114401425B (granted publication of application CN202210118045.3A; earlier published as CN114401425A)
Authority
CN
China
Prior art keywords
video
facial feature
feature parameter
parameter value
portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210118045.3A
Other languages
Chinese (zh)
Other versions
CN114401425A (en)
Inventor
喻昱
聂清阳
黄赞群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Original Assignee
Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan MgtvCom Interactive Entertainment Media Co Ltd
Priority to CN202210118045.3A
Publication of CN114401425A
Application granted
Publication of CN114401425B
Legal status: Active
Anticipated expiration

Abstract

The invention provides a video playing method and a video playing device. When a video application starting instruction uploaded by a user terminal is detected, whether a target facial feature parameter value corresponding to a first portrait exists is determined based on the acquired first portrait; if the target facial feature parameter value corresponding to the first portrait exists, video information and video key frames corresponding to the target facial feature parameter value are searched in a pre-built memory model, wherein the pre-built memory model is obtained by processing based on a second portrait acquired last time by the user terminal when the user terminal last detected the exit operation of the client exiting the video application; the video information and the video key frames are processed, and the video related to the video information is played from the video key frames. In this scheme, no manual searching is needed: the episode watched last time is located immediately through face recognition, so videos are located in time and the stickiness between users and the video application can be improved.

Description

Video playing method and device
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video playing method and apparatus.
Background
With the development of emerging media, the number and variety of episodes that users can choose to watch in video applications are rapidly increasing.
Currently, each time a user returns to a video application to continue watching a television show or movie, the user must find the previously viewed episode by searching or by looking through a play history list. If the name of the video has been forgotten, the user has to narrow down the video by searching for its lead actors, which makes the search take too long; and for older users who are unfamiliar with search interactions or pinyin input, it is difficult to find the previous episode at all. This approach requires manual searching, cannot locate videos in time, and is not conducive to improving the stickiness between users and the video application.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a video playing method and apparatus, so as to solve the problems in the prior art that manual searching is required, videos cannot be located in time, and the stickiness between users and video applications is difficult to improve.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
the first aspect of the embodiment of the invention shows a video playing method, which is applied to a server, and comprises the following steps:
when a video application starting instruction uploaded by a user terminal is detected, acquiring a first portrait currently acquired by the user terminal through a camera;
determining, based on the first portrait, whether a target facial feature parameter value corresponding to the first portrait exists;
If the target facial feature parameter value corresponding to the first portrait exists, searching video information and video key frames corresponding to the target facial feature parameter value in a pre-built memory model, wherein the pre-built memory model is obtained by processing based on a second portrait acquired by the user terminal last time when the user terminal last detects the exit operation of the client exiting the video application;
And processing the video information and the video key frames, and playing the video related to the video information from the video key frames.
Optionally, when the user terminal detects that the client exits from the video application last time, a process of processing the obtained memory model based on the second portrait acquired last time by the user terminal includes:
When the user terminal detects the exit operation of the client exiting the video application, acquiring a second portrait collected by the user terminal through a camera, together with the video information and video key frames of the video played before the exit operation was executed;
analyzing the second image, and determining a second facial feature parameter value corresponding to the second image;
Establishing an association between the second facial feature parameter value and video information and video key frames of the video played before the exit operation;
And constructing a memory model based on the association between the second facial feature parameter value and the video information and video key frames of the video played before the exit operation.
Optionally, before analyzing the second image and determining the second facial feature parameter value corresponding to the second image, the method further includes:
Judging whether the second portrait is a pre-registered portrait or not;
and if the second image is a pre-registered image, executing the step of analyzing the second image and determining a second facial feature parameter value corresponding to the second image.
Optionally, the determining, based on the first portrait, whether a target facial feature parameter value corresponding to the first portrait exists includes:
analyzing the first portrait, and determining a first facial feature parameter value corresponding to the first portrait;
Traversing a facial feature storage list, and searching whether a second facial feature parameter value with the similarity with the first facial feature parameter value being larger than a preset similarity exists in the facial feature storage list;
And if the facial feature parameter value exists, taking the second facial feature parameter value with the similarity with the first facial feature parameter value being larger than the preset similarity as a target facial feature parameter value.
Optionally, the processing the video information and the video key frame and playing the video related to the video information from the video key frame includes:
Generating an indication interface based on the video information and the video key frames, and sending the indication interface to the user terminal for display by the user terminal;
And when the play confirmation input by the client through the indication interface of the user terminal is received, playing the video related to the video information from the video key frame based on the video information and the video key frame.
A second aspect of an embodiment of the present invention shows a video playing device, including:
the acquisition module is used for acquiring a first image acquired by the user terminal through the camera when detecting a video application starting instruction uploaded by the user terminal;
A portrait recognition module, configured to determine, based on the first portrait, whether a target facial feature parameter value corresponding to the first portrait exists;
The searching module is used for searching video information and video key frames corresponding to the target facial feature parameter values in a pre-built memory model if the target facial feature parameter values corresponding to the first portrait exist, wherein the pre-built memory model is built based on a memory portrait module and a portrait association video module;
And the processing module is used for processing the video information and the video key frames and playing the video related to the video information from the video key frames.
Optionally, the portrait memorizing module is specifically configured to:
When the user terminal detects the exit operation of the client exiting the video application, acquiring a second portrait collected by the user terminal through a camera, together with the video information and video key frames of the video played before the exit operation was executed; analyzing the second portrait, and determining a second facial feature parameter value corresponding to the second portrait;
Correspondingly, the portrait association video module is specifically configured to: establishing an association between the second facial feature parameter value and video information and video key frames of the video played before the exit operation; and constructing a memory model based on the association between the second facial feature parameter value and the video information and video key frames of the video played before the exit operation.
Optionally, the portrait identification module is specifically configured to:
Analyzing the first portrait, and determining a first facial feature parameter value corresponding to the first portrait; traversing a facial feature storage list, and searching whether a second facial feature parameter value with the similarity with the first facial feature parameter value being larger than a preset similarity exists in the facial feature storage list; and if the facial feature parameter value exists, taking the second facial feature parameter value with the similarity with the first facial feature parameter value being larger than the preset similarity as a target facial feature parameter value.
A third aspect of the embodiments of the present invention shows an electronic device, where the electronic device is configured to execute a program, and the program, when run, performs the video playing method according to the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention shows a computer storage medium, where the storage medium includes a stored program, and when the program runs, the device where the storage medium is located is controlled to execute the video playing method according to the first aspect of the embodiments of the present invention.
The video playing method and device provided by the embodiment of the invention are applied to the server, and the method comprises the following steps: when a video application starting instruction uploaded by a user terminal is detected, acquiring a first portrait currently collected by the user terminal through a camera; determining, based on the first portrait, whether a target facial feature parameter value corresponding to the first portrait exists; if the target facial feature parameter value corresponding to the first portrait exists, searching video information and video key frames corresponding to the target facial feature parameter value in a pre-built memory model, wherein the pre-built memory model is obtained by processing based on a second portrait acquired last time by the user terminal when the user terminal last detected the exit operation of the client exiting the video application; and processing the video information and the video key frames, and playing the video related to the video information from the video key frames. In the embodiment of the invention, no manual searching is needed: when a video application starting instruction uploaded by a user terminal is detected, whether a target facial feature parameter value corresponding to the first portrait exists is determined from the collected first portrait, and the video information and video key frames corresponding to the target facial feature parameter value are searched in a pre-built memory model, so that the episode watched last time is located immediately through face recognition, videos are located in time, and the stickiness between the user and the video application can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application architecture diagram of a user terminal and a server according to an embodiment of the present invention;
fig. 2 is a flow chart of a video playing method according to an embodiment of the present invention;
fig. 3 is a flowchart of another video playing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video playing device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the description of "first", "second", etc. in this disclosure is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implying an indication of the number of technical features being indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the combination of the technical solutions should be considered to be absent and not within the scope of protection claimed in the present invention.
In the present disclosure, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the embodiment of the invention, no manual searching is needed: when a video application starting instruction uploaded by a user terminal is detected, whether a target facial feature parameter value corresponding to the first portrait exists is determined from the collected first portrait, and the video information and video key frames corresponding to the target facial feature parameter value are searched in a pre-built memory model, so that the episode watched last time is located immediately through face recognition, videos are located in time, and the stickiness between the user and the video application can be improved.
Fig. 1 shows an application architecture diagram of a user terminal and a server according to an embodiment of the present invention.
There are a plurality of user terminals 10, specifically user terminal 11, user terminal 12, and so on up to user terminal 1N, N in total, where N is a positive integer greater than or equal to 1.
The user terminal 10 is connected with the server 20 by wireless.
The processing procedure for video playing is realized based on the application architecture, and the processing procedure comprises the following steps:
When detecting a video application opening instruction, the user terminal 10 uploads the video application opening instruction to the server 20; when receiving a shooting instruction sent by the server 20, it opens its camera to photograph the client, ensuring that the face, namely the first portrait, is correctly captured, and uploads the first portrait to the server 20.
It should be noted that the user terminal 10 may be a handheld or portable device, such as a mobile phone, a tablet device, or a television.
The user terminal 10 may provide support for shooting hardware, i.e. its own camera, for taking dynamic video and still pictures.
The server 20 is configured to obtain the first portrait uploaded by the user terminal; determine, based on the first portrait, whether a target facial feature parameter value corresponding to the first portrait exists; if the target facial feature parameter value corresponding to the first portrait exists, search video information and video key frames corresponding to the target facial feature parameter value in a pre-built memory model, wherein the pre-built memory model is obtained by processing based on a second portrait acquired last time by the user terminal when the user terminal last detected the exit operation of the client exiting the video application; and process the video information and the video key frames, and play the video related to the video information from the video key frames.
In the embodiment of the invention, no manual searching is needed: when a video application starting instruction uploaded by a user terminal is detected, whether a target facial feature parameter value corresponding to the first portrait exists is determined from the collected first portrait, and the video information and video key frames corresponding to the target facial feature parameter value are searched in a pre-built memory model, so that the episode watched last time is located immediately through face recognition, videos are located in time, and the stickiness between the user and the video application can be improved.
Based on the application architecture shown in the above embodiment of the present invention, referring to fig. 2, a flow chart of a video playing method shown in the embodiment of the present invention is shown, where the method is applicable to a server, and the method includes:
Step S201: when a video application starting instruction uploaded by a user terminal is detected, a first image currently acquired by the user terminal through a camera is acquired.
Optionally, when detecting a video application opening instruction, the user terminal uploads the video application opening instruction to the server, and when receiving a shooting instruction sent by the server, opens its camera to photograph the client and obtain the first portrait, then uploads the first portrait to the server.
It should be noted that the first portrait is a complete face captured in dynamic video or still pictures.
In the specific implementation process of step S201, when a video application start instruction uploaded by a user terminal is detected, a shooting instruction is sent to the user terminal based on the video application start instruction, so that the user terminal starts a camera of the user terminal, a client, namely a first portrait, is shot by the camera of the user terminal, and the first portrait is uploaded to a server. The server acquires a first image currently acquired by the user terminal through the camera.
Step S202: based on the first portrait, determining whether a target facial feature parameter value corresponding to the first portrait exists, if so, executing step S203, and if not, indicating that a user corresponding to the user terminal opens the video application for the first time.
In the step S202, the process of determining the target facial feature parameter value corresponding to the first portrait based on the first portrait includes the following steps:
step S11: and analyzing the first human image to determine a first facial feature parameter value corresponding to the first human image.
In the specific implementation process of step S11, the first portrait is analyzed, and feature values of the face frame, namely the facial contour and the five facial features, are extracted to form a set of parameter values, namely the first facial feature parameter value.
Optionally, after the first facial feature parameter value is obtained, it is recorded and labeled R.
Wherein R comprises R1 and R2 … Rn, and n is a positive integer greater than 0.
Step S12: traversing a facial feature storage list, and searching whether a second facial feature parameter value whose similarity with the first facial feature parameter value is greater than the preset similarity exists in the facial feature storage list; if so, executing step S13, and if not, it indicates that the user corresponding to the user terminal is opening the video application for the first time.
In the specific implementation of step S12, all the second facial feature parameter values in the facial feature storage list are traversed, the similarity between the first facial feature parameter value and each second facial feature parameter value is calculated, and it is determined whether there is a second facial feature parameter value whose similarity is greater than the preset similarity; if so, step S13 is executed, and if not, it indicates that the user corresponding to the user terminal is opening the video application for the first time.
It should be noted that the preset similarity is preset by a technician, for example, may be set to 95%.
The facial feature storage list is used for storing second facial feature parameter values corresponding to second human images acquired by the corresponding user terminals when the user terminals exit from the video application in advance.
The similarity between the first facial feature parameter value and a second facial feature parameter value refers to the ratio of the number of identical parameter values shared by the two to the total number of parameter values.
Step S13: and taking the second facial feature parameter value with the similarity larger than the preset similarity with the first facial feature parameter value as a target facial feature parameter value.
In the specific implementation process of step S13, a second facial feature parameter value having a similarity greater than a preset similarity with the first facial feature parameter value is directly used as a target facial feature parameter value, and step S203 is executed.
Optionally, if the number of the second facial feature parameter values with the similarity greater than the preset similarity is determined to be more than one, the second facial feature parameter value with the highest similarity is used as the target facial feature parameter value.
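The matching described in steps S11 to S13 can be sketched as follows. This is a minimal illustration under assumptions, not the patent's implementation: the similarity measure follows the ratio definition given for step S12, and all function and variable names are illustrative.

```python
def similarity(first: list[float], second: list[float], tol: float = 1e-6) -> float:
    """Similarity per step S12: the ratio of the number of identical
    parameter values to the total number of parameter values."""
    if not first or len(first) != len(second):
        return 0.0
    same = sum(1 for a, b in zip(first, second) if abs(a - b) <= tol)
    return same / len(first)


def find_target_feature_value(first_value, feature_store, preset_similarity=0.95):
    """Steps S12-S13: traverse the facial feature storage list and return the
    second facial feature parameter value whose similarity with the first
    value is greater than the preset similarity; when several qualify, keep
    the one with the highest similarity. Returns None when no entry qualifies,
    i.e. the user is opening the video application for the first time."""
    best, best_sim = None, preset_similarity
    for second_value in feature_store:
        sim = similarity(first_value, second_value)
        if sim > best_sim:
            best, best_sim = second_value, sim
    return best
```

With a preset similarity of 0.95 as suggested above, only a near-exact parameter match resumes playback; a lower threshold trades precision for recall.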
Step S203: and searching video information and video key frames corresponding to the target facial feature parameter values in a pre-constructed memory model.
In step S203, the pre-built memory model is obtained by processing based on the second portrait acquired last time by the user terminal when the user terminal detects the logout operation of the client logout video application last time.
It should be noted that, the pre-constructed memory model is used for storing the corresponding relation between the facial feature parameter values and the video information and video key frames.
In the specific implementation process of step S203, a pre-built memory model is traversed, and video information and video key frames corresponding to the target facial feature parameter values are searched.
Step S204: and processing the video information and the video key frames, and playing the video related to the video information from the video key frames.
It should be noted that, in the process of executing step S204 to process the video information and the video key frame and playing the video related to the video information from the video key frame, the method includes the following steps:
step S21: and generating an indication interface based on the video information and the video key frames, and sending the indication interface to the user terminal for display by the user terminal.
In the specific implementation of step S21, the video information and the video key frames are packaged to generate an indication interface for the client to make a selection, and the indication interface is sent to the user terminal so that the user terminal can display it and the client can choose, through the user terminal, whether to play the video from the video key frame.
Step S22: when the play confirmation input by the client through the indication interface of the user terminal is received, playing the video related to the video information from the video key frame based on the video information and the video key frame.
In the specific implementation process of step S22, if the play confirmation input through the indication interface of the user terminal is received, the user terminal is controlled to play the video related to the video information from the video key frame.
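Steps S21 and S22 can be pictured as building a small confirmation payload and acting on the client's choice. The field names and the returned tuple are assumptions for illustration; the patent only requires that the terminal display the selection and that playback start from the key frame on confirmation.

```python
def build_indication_interface(video_info: str, key_frame: str) -> dict:
    """Step S21: package the video information and key frame into an
    indication interface for the user terminal to display.
    Field names are illustrative, not from the patent."""
    return {
        "prompt": f"Continue watching {video_info} from where you left off?",
        "video_info": video_info,
        "key_frame": key_frame,
        "options": ["play", "cancel"],
    }


def handle_client_choice(interface: dict, choice: str):
    """Step S22: on a confirmed play, return the playback instruction
    (resume from the key frame); otherwise return None so the client
    can browse normally."""
    if choice == "play":
        return ("play_from", interface["video_info"], interface["key_frame"])
    return None
```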
In the embodiment of the invention, no manual searching is needed: when a video application starting instruction uploaded by a user terminal is detected, whether a target facial feature parameter value corresponding to the first portrait exists is determined from the collected first portrait, and the video information and video key frames corresponding to the target facial feature parameter value are searched in a pre-built memory model, so that the episode watched last time is located immediately through face recognition, videos are located in time, and the stickiness between the user and the video application can be improved.
Based on the video playing method shown in the embodiment of the present invention, when the user terminal detects the logout operation of the client logout video application last time, the process of processing the memory model based on the second portrait collected last time by the user terminal is shown in fig. 3, and includes the following steps:
step S301: when the user terminal detects the exit operation of the client exiting the video application, the second portrait acquired by the user terminal through the camera is acquired, and video information and video key frames of the video played before the exit operation is executed.
Optionally, when detecting the exit operation of exiting the video application, the user terminal starts the camera of the user terminal to shoot the head portrait of the client, namely the second portrait, through the camera of the user terminal, and packages and uploads the second portrait and the video information and the video key frame of the video played before the user performs the exit operation, which are recorded at the moment, to the server.
It should be noted that the second portrait is likewise a complete face captured in dynamic video or still pictures.
In the specific implementation process of step S301, when the user terminal detects the exit operation of the client exiting the video application, the second image acquired by the user terminal through the camera and the video information and the video key frame of the video played by the user before executing the exit operation are acquired.
Step S302: judging whether the second portrait is a pre-registered portrait; if the second portrait is a pre-registered portrait, step S303 is executed; if the second portrait is not a pre-registered portrait, a registration instruction is sent to the user terminal so that the client registers based on the registration instruction, and step S301 is executed again.
In the specific implementation of step S302, in order to determine whether the client is a registered user, it is judged whether there is a portrait consistent with the second portrait among the pre-registered portraits; if there is, indicating that the second portrait is a pre-registered portrait, step S303 is executed; if the second portrait is not a pre-registered portrait, a registration instruction is sent to the user terminal so that the client registers based on the registration instruction, and step S301 is executed again.
The pre-registered portraits refer to portraits uploaded by a user at or after registration, where a non-member can upload only one portrait and a member user can upload M portraits.
The value of M is a positive integer greater than or equal to 2, and M is set by a technician according to practical situations, for example, can be set to 3.
Step S303: and analyzing the second human image to determine a second facial feature parameter value corresponding to the second human image.
It should be noted that the implementation procedure of step S303 is the same as that of step S11, and can be seen from each other.
Optionally, the method further comprises: storing the identification id of the registered user corresponding to each second facial feature parameter value into the memory model, so that each identification id corresponds to its second facial feature parameter value, and the parameter values R1, R2, ..., Rn corresponding to all facial information of each portrait are stored in the memory model.
Step S304: and establishing an association between the second facial feature parameter value and video information and video key frames of the video played before the exit operation.
In the specific implementation process of step S304, a correspondence between the second facial feature parameter value and the video information and the video key frame of the video played before the exit operation is established.
Step S305: and constructing a memory model based on the association between the second facial feature parameter value and the video information and video key frames of the video played before the exit operation.
In the specific implementation of step S305, a memory model is constructed by associating each second facial feature parameter value with the video information of the video played before the exit operation and the video key frame, that is, the memory model stores all the second facial feature parameter values that have been entered, and the second facial feature parameter values correspond to the video key frame and the video information that were watched last time, such as R1 corresponds to the key frame a of the video 1.
It should be noted that the next time the user terminal detects an exit operation of exiting the video application, the memory model is updated through steps S301 to S305, overwriting the previously stored video information and video key frame.
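The memory model of steps S304 and S305, including the overwrite-on-next-exit behavior, can be sketched as a simple mapping. All names here (`MemoryModel`, `record_exit`, `lookup`) are illustrative assumptions, not taken from the patent; the key is a hashable facial feature parameter value.

```python
class MemoryModel:
    """Minimal sketch: maps a second facial feature parameter value
    (hashable, e.g. a tuple) to the registered user's id and the video
    information and key frame of the last-watched video."""

    def __init__(self):
        self._entries = {}

    def record_exit(self, feature_value, user_id, video_info, key_frame):
        # A later exit for the same user overwrites the old entry, matching
        # the update behavior described for repeated runs of S301-S305.
        self._entries[feature_value] = {
            "user_id": user_id,
            "video_info": video_info,
            "key_frame": key_frame,
        }

    def lookup(self, feature_value):
        # Returns None when no target facial feature parameter value exists,
        # in which case the normal start-up flow would apply.
        return self._entries.get(feature_value)
```

Used this way, a start-up lookup by recognized feature value directly yields the video information and key frame to resume from.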
In order to better understand the contents shown in the above steps S301 to S305, the following description will be made with reference to an example.
When the user terminal Q detects an exit operation of exiting the video application, it starts its camera to capture the head portrait of client W, namely the second portrait, and packages and uploads to the server the second portrait together with the video information a and video key frame a1, recorded at that moment, of the video played before the user performed the exit operation.
If the second portrait is determined to be a pre-registered portrait, the second portrait is analyzed, and feature values of the face frame and the five facial features are extracted to obtain the corresponding second facial feature parameter value R; the identification id of the registered user corresponding to R is stored in the memory model so that the identification id corresponds to R. The second facial feature parameter value R is then associated with key frame a1 of video a, and the memory model is updated with this association so that the memory model stores key frame a1 of video a in correspondence with the second facial feature parameter value R.
In the embodiment of the invention, when the user terminal detects an exit operation of the client exiting the video application, the second portrait collected by the user terminal through the camera is acquired, together with the video information and video key frame of the video played before the exit operation was performed; if the second portrait is a pre-registered portrait, the second portrait is analyzed to determine the corresponding second facial feature parameter value. An association is then established between the second facial feature parameter value and the video information and video key frame of the video played before the exit operation, and the memory model is constructed based on this association. In this way, the episode watched last time can be located immediately through face recognition, the video is located in time, and the stickiness between users and the video application can be improved.
Corresponding to the video playing method disclosed in the embodiment of the present invention, the embodiment of the present invention also correspondingly discloses a video playing device, as shown in fig. 4, where the video playing device includes:
And the acquisition module 401 is configured to acquire the first portrait collected by the user terminal through the camera when the video application starting instruction uploaded by the user terminal is detected.
A portrait recognition module 402, configured to determine, based on the first portrait, whether a target facial feature parameter value corresponding to the first portrait exists.
And the searching module 403 is configured to, if it is determined that a target facial feature parameter value corresponding to the first portrait exists, search a pre-built memory model for the video information and video key frame corresponding to the target facial feature parameter value, where the pre-built memory model is constructed based on a portrait memorizing module and a portrait-associated-video module.
And the processing module 404 is used for processing the video information and the video key frames and playing the video related to the video information from the video key frames.
It should be noted that, the specific principle and execution process of each unit in the video playing device disclosed in the above embodiment of the present invention are the same as those of the video playing method implemented in the above embodiment of the present invention, and reference may be made to corresponding parts in the video playing method disclosed in the above embodiment of the present invention, and no redundant description is given here.
In the embodiment of the invention, no manual searching is needed: when a video application starting instruction uploaded by the user terminal is detected, whether a target facial feature parameter value corresponding to the first portrait exists is determined from the collected first portrait, and the video information and video key frame corresponding to the target facial feature parameter value are searched for in the pre-built memory model, so that the episode watched last time is located immediately through face recognition, the video is located in time, and the stickiness between users and the video application can be improved.
Based on the video playing device shown in the above embodiment of the present invention, the portrait memorizing module is specifically configured to:
When the user terminal detects an exit operation of the client exiting the video application, acquiring the second portrait collected by the user terminal through the camera, and the video information and video key frame of the video played before the exit operation was performed; analyzing the second portrait and determining the second facial feature parameter value corresponding to the second portrait;
Correspondingly, the portrait association video module is specifically configured to: establishing an association between the second facial feature parameter value and video information and video key frames of the video played before the exit operation; and constructing a memory model based on the association between the second facial feature parameter value and the video information and video key frames of the video played before the exit operation.
Based on the video playing device shown in the above embodiment of the present invention, the portrait recognition module 402 is specifically configured to:
Analyzing the first portrait and determining a first facial feature parameter value corresponding to the first portrait; traversing a facial feature storage list to search whether the list contains a second facial feature parameter value whose similarity to the first facial feature parameter value is greater than a preset similarity; and if so, taking that second facial feature parameter value as the target facial feature parameter value.
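The matching step of the recognition module can be sketched as a linear scan over the stored second facial feature parameter values. This is an illustrative Python sketch: the patent fixes neither the similarity metric nor a threshold value, so the inverse-distance similarity and the default threshold below are assumptions.

```python
def find_target_feature(first_value, stored_values, threshold=0.9):
    """Traverse the facial feature storage list and return the first stored
    second facial feature parameter value whose similarity to the first
    portrait's value exceeds the preset threshold, or None if no match."""
    def similarity(a, b):
        # Assumed metric: Euclidean distance mapped into (0, 1], where
        # identical vectors give similarity 1.0.
        d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + d)

    for second_value in stored_values:
        if similarity(first_value, second_value) > threshold:
            return second_value  # target facial feature parameter value
    return None  # no match: fall back to the normal start-up flow
```

A returned value would then be used as the key for the memory-model lookup; None means the first portrait matches no registered user.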
Based on the video playing device shown in the above embodiment of the present invention, the processing module 404 is specifically configured to:
Generating an indication interface based on the video information and the video key frame, and sending the indication interface to the user terminal for display; and when a play confirmation input by the client through the indication interface of the user terminal is received, playing the video related to the video information from the video key frame based on the video information and the video key frame.
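The processing module's confirm-then-resume flow can be sketched as two small functions. All names and the dictionary shape are hypothetical; the patent only requires that an indication interface be shown and that playback start from the key frame after the client confirms.

```python
def build_indication(video_info, key_frame):
    """Build the data behind the indication interface shown on the user
    terminal: a resume prompt plus the video info and key frame it refers to."""
    return {
        "prompt": f"Continue watching {video_info} from {key_frame}?",
        "video_info": video_info,
        "key_frame": key_frame,
    }

def handle_confirmation(indication, confirmed):
    """On a play confirmation, resume from the stored key frame; otherwise
    fall back to the normal home screen (assumed fallback behavior)."""
    if confirmed:
        return ("play", indication["video_info"], indication["key_frame"])
    return ("home", None, None)
```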
The embodiment of the invention also discloses an electronic device for running a program, where the video playing method disclosed in FIG. 2 and FIG. 3 above is executed when the program runs.
The embodiment of the invention also discloses a computer storage medium storing a program, where the device on which the storage medium is located is controlled to execute the video playing method disclosed in FIG. 2 and FIG. 3 above when the program runs.
In the context of this disclosure, a computer storage medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be cross-referenced, and each embodiment mainly describes its differences from the others. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively simple, and reference may be made in part to the description of the method embodiment. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

CN202210118045.3A | 2022-02-08 | Video playing method and device | Active | CN114401425B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210118045.3A | 2022-02-08 | 2022-02-08 | CN114401425B (en) Video playing method and device


Publications (2)

Publication Number | Publication Date
CN114401425A (en) | 2022-04-26
CN114401425B (en) | 2024-08-06

Family

ID=81232408

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210118045.3A | Active CN114401425B (en) | 2022-02-08 | 2022-02-08

Country Status (1)

Country | Link
CN | CN114401425B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111698261A (en)* | 2020-06-23 | 2020-09-22 | Hangzhou Xiangyi Technology Co., Ltd. | Video playing method, device, equipment and storage medium based on streaming media

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105868259A (en)* | 2015-12-29 | 2016-08-17 | LeTV Zhixin Electronic Technology (Tianjin) Co., Ltd. | Video recommendation method and device based on face identification
CN110087131B (en)* | 2019-05-28 | 2021-08-24 | Hisense Group Co., Ltd. | Television control method and main control terminal in television system
CN113992992B (en)* | 2021-10-25 | 2024-04-19 | Shenzhen Konka Electronic Technology Co., Ltd. | Fragmentation film watching processing method and device based on face recognition and intelligent television




Legal Events

Date | Code | Title
 | PB01 | Publication
 | SE01 | Entry into force of request for substantive examination
 | GR01 | Patent grant
