CN104951479A - Video content detecting method and device - Google Patents

Video content detecting method and device

Info

Publication number
CN104951479A
CN104951479A
Authority
CN
China
Prior art keywords
user
video
current video
gather
behavioural information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410126946.2A
Other languages
Chinese (zh)
Inventor
王斌
郑志光
纪东方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc
Priority to CN201410126946.2A
Publication of CN104951479A
Legal status: Pending

Abstract

An embodiment of the invention relates to the technical field of video processing and discloses a video content detection method and device. The method includes: collecting user feedback behaviour information according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video; and marking the video segment in the current video that corresponds to the user feedback behaviour information. The feedback behaviour a user exhibits while watching a video is used as the key information for locating the video segments containing the video's core content: the feedback behaviours to collect are chosen according to the video type, and when a matching behaviour is captured, the corresponding video segment is taken to belong to the core content and is marked. In this way the positions of the video's core content can be detected, which helps improve the efficiency of video use and saves the user time in screening and browsing videos.

Description

Video content detection method and device
Technical field
Embodiments of the present disclosure relate to the technical field of video processing, and in particular to a video content detection method and device.
Background technology
With the rapid development of Internet technology, video distribution has become ever more convenient, and video has become a main source of content for people's entertainment and study. However, because video resources are enormous in number, varied in kind, diverse in subject matter and uneven in quality, a user often has to spend considerable time working out which videos actually meet his or her needs. If the core content of a video, in other words its highlights, could be detected, that is to say located, the time users spend screening and browsing videos could undoubtedly be greatly reduced. In the related art, however, there is still no technical scheme capable of detecting a video's core content.
Summary of the invention
In view of this, the object of the embodiments of the present disclosure is to provide a video content detection method and device, so as to locate and identify the video segments containing the core content of a video and improve the efficiency of video use.
To solve the above technical problem, the embodiments of the present disclosure disclose the following technical schemes:
According to a first aspect of the embodiments of the present disclosure, a video content detection method is provided, the method comprising:
collecting user feedback behaviour information according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video; and
marking the video segment in the current video that corresponds to the user feedback behaviour information.
Optionally, the feedback behaviour comprises:
sounds made by the user, expressions made by the user and/or actions made by the user.
Optionally, collecting user feedback behaviour information according to the video type of the current video comprises:
when the video type is comedy, collecting the user's laughter and/or smiling expression;
when the video type is tragedy, collecting the user's sobbing and/or sad expression;
when the video type is horror, collecting the user's screams, startled expression and/or eye-covering action.
Optionally, marking the video segment in the current video that corresponds to the user feedback behaviour information comprises:
recording the start and stop times of the video segment in the current video that corresponds to the user feedback behaviour information.
Optionally, the method further comprises:
obtaining the video type of the current video.
Optionally, the method further comprises:
after the video segment in the current video that corresponds to the user feedback behaviour information has been marked:
generating video recommendation information according to the marked video segment, for sending to other users.
Optionally, the method further comprises:
collecting user operation information, wherein the user operation information comprises operations the user performs on the current video while watching it; and
scoring the current video according to the number of operations in the user operation information.
Optionally, the operations comprise:
click-to-play operations on the current video, and/or drag operations on the playback progress of the current video.
According to a second aspect of the embodiments of the present disclosure, a video content detection device is provided, the device comprising:
a user feedback behaviour collecting unit, configured to collect user feedback behaviour information according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video; and
a marking unit, configured to mark the video segment in the current video that corresponds to the user feedback behaviour information.
Optionally, the user feedback behaviour collecting unit is configured to:
when the video type is comedy, collect the user's laughter and/or smiling expression;
when the video type is tragedy, collect the user's sobbing and/or sad expression;
when the video type is horror, collect the user's screams, startled expression and/or eye-covering action.
Optionally, the marking unit is configured to:
record the start and stop times of the video segment in the current video that corresponds to the user feedback behaviour information.
Optionally, the device further comprises:
a type information acquiring unit, configured to obtain the video type of the current video.
Optionally, the device further comprises:
a recommendation information generating unit, configured to generate video recommendation information according to the marked video segment.
Optionally, the device further comprises:
a user operation collecting unit, configured to collect user operation information, wherein the user operation information comprises operations the user performs on the current video while watching it; and
a scoring unit, configured to score the current video according to the number of operations in the user operation information.
According to a third aspect of the embodiments of the present disclosure, a video content detection device is provided, the device comprising a processor and a memory for storing processor-executable instructions;
the processor is configured to:
collect user feedback behaviour information according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video; and
mark the video segment in the current video that corresponds to the user feedback behaviour information.
The technical schemes provided by the embodiments of the present disclosure can have the following beneficial effects:
In the disclosed embodiments, the feedback behaviour a user exhibits while watching a video is used as the key information for locating the video segments containing the video's core content. In implementation, the user feedback behaviours to collect are chosen according to the video type; when a matching feedback behaviour is captured, the video segment corresponding to that behaviour is taken to belong to the core content and is marked. In this way the positions of the video's core content are detected, which helps improve the efficiency of video use; moreover, because only the feedback behaviours relevant to the video type are collected, mistaken marking is avoided, saving the user time in screening and browsing videos.
It should be understood that the above general description and the detailed description below are merely exemplary and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings here are incorporated into and form part of this specification; they illustrate embodiments consistent with the present disclosure and serve, together with the specification, to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a video content detection method according to an exemplary embodiment;
Fig. 2 is a flowchart of a video content detection method according to an exemplary embodiment;
Fig. 3 is a schematic diagram of capturing a user's smile in an exemplary embodiment;
Fig. 4 is a schematic diagram of capturing a user's laughter in an exemplary embodiment;
Fig. 5 is a schematic diagram of capturing a user's facial expression and voice in an exemplary embodiment;
Fig. 6 is a flowchart of a video content detection method according to an exemplary embodiment;
Fig. 7 is a schematic diagram of a click-to-play operation in an exemplary embodiment;
Fig. 8 is a schematic diagram of a drag operation on the playback progress bar in an exemplary embodiment;
Fig. 9 is a schematic diagram of a video content detection device according to an exemplary embodiment;
Fig. 10 is a schematic diagram of a video content detection device according to an exemplary embodiment;
Fig. 11 is a schematic diagram of a video content detection device according to an exemplary embodiment;
Fig. 12 is a schematic diagram of a video content detection device according to an exemplary embodiment;
Fig. 13 is a block diagram of a device for video content detection according to an exemplary embodiment.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a video content detection method according to an exemplary embodiment. The method can be used in a terminal such as a mobile phone, tablet computer, MP3/MP4 player, notebook computer or desktop computer. Referring to Fig. 1, the method may comprise:
In step S101, user feedback behaviour information is collected according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video.
While watching a video, a user automatically reacts, or in other words feeds back certain behaviours, on seeing an exciting part. The present disclosure builds on this idea: by collecting user feedback behaviour information, it infers whether the current video segment belongs to the highlight content.
In this embodiment or some other embodiments of the present disclosure, the feedback behaviour may comprise: sounds made by the user, expressions made by the user and/or actions made by the user.
Any existing means can be used to capture the sounds the user makes, the expressions the user shows and/or the actions the user performs while watching the video. For example, a microphone plus speech-recognition software can capture the user's sounds (such as laughter); a camera plus face-recognition software can capture the user's expressions (such as a smile); and a camera plus image-recognition software can capture the user's actions (such as covering the eyes). The capture equipment may be integrated with the terminal device playing the current video, for instance by directly using a mobile phone's front camera and microphone, or it may be standalone equipment, such as an external camera and microphone attached to a desktop computer, depending on the concrete scenario. This embodiment places no limit on the means used to capture user feedback behaviour, and none of the usable capture means departs from the spirit and scope of the present disclosure.
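The specification deliberately leaves the recognition technology open, so as a hedged sketch the recognisers below are stand-in callables; in practice each could wrap a microphone-plus-speech-recognition pipeline or a camera-plus-face-detection one. The event dictionary layout and behaviour labels are assumptions for illustration, not taken from the patent:

```python
def detect_feedback(captured_events, recognizers):
    """Run every recogniser over the captured audio/video events and
    return the set of feedback behaviours that were observed."""
    observed = set()
    for event in captured_events:
        for behaviour, recognize in recognizers.items():
            if recognize(event):
                observed.add(behaviour)
    return observed

# Trivial stand-in recognisers (illustrative only):
recognizers = {
    "laughter": lambda e: e.get("audio") == "laugh",
    "smile":    lambda e: e.get("face") == "smile",
}

events = [{"audio": "laugh"}, {"face": "neutral"}]
print(detect_feedback(events, recognizers))
```

Because the recognisers are plain callables, swapping in a real speech- or face-recognition backend only changes the dictionary values, not the detection loop.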
For different types of video, the behaviours users feed back generally differ as well, so collection can be targeted to the video type. For example:
In this embodiment or some other embodiments of the present disclosure, collecting user feedback behaviour information according to the video type of the current video may comprise:
when the video type is comedy, collecting the user's laughter and/or smiling expression;
when the video type is tragedy, collecting the user's sobbing and/or sad expression;
when the video type is horror, collecting the user's screams, startled expression and/or eye-covering action.
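The per-type rules above amount to a small lookup table; in a sketch (the key and label strings are illustrative, not from the patent text):

```python
# Hypothetical mapping from video type to the feedback signals worth collecting.
FEEDBACK_SIGNALS = {
    "comedy":  {"laughter", "smile"},
    "tragedy": {"sobbing", "sad_expression"},
    "horror":  {"scream", "startled_expression", "eye_covering"},
}

def signals_to_collect(video_type):
    """Return the feedback behaviours to watch for, given the video type;
    an unknown type yields nothing, so no collection is attempted."""
    return FEEDBACK_SIGNALS.get(video_type, set())
```

Restricting collection to the type-appropriate signals is also what lets the scheme avoid mistaken marks, since a laugh during a tragedy would simply not be collected.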
In step S102, the video segment in the current video that corresponds to the user feedback behaviour information is marked.
The purpose of detecting a video is precisely to find the segments of that video containing the highlight content; once a highlight's position has been found through the collected user feedback behaviour information, it is marked as the result of the detection.
Because a video segment can usually be located by the times at which it starts and ends:
In this embodiment or some other embodiments of the present disclosure, marking the video segment in the current video that corresponds to the user feedback behaviour information comprises:
recording the start and stop times of the video segment in the current video that corresponds to the user feedback behaviour information.
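A minimal record for such a mark might look like the following sketch; the field names and the append-to-a-log shape are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class MarkedSegment:
    video_name: str
    user_id: str
    start_s: float  # start time of the segment, in seconds into the video
    stop_s: float   # stop time of the segment

def mark(log, video_name, user_id, start_s, stop_s):
    """Append a start/stop-time mark for the segment during which
    feedback behaviour was captured."""
    log.append(MarkedSegment(video_name, user_id, start_s, stop_s))

log = []
mark(log, "Flirting Scholar", "user_a", 312.0, 340.5)
```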
In some other embodiments of the present disclosure, the video segment can also be marked in other ways, for instance by recording its start and end frames; the present disclosure is not limited in this respect.
For one video, multiple highlight segments can be marked, and these highlights can come from a single user or from multiple users. An example is shown in Table 1:
Table 1
Referring to Table 1, for example: one day user user_a watched the video "Flirting Scholar" on one terminal, and the system marked three highlight segments according to user_a's feedback behaviour; on another day user user_b also watched this video, on another terminal, and the system marked another highlight segment according to user_b's feedback behaviour. These four video segments can be recorded together in Table 1 as the detected highlights of "Flirting Scholar".
In addition, in this embodiment or some other embodiments of the present disclosure, the method may further comprise:
obtaining the video type of the current video.
The type of the video can be obtained from the grouping the video is filed under, from attribute information carried by the video file, and so on; this embodiment does not elaborate further.
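Either source just mentioned, the grouping the video is filed under or attribute information carried by the file, can be read with a small helper; the dictionary layout and key names here are assumptions for illustration:

```python
def get_video_type(video):
    """Prefer a genre attribute carried in the file's own metadata,
    falling back to the category/group the video is filed under."""
    return video.get("metadata", {}).get("genre") or video.get("category")

movie = {"name": "Flirting Scholar", "category": "comedy", "metadata": {}}
print(get_video_type(movie))  # falls back to the grouping: "comedy"
```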
In addition, in this embodiment or some other embodiments of the present disclosure, the method may further comprise:
after the video segment in the current video that corresponds to the user feedback behaviour information has been marked:
generating video recommendation information according to the marked video segment, for sending to other users.
For example, following the records of Table 1, the system can package the start and end times of the four highlight segments of "Flirting Scholar", or even those four segments themselves, into recommendation information and send it to other users.
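Packaging the marked start and end times into recommendation information could be as simple as the following sketch; the message format and segment layout are invented for illustration:

```python
def build_recommendation(video_name, segments):
    """Turn the marked highlight segments of a video into a short
    recommendation string that can be sent to other users."""
    spans = ", ".join(f"{s['start_s']:.0f}s-{s['stop_s']:.0f}s" for s in segments)
    return f"Highlights of '{video_name}': {spans}"

segments = [{"start_s": 312, "stop_s": 340}, {"start_s": 1201, "stop_s": 1260}]
print(build_recommendation("Flirting Scholar", segments))
```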
In the disclosed embodiments, the feedback behaviour a user exhibits while watching a video is used as the key information for locating the video segments containing the video's core content. In implementation, the user feedback behaviours to collect are chosen according to the video type; when a matching feedback behaviour is captured, the video segment corresponding to that behaviour is taken to belong to the core content and is marked. In this way the positions of the video's core content are detected, which helps improve the efficiency of video use and saves the user time in screening and browsing videos.
The embodiments of the present disclosure are further described below in combination with concrete scenarios.
Fig. 2 is a flowchart of a video content detection method according to an exemplary embodiment.
In step S201, it is detected that the user has opened a video, and the capture device is switched on.
In step S202, the capture device works continuously, ready to capture the user's feedback behaviour at any time.
In step S203, when the capture device captures a feedback behaviour from the user, the system records the video segment corresponding to that feedback behaviour.
For example, referring to Fig. 3, when the user watches a video on a notebook computer 300, the camera 301 built into the notebook computer 300 can capture the user's smile.
As another example, referring to Fig. 4, when the user watches a video on the notebook computer 300, the microphone 302 built into the notebook computer 300 can also capture the user's laughter.
As another example, referring to Fig. 5, when the user watches a video on a mobile phone 500, the front camera 501 at the top of the mobile phone 500 and the microphone 502 at the bottom of the mobile phone 500 can simultaneously capture the user's facial expression and the sounds the user makes.
In step S204, it is judged whether the current video has finished playing; if so, the flow proceeds to step S205; if not, it returns to S202.
In step S205, the start and end times of the recorded video segments are stored together with information such as the corresponding video name and user ID.
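The S201-S205 loop can be sketched end to end. The player and collector below are stand-in stubs, since the patent leaves the playback and capture hardware unspecified; one scripted event per playback tick is an assumption made purely for the sketch:

```python
class StubPlayer:
    """Stand-in for playback: the position advances one tick per loop pass."""
    def __init__(self, length):
        self.t, self.length = 0, length
    def finished(self):
        return self.t >= self.length
    def position(self):
        return self.t
    def advance(self):
        self.t += 1

class StubCollector:
    """Stand-in for the capture device: replays one scripted event per tick."""
    def __init__(self, events):
        self.events = list(events)
    def start(self):  # S201: switched on when the user opens the video
        pass
    def stop(self):
        pass
    def poll(self):
        return self.events.pop(0) if self.events else None

def detect_highlights(player, collector):
    marks = []
    collector.start()                   # S201
    while not player.finished():        # S204: loop until playback ends
        feedback = collector.poll()     # S202: device captures continuously
        if feedback:                    # S203: record the matching position
            marks.append((player.position(), feedback))
        player.advance()
    collector.stop()
    return marks                        # S205: store with video name / user ID

marks = detect_highlights(StubPlayer(3), StubCollector([None, "laugh", None]))
```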
In addition, in this embodiment or some other embodiments of the present disclosure, a step of scoring the current video can further be added. Referring to Fig. 6, the method may further comprise:
In step S601, user operation information is collected, wherein the user operation information comprises operations the user performs on the current video while watching it.
In step S602, the current video is scored according to the number of operations in the user operation information.
The operations a user performs on the current video are a different kind of behaviour from the user's feedback behaviour. Feedback behaviour can be regarded as passive, a spontaneous reaction the user makes because of the video content being watched, and so reflects the user's approval of the current video indirectly. Operations on the current video, by contrast, can be regarded as active behaviour and embody the user's approval of the current video directly. Based on this idea, this embodiment evaluates how good the current video is overall by examining the number of operations the user performs on it.
As an example, the operations may comprise:
click-to-play operations on the current video, and/or drag operations on the playback progress of the current video.
A video playback interface usually has a play button, and the user's click-to-play operations on it are illustrated in Fig. 7. If the user clicks to play the current video many times, say seven times within a month, this reflects to some extent that the user likes the video, so the current video can be given a higher score.
A video playback interface also usually has a progress-bar control, and the user's drag operations on the playback progress of the current video are illustrated in Fig. 8. If the user has dragged the progress of the current video many times, this reflects to some extent that the video contains rather boring content, so it can be given a lower score.
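A toy scoring rule following this idea can be sketched as below; the base value and weights are invented, since the patent only says the score follows the number of operations:

```python
def score_video(operations, base=3.0):
    """Raise the score for repeated click-to-play operations and lower it
    for repeated progress-bar drags (illustrative weights)."""
    plays = sum(1 for op in operations if op == "click_play")
    drags = sum(1 for op in operations if op == "drag_progress")
    return base + 0.5 * plays - 0.5 * drags

print(score_video(["click_play"] * 7))     # user re-watches often: 6.5
print(score_video(["drag_progress"] * 4))  # user skips a lot: 1.0
```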
Fig. 9 is a schematic diagram of a video content detection device according to an exemplary embodiment. The device can be used in a terminal. Referring to Fig. 9, the device 900 may comprise:
a user feedback behaviour collecting unit 901, configured to collect user feedback behaviour information according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video; and
a marking unit 902, configured to mark the video segment in the current video that corresponds to the user feedback behaviour information.
Optionally, the user feedback behaviour collecting unit 901 may further be configured to:
when the video type is comedy, collect the user's laughter and/or smiling expression;
when the video type is tragedy, collect the user's sobbing and/or sad expression;
when the video type is horror, collect the user's screams, startled expression and/or eye-covering action.
Optionally, the marking unit 902 may further be configured to:
record the start and stop times of the video segment in the current video that corresponds to the user feedback behaviour information.
Referring to Fig. 10, optionally, the device 900 may further comprise:
a type information acquiring unit 903, configured to obtain the video type of the current video.
Referring to Fig. 11, optionally, the device 900 may further comprise:
a recommendation information generating unit 904, configured to generate video recommendation information according to the marked video segment.
Referring to Fig. 12, optionally, the device 900 may further comprise:
a user operation collecting unit 905, configured to collect user operation information, wherein the user operation information comprises operations the user performs on the current video while watching it; and
a scoring unit 906, configured to score the current video according to the number of operations in the user operation information.
As for the devices of the above embodiments, the concrete manner in which each unit performs its operations has been described in detail in the embodiments of the method and is not elaborated here.
An embodiment of the present disclosure further provides a video content detection device, the device comprising a processor and a memory for storing processor-executable instructions;
the processor is configured to:
collect user feedback behaviour information according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video; and
mark the video segment in the current video that corresponds to the user feedback behaviour information.
An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by the processor of a mobile terminal, they enable the mobile terminal to perform a video content detection method, the method comprising:
collecting user feedback behaviour information according to the video type of the current video, wherein the user feedback behaviour information comprises feedback behaviours the user exhibits while watching the current video; and
marking the video segment in the current video that corresponds to the user feedback behaviour information.
Fig. 13 is a block diagram of a device 1800 for video content detection according to an exemplary embodiment. For example, the device 1800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 13, the device 1800 may comprise one or more of the following components: a processing component 1802, a memory 1804, a power component 1806, a multimedia component 1808, an audio component 1810, an input/output (I/O) interface 1812, a sensor component 1814 and a communication component 1816.
The processing component 1802 generally controls the overall operation of the device 1800, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 1802 may comprise one or more processors 1820 to execute instructions so as to complete all or part of the steps of the method embodiments above. In addition, the processing component 1802 may comprise one or more modules to facilitate interaction between the processing component 1802 and the other components; for example, the processing component 1802 may comprise a multimedia module to facilitate interaction between the multimedia component 1808 and the processing component 1802.
The memory 1804 is configured to store various types of data to support operation of the device 1800. Examples of such data include instructions for any application or method operated on the device 1800, contact data, phonebook data, messages, pictures, video, and so on. The memory 1804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc.
The power component 1806 supplies power to the various components of the device 1800. The power component 1806 may comprise a power-management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 1800.
The multimedia component 1808 comprises a screen that provides an output interface between the device 1800 and the user. In some embodiments the screen may comprise a liquid-crystal display (LCD) and a touch panel (TP). If the screen comprises a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel comprises one or more touch sensors to sense touches, swipes and gestures on the panel; the touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments the multimedia component 1808 comprises a front camera and/or a rear camera. When the device 1800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or rear camera can receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1810 is configured to output and/or input audio signals. For example, the audio component 1810 comprises a microphone (MIC) configured to receive external audio signals when the device 1800 is in an operating mode, such as a call mode, a recording mode or a speech-recognition mode. The received audio signals may be further stored in the memory 1804 or sent via the communication component 1816. In some embodiments the audio component 1810 also comprises a speaker for outputting audio signals.
The I/O interface 1812 provides an interface between the processing component 1802 and peripheral interface modules, such as a keyboard, a click wheel or buttons. These buttons may include, but are not limited to, a home button, volume buttons, a start button and a lock button.
The sensor component 1814 comprises one or more sensors for providing state assessments of various aspects of the device 1800. For example, the sensor component 1814 can detect the open/closed state of the device 1800 and the relative positioning of components, such as the display and keypad of the device 1800; it can also detect a change in the position of the device 1800 or one of its components, the presence or absence of user contact with the device 1800, the orientation or acceleration/deceleration of the device 1800, and temperature changes of the device 1800. The sensor component 1814 may comprise a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also comprise a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments the sensor component 1814 may also comprise an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 1816 is configured to facilitate wired or wireless communication between the device 1800 and other equipment. The device 1800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1816 also comprises a near-field communication (NFC) module to facilitate short-range communication; the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In exemplary embodiments, the device 1800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above methods.
In exemplary embodiments, a non-transitory computer-readable storage medium comprising instructions is also provided, such as the memory 1804 comprising instructions, which are executable by the processor 1820 of the device 1800 to complete all or part of the steps of the method embodiments above. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art, at consideration instructions and after putting into practice invention disclosed herein, will easily expect other embodiment of the present disclosure.The disclosure is intended to contain any modification of the present disclosure, purposes or adaptations, and these modification, purposes or adaptations are followed general principle of the present disclosure and comprised the undocumented common practise in the art of the disclosure or conventional techniques means.Instructions and embodiment are only regarded as exemplary, and true scope of the present disclosure and spirit are pointed out by claim below.
It should be understood that the present disclosure is not limited to the precise construction described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
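As a concrete illustration of the method summarized in the abstract — collect the user feedback behaviors relevant to a given video type, then mark the video segments in which those behaviors occurred — the sketch below shows one possible realization. It is a minimal sketch under stated assumptions, not the patented implementation: the behavior-to-type mapping, the event names, and the fixed marking window are all illustrative choices made here for demonstration.

```python
# Minimal sketch: per video type, decide which feedback behaviors to
# collect, then mark (and merge) the segments around those behaviors.
# FEEDBACK_BY_TYPE, FeedbackEvent, and the 5-second window are
# illustrative assumptions, not taken from the patent text.

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical mapping: which feedback behaviors matter per video type.
FEEDBACK_BY_TYPE = {
    "comedy":  {"laugh", "replay"},
    "sports":  {"cheer", "replay"},
    "lecture": {"pause", "note"},
}

@dataclass
class FeedbackEvent:
    behavior: str    # e.g. "laugh", "replay"
    position: float  # playback position in seconds

def mark_segments(video_type: str,
                  events: List[FeedbackEvent],
                  window: float = 5.0) -> List[Tuple[float, float]]:
    """Return (start, end) segments around relevant feedback events."""
    relevant = FEEDBACK_BY_TYPE.get(video_type, set())
    segments: List[Tuple[float, float]] = []
    for ev in sorted(events, key=lambda e: e.position):
        if ev.behavior not in relevant:
            continue  # this behavior is not collected for this video type
        start = max(0.0, ev.position - window)
        end = ev.position + window
        # Merge with the previous segment if the two overlap.
        if segments and start <= segments[-1][1]:
            segments[-1] = (segments[-1][0], end)
        else:
            segments.append((start, end))
    return segments

events = [
    FeedbackEvent("laugh", 12.0),
    FeedbackEvent("pause", 30.0),   # ignored for comedy
    FeedbackEvent("laugh", 15.0),   # merges with the first segment
    FeedbackEvent("replay", 80.0),
]
print(mark_segments("comedy", events))  # → [(7.0, 20.0), (75.0, 85.0)]
```

The merge step reflects the intuition that clustered feedback (two laughs three seconds apart) points at one piece of core content, not two.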

Claims (15)

Application CN201410126946.2A | Priority date: 2014-03-31 | Filing date: 2014-03-31 | Title: Video content detecting method and device | Status: Pending | Publication: CN104951479A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410126946.2A | 2014-03-31 | 2014-03-31 | Video content detecting method and device


Publications (1)

Publication Number | Publication Date
CN104951479A (en) | 2015-09-30

Family

ID=54166142

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date
CN201410126946.2A | Pending | CN104951479A (en) | 2014-03-31 | 2014-03-31

Country Status (1)

Country | Link
CN (1) | CN104951479A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101420579A (en)* | 2007-10-22 | 2009-04-29 | 皇家飞利浦电子股份有限公司 | Method, apparatus and system for detecting exciting part
CN102693739A (en)* | 2011-03-24 | 2012-09-26 | 腾讯科技(深圳)有限公司 | Method and system for video clip generation
CN102957950A (en)* | 2012-07-23 | 2013-03-06 | 华东师范大学 | User implicit rating method for recommending video
CN103365936A (en)* | 2012-03-30 | 2013-10-23 | 财团法人资讯工业策进会 | Video recommendation system and method thereof
CN103594104A (en)* | 2012-08-15 | 2014-02-19 | 腾讯科技(深圳)有限公司 | Method and system for acquiring multimedia interest point, method and device for multimedia playing
CN103686223A (en)* | 2012-09-11 | 2014-03-26 | 上海聚力传媒技术有限公司 | Method and equipment for providing video access service according to user feedback information


Cited By (26)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11803345B2 (en) | 2014-12-19 | 2023-10-31 | Snap Inc. | Gallery of messages from individuals with a shared interest
US12236148B2 (en) | 2014-12-19 | 2025-02-25 | Snap Inc. | Gallery of messages from individuals with a shared interest
US12231437B2 (en) | 2015-03-18 | 2025-02-18 | Snap Inc. | Geo-fence authorization provisioning
US11902287B2 (en) | 2015-03-18 | 2024-02-13 | Snap Inc. | Geo-fence authorization provisioning
US11496544B2 (en)* | 2015-05-05 | 2022-11-08 | Snap Inc. | Story and sub-story navigation
CN106803987A (en)* | 2015-11-26 | 2017-06-06 | 腾讯科技(深圳)有限公司 | The acquisition methods of video data, device and system
CN105872765A (en)* | 2015-12-29 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Method, device and system for making video collection, and electronic device and server
CN105657470B (en)* | 2015-12-31 | 2018-09-28 | 深圳市海云天科技股份有限公司 | A kind of recording method of bit map type video tour and system
CN105657470A (en)* | 2015-12-31 | 2016-06-08 | 深圳市海云天科技股份有限公司 | Bitmap type video browsing recording method and system
CN105898374A (en)* | 2016-06-01 | 2016-08-24 | 北京微影时代科技有限公司 | Film highlight prompting method and system, and mobile terminal
CN107918482A (en)* | 2016-10-08 | 2018-04-17 | 天津锋时互动科技有限公司深圳分公司 | The method and system of overstimulation is avoided in immersion VR systems
CN107918482B (en)* | 2016-10-08 | 2023-12-12 | 深圳思蓝智创科技有限公司 | Method and system for avoiding overstimulation in immersive VR system
CN106507143A (en)* | 2016-10-21 | 2017-03-15 | 北京小米移动软件有限公司 | Video recommendation method and device
CN107071534A (en)* | 2017-03-17 | 2017-08-18 | 深圳市九洲电器有限公司 | A kind of user and the interactive method and system of set top box
CN107071534B (en)* | 2017-03-17 | 2019-12-10 | 深圳市九洲电器有限公司 | Method and system for interaction between user and set top box
US11265624B2 (en) | 2017-03-21 | 2022-03-01 | Huawei Technologies Co., Ltd. | Hot video clip extraction method, user equipment, and server
CN107105318B (en)* | 2017-03-21 | 2021-01-29 | 华为技术有限公司 | Video hotspot segment extraction method, user equipment and server
CN107105318A (en)* | 2017-03-21 | 2017-08-29 | 华为技术有限公司 | A kind of video hotspot fragment extracting method, user equipment and server
CN107122456A (en)* | 2017-04-26 | 2017-09-01 | 合信息技术(北京)有限公司 | The method and apparatus for showing video search result
CN107247733A (en)* | 2017-05-05 | 2017-10-13 | 中广热点云科技有限公司 | A kind of video segment viewing temperature analysis method and system
CN108210286A (en)* | 2017-09-26 | 2018-06-29 | 深圳市超级人生科技有限公司 | A kind of motion control method of massager, apparatus and system
CN109246467A (en)* | 2018-08-15 | 2019-01-18 | 上海蔚来汽车有限公司 | Label is to the method, apparatus of sharing video frequency, video camera and smart phone
CN110096613A (en)* | 2019-04-12 | 2019-08-06 | 北京奇艺世纪科技有限公司 | A kind of video recommendation method, device, electronic equipment and storage medium
CN111209436A (en)* | 2020-01-10 | 2020-05-29 | 上海摩象网络科技有限公司 | Method and device for shooting material mark and electronic equipment
CN113569093A (en)* | 2021-01-18 | 2021-10-29 | 腾讯科技(深圳)有限公司 | Video processing method, apparatus, computer equipment and storage medium
CN113569093B (en)* | 2021-01-18 | 2024-12-06 | 腾讯科技(深圳)有限公司 | Video processing method, device, computer equipment and storage medium


Legal Events

Code | Title
C06 | Publication
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication

Application publication date: 2015-09-30

