CN101026744A - Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method - Google Patents

Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method

Info

Publication number
CN101026744A
CN101026744A (application CNA200710096229XA)
Authority
CN
China
Prior art keywords
media
streaming
station
slice
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA200710096229XA
Other languages
Chinese (zh)
Other versions
CN100579208C (en)
Inventor
谢主中
陈俊楷
李继优
喻德
陶宏
彭宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ud Network Co ltd
Ut Starcom China Co ltd
Original Assignee
UTStarcom Telecom Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by UTStarcom Telecom Co Ltd
Priority to CN200710096229A (CN100579208C)
Publication of CN101026744A
Priority to PCT/CN2008/000466 (WO2008119235A1)
Application granted
Publication of CN100579208C
Legal status: Active (current)
Anticipated expiration

Abstract

Based on memory buffering of media content slices and on storing streaming media content on disk according to heat (popularity) statistics, together with heat-statistics-based intelligent distribution and user configuration, the invention realizes sharing of media content slices within a media station and between media stations. It provides streaming media services to as many terminal users as possible while reducing network traffic and the IO frequency of disk access as far as possible. The invention also discloses three heat-statistics-based memory buffering methods for the distributed streaming media distribution system, as well as scheduling and distribution methods that use the media content slice as the memory buffer unit within a media station and between media stations. The distributed streaming media distribution system and the scheduling and distribution methods increase the memory cache hit rate for media content and reduce the disk IO frequency, thereby prolonging disk service life and improving the reliability and stability of the system.

Description

Distributed streaming media distribution system and streaming media memory buffering and scheduling distribution method
Technical Field
The invention relates to the field of multimedia network communication, and in particular to a high-bit-rate transmission system for distributed streaming media.
Background
With the development of multimedia network communication technology, high-bit-rate multimedia streaming, especially high-bit-rate video streaming, has grown from serving thousands of concurrent users to millions of users. For example, high-bit-rate streaming services represented by IPTV have reached the stage of millions of users, and conventional distribution by centralized powerful machines or clustered machines can no longer meet such demands. To this end, Chinese patent publication No. CN1713721 proposed a "distributed multimedia streaming system and method and apparatus for media content distribution".
However, in the prior art, whether a clustered streaming server or a distributed streaming media distribution system is used, the memory of the basic streaming-service unit is limited. As a result, the hit rate of cached streaming media files, or of slices of streaming media files (a streaming media program, such as a film, a television program, or a piece of music, is divided into smaller segments called "slices"), is low and the IO access frequency of the disks is high; consequently the disk damage rate is high and the system maintenance cost is high.
Disclosure of Invention
Aiming at the defects of existing streaming media systems when providing streaming services, the invention provides a method for sharing and distributing streaming media slices in a distributed streaming media system, based on intelligent cache (Cache) management and scheduling driven by heat statistics.
The distributed streaming media distribution system is composed of a plurality of areas, one of which serves as the system headquarters. Each area comprises a home media station and at least one edge media station. The home media station stores streaming media content and network-copies and distributes the stored content to the edge media stations according to heat statistics. The edge media stations communicate with the home media station over the network and, based on user requests and heat statistics, store slices of the hottest streaming media content in memory buffers and on disk so as to provide streaming services.
The area serving as the system headquarters further includes: a media location register for recording the location information of the media content slices of the streaming media distribution system; a media asset manager for managing the media assets of the streaming media distribution system; and a content manager responsible for content management of the streaming media distribution system.
The edge media station comprises: a media director for receiving streaming media service requests sent from the outside and determining the position, within the edge media station, of the slices required by the streaming service; and at least one media engine for buffering the slices in memory or storing them on disk, providing streaming service with the slice as the service unit, switching the streaming service under the control of the media director, and realizing slice distribution and sharing with the home media station or other edge media stations.
The media director comprises: a stream service director for receiving streaming media service requests sent from the outside and for controlling and switching the streaming services of the media engines; a storage manager for managing the locations and information of the media content slices stored on the disks of all media engines in a media station; an intelligent cache manager for managing the locations and information of the media content slices buffered in all memories in the media station; and a DHT node manager, based on a distributed hash table (DHT), for publishing the information of the media content slices buffered in the media station and receiving the slice information published by other media stations.
The media engine comprises: a stream service unit for providing streaming service in units of media content slices and for performing streaming service and switching control in cooperation with the stream service director; a memory cache management unit for realizing local content cache management within the media engine and for reporting and updating the cached media content slice information to the intelligent cache manager and the DHT node manager; and a disk storage unit for storing the media content slices and forming cluster storage within the media station under the management of the storage manager.
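For illustration only, the following Python sketch models the media station, media director, and media engine described above as plain data structures; the class and field names (MediaStation, MediaEngine, disk_index, and so on) are assumptions made for exposition and are not identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set


@dataclass
class MediaEngine:
    """One media engine: a disk storage unit, a memory cache unit, and stream capacity."""
    engine_id: str
    disk_slices: Set[str] = field(default_factory=set)     # slices held by the disk storage unit
    cached_slices: Set[str] = field(default_factory=set)   # slices held by the memory cache unit
    active_streams: int = 0
    max_streams: int = 10000

    def can_serve(self) -> bool:
        return self.active_streams < self.max_streams


@dataclass
class MediaDirector:
    """Master media director: the stream service director, storage manager, intelligent
    cache manager and DHT node manager are reduced here to simple lookup tables."""
    disk_index: Dict[str, str] = field(default_factory=dict)      # storage manager: slice -> engine id
    cache_index: Dict[str, str] = field(default_factory=dict)     # intelligent cache manager: slice -> engine id
    dht_table: Dict[str, Set[str]] = field(default_factory=dict)  # DHT node manager: slice -> remote stations


@dataclass
class MediaStation:
    station_id: str
    director: MediaDirector
    engines: Dict[str, MediaEngine] = field(default_factory=dict)

    def locate_cached_slice(self, slice_id: str) -> Optional[MediaEngine]:
        """Ask the intelligent cache manager which engine buffers the slice, if any."""
        engine_id = self.director.cache_index.get(slice_id)
        return self.engines.get(engine_id) if engine_id else None
```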
The media content slices buffered in memory are selected based on heat statistics, and slices with relatively high cache heat are cached so as to improve the cache hit rate.
The media content slices buffered in the memory of each media engine in a media station realize service switching under the control of the media director so as to achieve shared slice service.
Between media stations, information about the memory-buffered media content slices is shared through DHT management, and the memory-buffered slices themselves are shared between media stations through copying.
Wherein each of the home media station and the edge media station is an independent cluster streaming server.
The media director is a pair of master-slave dual-backup media directors, and there are a plurality of media engines with identical functions; service load balancing among them is achieved under the control of the media director.
In the memory buffering method for streaming media in the distributed streaming media distribution system, the system comprises media stations, and the streaming media are buffered in memory within a media station based on heat statistics. The following three methods are included.
First, the memory buffer unit is a slice of the streaming media content, and the heat statistics are based on a mechanism of preset heat together with long-window heat statistics. The preset heat and long-window heat of all slices are compared and ranked, and the slices whose long-window heat ranking is above a preset threshold are retained in memory for buffering.
Second, the memory buffer unit is a slice of the streaming media content, and the heat statistics are based on the heat change frequency. The heat change frequencies of all slices within a certain time window are compared and ranked, and the top-ranked slices whose heat change frequency exceeds a preset threshold are retained in memory for buffering.
Third, the memory buffer unit is the data element, i.e., one of a plurality of disk operation units into which a slice is divided, and buffering is based on a least-recently-accessed aging policy over a certain time period combined with an intra-slice association mechanism. Each data element is ranked by its most recent access heat, and its use is predicted from its contextual relationship within the slice; data elements that rank high in heat, or that the intra-slice association predicts will be used soon, are retained in memory for buffering.
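For illustration only, a minimal Python sketch of how the first two buffering methods could rank slices, assuming a per-slice record with preset heat, long-window heat, and heat-change frequency; the field names, the additive combination of the two heat values, and the threshold handling are assumptions for exposition, not formulas taken from the disclosure.

```python
def select_slices_to_cache(slices, cache_capacity, rank_cutoff, freq_threshold):
    """Pick which slices to keep in the memory buffer.

    `slices` maps slice_id -> {'preset_heat', 'long_window_heat', 'heat_change_freq'}.
    """
    # Method 1 (highest priority): rank by preset heat plus long-window heat and
    # retain the slices ranked above the preset cutoff.
    by_long_term = sorted(
        slices,
        key=lambda s: slices[s]['preset_heat'] + slices[s]['long_window_heat'],
        reverse=True)
    keep = by_long_term[:min(rank_cutoff, cache_capacity)]

    # Method 2 (medium priority): among the remaining slices, retain those whose
    # heat-change frequency in the recent window exceeds the threshold, highest first.
    bursty = [s for s in slices
              if s not in keep and slices[s]['heat_change_freq'] > freq_threshold]
    bursty.sort(key=lambda s: slices[s]['heat_change_freq'], reverse=True)
    keep += bursty[:cache_capacity - len(keep)]
    return keep
```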
In the method for scheduling and managing streaming media in the distributed streaming media distribution system according to the invention, the system comprises a media station, and the media station comprises a pair of media directors and at least one media engine. The method includes: (1) a receiving step in which the media director receives, from a user, a streaming media request for streaming media content having a plurality of slices; (2) a query step of querying whether a slice of the streaming media content is present in a media engine of the media station; (3) a judging step of judging, when the slice is found in a media engine of the media station, whether that media engine has the capability to provide the streaming service; (4) a selecting step in which, if the media engine is judged to have the streaming service capability, the media director selects it as the streaming service engine; and (5) an executing step in which the selected media engine performs the streaming service.
In the executing step, when the streaming service of the slice is close to its end, the selected media engine notifies the media director to execute the querying step, the judging step, the selecting step, and the executing step for the next slice.
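For illustration only, a minimal Python sketch of the query, judging, selecting, and executing steps within one media station, assuming a cache index and per-engine capacity counters; the data layout and return format are assumptions for exposition.

```python
def schedule_slices(cache_index, free_slots, slice_ids):
    """cache_index maps slice_id -> engine buffering that slice; free_slots maps
    engine -> remaining streaming capacity. Returns a list of (slice, engine) pairs;
    engine is None when the slice must be handled by the copy or inter-station paths
    described later."""
    plan = []
    for slice_id in slice_ids:
        engine = cache_index.get(slice_id)                     # query step
        if engine is None or free_slots.get(engine, 0) <= 0:   # judging step
            plan.append((slice_id, None))
            continue
        free_slots[engine] -= 1                                # selecting step
        # Executing step: the chosen engine streams this slice and, near the end of the
        # slice, notifies the media director so the next slice is scheduled in turn.
        plan.append((slice_id, engine))
    return plan

# Example: slice1 is cached on Ba1, slice2 on Ba2, but Ba2 has no spare capacity.
print(schedule_slices({"slice1": "Ba1", "slice2": "Ba2"},
                      {"Ba1": 3, "Ba2": 0}, ["slice1", "slice2"]))
# [('slice1', 'Ba1'), ('slice2', None)]
```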
In the method for scheduling and distributing streaming media in the distributed streaming media distribution system of the invention, each media station comprises a pair of media directors and at least one media engine. The method comprises: (1) a selection step of selecting a target media station for the streaming service of streaming media content having a plurality of slices; (2) a determination step in which the DHT node in the media director confirms one or more source media stations where a slice of the streaming media content is located; (3) a request step of selecting, from the one or more source media stations and according to the DHT table results and routing position information, the media station with the shortest distance to the target media station, and sending it a copy request; (4) a receiving step in which that media station receives the copy request from the target media station, provided its media engine has streaming media service capability; (5) a copy step of copying the slice to a media engine of the target media station; and (6) an executing step in which the media engine of the target media station performs the streaming service.
In the executing step, when the streaming service of the slice is close to its end, the media engine of the target media station notifies the media director to execute the determining step, the requesting step, the receiving step, the copying step, and the executing step for the next slice.
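For illustration only, a minimal Python sketch of the determination and request steps, assuming the DHT lookup results and routing costs are available as simple maps; the map layouts are assumptions for exposition.

```python
def choose_copy_source(dht_sources, route_cost, target_station, slice_id):
    """dht_sources: slice_id -> set of stations whose DHT tables advertise the slice;
    route_cost: (source, target) -> routing distance. Returns the source station that
    should receive the copy request, or None when no other station buffers the slice."""
    sources = dht_sources.get(slice_id, set()) - {target_station}
    if not sources:
        return None
    # Request step: pick the source with the shortest distance to the target station.
    return min(sources, key=lambda s: route_cost.get((s, target_station), float("inf")))

# Example: edge station a2 needs slice1, which stations a1 and b1 both buffer.
source = choose_copy_source({"slice1": {"a1", "b1"}},
                            {("a1", "a2"): 1, ("b1", "a2"): 3}, "a2", "slice1")
print(source)  # a1: the copy request goes to the nearest source
```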
The distributed streaming media system and the methods for caching, scheduling, and distributing media content can greatly improve the hit rate of memory-buffered streaming media file slices and effectively reduce the frequency of disk IO access, thereby prolonging disk service life and ensuring the reliability and stability of the system.
Drawings
The various aspects of the present invention will become more apparent to the reader after reading the detailed description of the invention with reference to the attached drawings. Wherein,
FIG. 1 shows a schematic diagram of a distributed streaming media distribution system of the present invention;
FIG. 2 is a schematic diagram showing the use of an intelligent Cache manager and streaming service switching control within a media station in accordance with the present invention;
FIG. 3 is a schematic diagram illustrating intelligent Cache management based on DHT node topology in the distributed streaming media distribution system according to the present invention;
FIG. 4 is a schematic diagram illustrating sharing of Cache slices and copy among media stations in the distributed streaming media distribution system according to the present invention; and
FIG. 5 is a schematic flow chart illustrating that the media station of the present invention performs streaming service by using a Cache integrated management and scheduling method.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a distributed streaming media distribution system based on a network topology architecture, which can implement multi-level streaming media distribution and service.
As shown in FIG. 1, the distributed streaming media distribution system is composed of a plurality of areas (i.e., area 1 to area n), one of which can serve as the system headquarters; here, area 1 is the system headquarters.
Each area includes a home media station 20 that communicates over the network with a plurality of edge media stations 30a1~30an. The home media station 20 and the edge media stations 30a1~30an each comprise a pair of media directors (MD) A and a plurality of media engines (ME) Ba1~Ban, i.e., media engine Ba1, media engine Ba2, and so on up to media engine Ban. Each edge media station 30a1~30an is an independent cluster streaming server.
Here, area 1 is the system headquarters, and compared with the other areas it additionally includes: a media location register (MLR) 11 for recording the location information of the media content slices of the streaming media distribution system; a media asset manager (MAM) 12 for managing the media assets of the streaming media distribution system; and a content manager (CM) 13 responsible for content management of the streaming media distribution system.
The media director A receives streaming media service requests sent from the outside and, through the intelligent cache manager (described in detail below), queries whether the slices of the requested streaming media file are present in the edge media stations 30a1~30an and, if so, determines their location. The media engines Ba1~Ban cache and store the slices for the streaming service and realize slice switching and control according to each media engine's streaming service capability.
The media director A is preferably a pair of master-slave dual-backup media directors, so that when the master media director fails, the slave media director can seamlessly take over the service and the users, guaranteeing system reliability. The media engines Ba1~Ban form a load-balanced group of functionally identical media engines: under the scheduling of the media director A, the load of each media engine Ba1~Ban is kept comparable, avoiding the situation where some media engines are overloaded while others sit idle, so as to achieve a balancing effect.
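For illustration only, a minimal Python sketch of the load-balancing decision, assuming the media director tracks the number of active streams per media engine; the names are illustrative.

```python
def pick_engine(active_streams, candidates=None):
    """Among functionally identical media engines, route the next stream to the
    least-loaded one. active_streams maps engine id -> current number of streams."""
    pool = candidates if candidates is not None else active_streams.keys()
    return min(pool, key=lambda engine: active_streams[engine])

# Example: three identical engines; the next stream is routed to Ba2.
print(pick_engine({"Ba1": 120, "Ba2": 85, "Ba3": 97}))  # Ba2
```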
FIG. 2 is a schematic diagram of the control flow of the present invention using the intelligent Cache manager and the stream service switching in the media station.
As shown in FIG. 2, each edge media station 30a1~30an uses its media engines Ba1~Ban to cache a plurality of slices of hot streaming media files based on heat statistics, and can provide up to tens of thousands of streaming services for its access area.
More specifically, the media director A includes: a stream service director 43 for receiving streaming service requests sent from the outside and for controlling and switching the streaming services of the media engines; a storage manager 44 that manages the locations and information of the media content slices stored on disk in all media engines within a media station; an intelligent cache manager 41 for managing the locations and information of the media content slices cached in all memories within the media station; and a DHT node manager 42, based on a distributed hash table (DHT), for publishing the information of the memory-buffered media content slices in the media station and receiving the slice information published by other media stations.
Each of the media engines Ba1~Ban contains a disk storage unit (denoted Disk in FIG. 2) Da1~Dam, a Cache unit (denoted Cache in FIG. 2) Ca1~Cam, and a stream service unit La1~Lam. The Cache units Ca1~Cam implement local content buffering management within a media engine and report and update the buffered media content slice information to the intelligent cache manager 41 and the DHT node manager 42. The stream service units La1~Lam provide streaming service in units of media content slices and perform streaming service and switching control in cooperation with the stream service director. The number of media engines Ba1~Ban is highly expandable and can be increased to more than 100 according to the user scale of the area.
It should be noted that, within each edge media station 30a1~30an, the intelligent Cache manager 41 performs Cache management based on heat statistics and schedules the Cache units Ca1~Cam in the media engines so that the Cache units Ca1~Cam are shared. Combined with the stream service switching and control mechanism among the media engines Ba1~Ban and with distributed storage management, this realizes the cluster streaming service within the edge media station 30a1~30an.
The media director A and the intelligent Cache manager 41 are both implemented as software modules, as are the media engines Ba1~Ban.
The DHT (distributed hash table) is a common data indexing method in which the index is distributed to different places: each DHT node maintains a DHT table and receives the information published by its adjacent DHT nodes, so that the nodes form a network that shares information.
On the other hand, Cache management and scheduling based on heat statistics include the following three modes. (1) Caching the hottest streaming media files based on preset heat and long-window heat statistics. Based on the heat preset when a program is uploaded and on the long-window heat statistics, the hottest streaming media files are cached, in slices, across the media engines of a media station. This mode has the highest Cache priority of the three, and the files concerned are mainly the latest popular movies and streaming programs with good ratings whose popularity lasts a long time. (2) Replacing the Cache with the latest, hottest streaming media file slices based on the heat change frequency. This mode quickly captures the slices of the currently hottest streaming media files according to changes in heat frequency; it has medium Cache priority, and the files concerned are mainly live broadcasts of sports events or VOD programs that suddenly become hot because of breaking events. (3) Cache management and scheduling for streaming media files whose heat statistics and heat frequency reach neither mode (1) nor mode (2). This mode adopts a least-recently-accessed aging policy in units of data elements (a slice of a streaming media file is divided into a plurality of disk operation units called data elements) together with an intra-slice association mechanism, so that data elements with intra-slice associations and high recent access frequency are buffered and retained as far as possible, while data elements without associations and with low recent access frequency are preferentially evicted.
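For illustration only, a minimal Python sketch of mode (3), assuming data elements are identified by a (slice, index) pair and that the intra-slice association is approximated by keeping the element that immediately follows a retained element; both assumptions are simplifications for exposition.

```python
def refresh_data_element_cache(last_access, keep_count):
    """last_access maps (slice_id, element_index) -> last access timestamp.
    Returns the set of data elements to retain in the memory buffer."""
    # Least-recently-accessed aging: the most recently accessed elements survive.
    ranked = sorted(last_access, key=lambda key: last_access[key], reverse=True)
    keep = set(ranked[:keep_count])

    # Intra-slice association: the next element of a retained element is predicted to
    # be needed soon (sequential playback), so it is retained as well.
    for slice_id, index in list(keep):
        successor = (slice_id, index + 1)
        if successor in last_access:
            keep.add(successor)
    return keep

# Example: elements 0-2 of slice "f1" with one LRU slot;
# keeps ('f1', 1) and its predicted successor ('f1', 2).
print(refresh_data_element_cache({("f1", 0): 10.0, ("f1", 1): 25.0, ("f1", 2): 7.0}, 1))
```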
Referring to FIG. 2, assume that a user requests streaming service for a specified streaming media file, that the file is divided into m slices, and that the m slices have been distributed, through Cache allocation and management, across the media engines Ba1~Ban of the media station. The working principle of the intelligent Cache manager 41 and of stream service switching and control within the media station is as follows. First, the media director A of the media station receives the streaming service request from the user and queries the intelligent Cache manager 41 as to whether slice 1 of the streaming media file is present in one of the media engines Ba1~Ban of the media station. Second, the intelligent Cache manager 41 determines that slice 1 is in media engine Ba1 and, since media engine Ba1 has the capability to provide the streaming service, the media director A selects media engine Ba1 to execute streaming service 1. Third, when the streaming service of slice 1 is nearing its end, media engine Ba1 reports to the media director A that the streaming service of slice 1 is about to end; the media director A queries, through the intelligent Cache manager 41, that the next slice (slice 2) is in media engine Ba2 and, since media engine Ba2 has the capability to provide the streaming service, returns this information to media engine Ba1. Finally, upon ending the streaming service of slice 1, media engine Ba1 immediately switches to media engine Ba2, and media engine Ba2 provides streaming service 2 to the user using the same virtual IP address and port as media engine Ba1. The subsequent slice streaming services (up to streaming service m) proceed by switching in sequence in the same way. When a streaming media file with m slices is served (streaming services 1 to m), the user's set-top box does not need to participate in the switching.
Fig. 3 shows a schematic diagram of intelligent Cache management based on DHT node topology in the distributed streaming media distribution system of the present invention.
As shown in FIG. 3, in the distributed streaming media distribution system, the home media station a, the home media station b, and the home media station c can communicate directly with one another. More specifically, home media station a includes edge media station 1, edge media station 2, and edge media station 3; home media station b includes edge media station 4, edge media station 5, and edge media station 6; and home media station c includes edge media station 7, edge media station 8, and edge media station 9.
As can be seen from the schematic diagram of FIG. 2 showing the use of the intelligent Cache manager 41 and streaming service switching control within a media station, the information contained in the DHT node of home media station a is shared with the DHT nodes of edge media stations 1, 2, and 3; the information contained in the DHT node of home media station b is shared with the DHT nodes of edge media stations 4, 5, and 6; and the information contained in the DHT node of home media station c is shared with the DHT nodes of edge media stations 7, 8, and 9.
FIG. 3 is now described in detail through the DHT-node-based process of adding and deleting a slice of a streaming media file. When the DHT node in edge media station 1 needs to publish information about content added to or deleted from its Cache, it publishes the information only to the home media station a to which it belongs. Home media station a then publishes the Cache add/delete information to edge media stations 2 and 3 (but not back to edge media station 1, the publication source) and, at the same time, to the home media stations b and c directly connected to it. After receiving the information published by home media station a, edge media stations 2 and 3 stop forwarding it, while home media stations b and c continue to publish it to their respective subordinate edge media stations: home media station b publishes it to edge media stations 4, 5, and 6, and home media station c publishes it to edge media stations 7, 8, and 9. To avoid repeated distribution in a loop, home media stations b and c do not forward to each other the information published by home media station a; that is, if the publication source is neither the home media station itself nor one of its own edge media stations, the home media station does not forward the information to other home media stations. The Cache content information sharing among the media stations shown in FIG. 3 is therefore highly flexible and expandable, requires no searching when the shared information is used, and avoids the bottleneck of traditional centralized management.
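For illustration only, a minimal Python sketch of this publication rule, assuming the station topology is available as three simple maps; the helper names are assumptions for exposition.

```python
def propagate_cache_update(origin, home_of, edges_of, peers_of):
    """home_of maps an edge station to its home station; edges_of maps a home station
    to its edge stations; peers_of maps a home station to its directly connected home
    stations. Returns the set of stations that receive the Cache add/delete
    information published by `origin`."""
    home = home_of.get(origin, origin)    # an edge station publishes only to its home station
    received = {home} - {origin}
    # The home station forwards to its other edge stations and to its peer home stations.
    received.update(e for e in edges_of[home] if e != origin)
    for peer in peers_of[home]:
        received.add(peer)
        # A peer home station forwards only to its own edge stations; it never
        # re-publishes to other home stations, which prevents forwarding loops.
        received.update(edges_of[peer])
    return received

# Example matching FIG. 3: edge station 1 publishes; a, 2, 3, b, c, and 4-9 receive.
topology_edges = {"a": {"1", "2", "3"}, "b": {"4", "5", "6"}, "c": {"7", "8", "9"}}
received = propagate_cache_update(
    origin="1",
    home_of={s: h for h, edges in topology_edges.items() for s in edges},
    edges_of=topology_edges,
    peers_of={"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}})
print(sorted(received))  # ['2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c']
```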
Fig. 4 shows a schematic diagram of a distributed streaming media distribution system of the present invention for sharing slices and implementing copy between media stations.
Referring to FIG. 4, home media station a is associated with edge media station a1, edge media station a2, and edge media station b1. Suppose the streaming media file for which the user requests streaming service consists of slice 1, slice 2, and slice m, with slice 1 cached in a media engine of edge media station a1, slice 2 cached in a media engine of edge media station b1, and slice m cached in a media engine of home media station a. When the streaming service is to be provided at edge media station a2, the following process is performed:
(1) the DHT node in the media director of edge media station a2 confirms the media engine where slice 1 of the streaming media file is located;
(2) according to the DHT table results and the routing position information, edge media station a1, which has the shortest distance to edge media station a2, is selected to receive the request to copy slice 1;
(3) edge media station a1 receives the copy request from edge media station a2, provided its media engine has copy service capability;
(4) slice 1 is copied into a media engine of edge media station a2;
(5) slice 1 is streamed by the media engine of edge media station a2;
(6) steps (1) to (5) above are performed for slice 2 and slice m in turn.
It should be noted that although slice m is cached in a media engine of home media station a rather than of edge media station a1 or b1, the process of copying the slice is exactly the same as for slice 1.
Fig. 5 is a schematic flow chart illustrating that the media station of the present invention performs streaming service by using a Cache integrated management and scheduling method.
As shown in FIG. 5, the edge media station combines the Cache management and scheduling methods for streaming service within a media station and among media stations to serve streaming media files. The specific implementation flow can be embodied by the following steps (a compact sketch of the whole flow is given after the steps):
(1) a media director of the edge media station receives a streaming media request of a user (step S500);
(2) the intelligent cache manager searches for the next slice of the streaming media file (step S502), the intelligent cache manager being within the media director of the edge media station;
(a) judging and determining whether the slice exists within the edge media station (step S504);
When the slice is present within the edge media station:
(b) if the slice is in a certain media engine of the edge media station, determining whether the media engine has the capability of performing streaming service (step S506);
(c) if the media engine has the capability of streaming service, designating the media engine caching the slice to perform streaming service (step S508);
(d) when the streaming service for the slice is about to end, returning to the intelligent cache manager (step S510) and executing step (2);
(e) if the media engine in step (b) does not have the capability of performing the streaming service, selecting a media engine that has streaming service capability and free cache space and copying the slice to it (step S512);
(f) designating the selected media engine for streaming service (step S514); and
(g) when the streaming service is about to end, returning to the intelligent cache manager (step S516) and performing step (2).
When the slice is not within the edge media station:
(i) querying the home media station and all edge media stations to which the home media station belongs based on the DHT node (step S518);
(ii) judging and determining whether the slice exists (step S520);
(iii) if the slice exists in neither the home media station nor any of its edge media stations, reading the slice from the storage system, selecting a media engine to cache the slice and perform the streaming service (step S526), and then returning to step (2);
(iv) if the slice exists in the home media station or the edge media station, a copy request is sent to the media station which caches the slice (step S522);
(v) judging whether a service permission of the opposite terminal is obtained before timeout (step S524);
(vi) if the service permission is obtained, executing the steps (e) - (g); and
(vii) if the service permission is not obtained, step (iii) is performed.
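For illustration only, a minimal Python sketch of the combined flow above, with the system components replaced by assumed callables; the function names and signatures are placeholders for exposition, not interfaces from the disclosure.

```python
def serve_slices(slice_ids, find_local_engine, has_capacity, copy_into_free_engine,
                 find_remote_station, request_copy, read_from_storage, stream):
    """Assumed callables: find_local_engine(slice) -> engine or None (S502/S504),
    has_capacity(engine) -> bool (S506), copy_into_free_engine(slice) -> engine with
    streaming capacity and cache space (S512), find_remote_station(slice) -> station
    or None via the DHT node (S518/S520), request_copy(station, slice) -> bool, i.e.
    service permission obtained before timeout (S522/S524), read_from_storage(slice)
    (S526), and stream(engine, slice) (S508/S514)."""
    for slice_id in slice_ids:                               # back to S502 after each slice (S510/S516)
        engine = find_local_engine(slice_id)
        if engine is not None:
            if not has_capacity(engine):
                engine = copy_into_free_engine(slice_id)     # step S512
            stream(engine, slice_id)                         # steps S508/S514
            continue
        station = find_remote_station(slice_id)
        if station is not None and request_copy(station, slice_id):
            # Copy the slice from the granting station into a local engine, then serve.
            stream(copy_into_free_engine(slice_id), slice_id)
            continue
        read_from_storage(slice_id)                          # step S526: fall back to the storage system
        stream(copy_into_free_engine(slice_id), slice_id)
```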
As described above, sharing Cache content information among the media stations avoids the inefficiency of information search under traditional centralized management, and Cache content itself is shared through network copying, which reduces disk IO access and prolongs the service life of the hard disks.
Hereinbefore, specific embodiments of the present invention are described with reference to the drawings. However, those skilled in the art will appreciate that various modifications and substitutions can be made to the specific embodiments of the present invention without departing from the spirit and scope of the invention. Such modifications and substitutions are intended to be included within the scope of the present invention as defined by the appended claims.

Claims (21)

CN200710096229A | 2007-03-30 | 2007-03-30 | Distributed streaming media distribution system and streaming media memory buffering and scheduling distribution method | Active | CN100579208C (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN200710096229A (CN100579208C) | 2007-03-30 | 2007-03-30 | Distributed streaming media distribution system and streaming media memory buffering and scheduling distribution method
PCT/CN2008/000466 (WO2008119235A1) | 2007-03-30 | 2008-03-10 | Distribution system for distributing stream media, memory buffer of stream media and distributing method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN200710096229A (CN100579208C) | 2007-03-30 | 2007-03-30 | Distributed streaming media distribution system and streaming media memory buffering and scheduling distribution method

Publications (2)

Publication Number | Publication Date
CN101026744A | 2007-08-29
CN100579208C | 2010-01-06

Family

ID=38744583

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN200710096229A (Active; CN100579208C) | Distributed streaming media distribution system and streaming media memory buffering and scheduling distribution method | 2007-03-30 | 2007-03-30

Country Status (2)

Country | Link
CN (1) | CN100579208C (en)
WO (1) | WO2008119235A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
GB2385683A (en) * | 2002-02-22 | 2003-08-27 | Thirdspace Living Ltd | Distribution system with content replication
CN1227592C (en) * | 2002-09-17 | 2005-11-16 | 华为技术有限公司 | Method for managing stream media data
US20050235047A1 (en) * | 2004-04-16 | 2005-10-20 | Qiang Li | Method and apparatus for a large scale distributed multimedia streaming system and its media content distribution
CN100579208C (en) * | 2007-03-30 | 2010-01-06 | Ut斯达康通讯有限公司 | Distributed streaming media distribution system and streaming media memory buffering and scheduling distribution method

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2008119235A1 (en) * | 2007-03-30 | 2008-10-09 | Utstarcom Telecom Co., Ltd. | Distribution system for distributing stream media, memory buffer of stream media and distributing method
CN101409823B (en) * | 2007-10-10 | 2012-04-25 | 华为技术有限公司 | Method, apparatus and system for implementing network personal video recorder
WO2009097716A1 (en) * | 2008-01-31 | 2009-08-13 | Huawei Technologies Co., Ltd. | A method of slicing media content, a method, device and system for providing media content
CN101287002B (en) * | 2008-05-21 | 2010-12-29 | 华中科技大学 | Method for enhancing amount of concurrent media flow of flow media server
CN101841553B (en) * | 2009-03-17 | 2014-03-12 | 日电(中国)有限公司 | Method, user node and server for requesting location information of resources on network
CN101841553A (en) * | 2009-03-17 | 2010-09-22 | 日电(中国)有限公司 | Method, user node and server for requesting location information of resources on network
CN101998173B (en) * | 2009-08-27 | 2012-11-07 | 华为技术有限公司 | Distributed media sharing play controller as well as media play control system and method
CN102123318A (en) * | 2010-12-17 | 2011-07-13 | 曙光信息产业(北京)有限公司 | IO acceleration method of IPTV application
CN102123318B (en) * | 2010-12-17 | 2014-04-23 | 曙光信息产业(北京)有限公司 | IO acceleration method of IPTV application
CN102333120A (en) * | 2011-09-29 | 2012-01-25 | 广东高新兴通信股份有限公司 | Flow storage system for load balance processing
CN102333120B (en) * | 2011-09-29 | 2014-05-21 | 高新兴科技集团股份有限公司 | Flow storage system for load balance processing
CN102647357A (en) * | 2012-04-20 | 2012-08-22 | 中兴通讯股份有限公司 | A method and device for processing content routing
WO2013155979A1 (en) * | 2012-04-20 | 2013-10-24 | 中兴通讯股份有限公司 | Method and device for processing content routing
CN106850817A (en) * | 2012-12-10 | 2017-06-13 | 北京奇虎科技有限公司 | A kind of download management equipment, method and data downloading system
CN103036967A (en) * | 2012-12-10 | 2013-04-10 | 北京奇虎科技有限公司 | A download management device, method and data download system
CN103051701B (en) * | 2012-12-17 | 2016-02-17 | 北京网康科技有限公司 | A kind of buffer memory access method and device
CN103051701A (en) * | 2012-12-17 | 2013-04-17 | 北京网康科技有限公司 | Cache admission method and system
CN103281383B (en) * | 2013-05-31 | 2016-03-23 | 重庆大学 | A kind of time sequence information recording method of Based on Distributed data source
CN103281383A (en) * | 2013-05-31 | 2013-09-04 | 重庆大学 | Timing sequence recording method for distributed-type data source
CN103905923A (en) * | 2014-03-20 | 2014-07-02 | 深圳市同洲电子股份有限公司 | Content caching method and device
CN104202650A (en) * | 2014-09-28 | 2014-12-10 | 西安诺瓦电子科技有限公司 | Streaming media broadcast system and method and LED display screen system
CN105207993A (en) * | 2015-08-17 | 2015-12-30 | 深圳市云宙多媒体技术有限公司 | Data access and scheduling method in CDN, and system
CN106708865A (en) * | 2015-11-16 | 2017-05-24 | 杭州华为数字技术有限公司 | Method and device for accessing window data in stream processing system
CN106708865B (en) * | 2015-11-16 | 2020-04-03 | 杭州华为数字技术有限公司 | Method and device for accessing window data in stream processing system
CN106648593A (en) * | 2016-09-29 | 2017-05-10 | 乐视控股(北京)有限公司 | Calendar checking method and device for terminal equipment
CN106604043A (en) * | 2016-12-30 | 2017-04-26 | Ut斯达康(深圳)技术有限公司 | Internet-based live broadcast method and live broadcast server
WO2018153237A1 (en) * | 2017-02-23 | 2018-08-30 | 中兴通讯股份有限公司 | Caching method and system for replaying live broadcast, and playing method and system
CN108574685A (en) * | 2017-03-14 | 2018-09-25 | 华为技术有限公司 | A streaming media push method, device and system
CN107566509A (en) * | 2017-09-19 | 2018-01-09 | 广州南翼信息科技有限公司 | A kind of information issuing system for carrying high-volume terminal
CN107566509B (en) * | 2017-09-19 | 2020-09-11 | 广州南翼信息科技有限公司 | Information publishing system capable of bearing large-batch terminals

Also Published As

Publication numberPublication date
WO2008119235A1 (en)2008-10-09
CN100579208C (en)2010-01-06

Similar Documents

Publication | Publication Date | Title
CN101026744A (en) | Distributed flow media distribution system, and flow media memory buffer and scheduling distribution method
US7697557B2 (en) | Predictive caching content distribution network
US7058014B2 (en) | Method and apparatus for generating a large payload file
Thouin et al. | Video-on-demand networks: design approaches and future challenges
US20140089594A1 (en) | Data processing method, cache node, collaboration controller, and system
WO2009079948A1 (en) | A content buffering, querying method and point-to-point media transmitting system
CN102546711B (en) | Storage adjustment method, device and system for contents in streaming media system
CN102523285A (en) | Storage caching method of object-based distributed file system
CN104618506A (en) | A crowdsourcing content distribution network system, method and device
CN101594292A (en) | Content delivery method, service redirection method and system, node device
US10326854B2 (en) | Method and apparatus for data caching in a communications network
US7886034B1 (en) | Adaptive liveness management for robust and efficient peer-to-peer storage
CN101800731B (en) | Network transmission management server, network transmission management method and network transmission system
CN101677328A (en) | Content-fragment based multimedia distributing system and content-fragment based multimedia distributing method
US20110209184A1 (en) | Content distribution method, system, device and media server
CN111050188B (en) | Data stream scheduling method, system, device and medium
KR20100073154A (en) | Method for data processing and asymmetric clustered distributed file system using the same
CN101727460A (en) | Method and system for positioning content fragment
US20090100188A1 (en) | Method and system for cluster-wide predictive and selective caching in scalable iptv systems
CN103107944A (en) | Content locating method and route equipment
CN102497389A (en) | Big umbrella caching algorithm-based stream media coordination caching management method and system for IPTV
CN118842936A (en) | Channel distribution scheduling management method and system for IPTV live broadcast service
Zhuo et al. | Efficient cache placement scheme for clustered time-shifted TV servers
CN101540884B (en) | A construction method of peer-to-peer VoD system based on jump graph
Okada et al. | A color-based cooperative caching strategy for time-shifted live video streaming

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
ASS | Succession or assignment of patent right

Owner name:UT SIDAKANG (CHINA) CO. LTD.

Free format text:FORMER OWNER: UT STARCOM COMMUNICATION CO., LTD.

Effective date:20130320

C41 | Transfer of patent application or patent right or utility model
COR | Change of bibliographic data

Free format text:CORRECT: ADDRESS; FROM: 310053 HANGZHOU, ZHEJIANG PROVINCE TO: 100027 DONGCHENG, BEIJING

TR01 | Transfer of patent right

Effective date of registration:20130320

Address after:Beihai Manhattan building 6 No. 100027 Beijing Dongcheng District, Chaoyangmen North Street 11

Patentee after:UTSTARCOM (CHINA) CO.,LTD.

Address before:310053 No. six, No. 368, Binjiang District Road, Zhejiang, Hangzhou

Patentee before:UTSTARCOM TELECOM Co.,Ltd.

CP01 | Change in the name or title of a patent holder

Address after:100027 11 Floor of Beihai Wantai Building, 6 Chaoyangmen North Street, Dongcheng District, Beijing

Patentee after:UT Starcom (China) Co.,Ltd.

Address before:100027 11 Floor of Beihai Wantai Building, 6 Chaoyangmen North Street, Dongcheng District, Beijing

Patentee before:UTSTARCOM (CHINA) CO.,LTD.

TR01 | Transfer of patent right

Effective date of registration:20190128

Address after:518000 Lenovo Building, No. 016, Gaoxin Nantong, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, on the east side of the third floor

Patentee after:UD NETWORK CO.,LTD.

Address before:100027 11 Floor of Beihai Wantai Building, 6 Chaoyangmen North Street, Dongcheng District, Beijing

Patentee before:UT Starcom (China) Co.,Ltd.

TR01 | Transfer of patent right
