Disclosure of Invention
To address the defects of existing streaming media systems during streaming service, the invention provides a method for sharing and distributing streaming media slices in a distributed streaming media system, based on intelligent management and scheduling of the cache (Cache) driven by heat statistics.
The distributed streaming media distribution system is composed of a plurality of areas, one of which serves as the system headquarters. Each area comprises a home media station and at least one edge media station. The home media station stores streaming media content and copies and distributes the stored streaming media content over the network to the edge media stations according to heat statistics. The edge media stations communicate with the home media station through the network and, based on user requests and heat statistics, store slices of the hottest streaming media content in memory caches and on disk so as to provide streaming services.
The area serving as the system headquarters further includes: a media location register for recording the position information of the media content slices of the streaming media distribution system; a media asset manager for managing the media assets of the streaming media distribution system; and a content manager responsible for content management of the streaming media distribution system.
The edge media station comprises: a media director for receiving streaming media service requests transmitted from the outside and determining the positions of the slices of the streaming media service within the edge media station; and at least one media engine for caching the slices of the streaming media service in memory or storing them on disk, providing streaming service with the slice as the streaming service unit, switching the streaming service under the control of the media director, and realizing slice distribution and sharing with the home media station or other edge media stations through the media director.
The media director comprises: a stream service director for receiving streaming media service requests sent from the outside and controlling and switching the stream services of the media engines; a storage manager for managing the positions and information of the media content slices stored on the disks of all media engines within a media station; an intelligent cache manager for managing the positions and information of the media content slices cached in all memories within the media station; and a DHT node manager based on a distributed hash table (DHT), for publishing the slice information of the media content cached in the media station and receiving the slice information published by other media stations.
The media engine comprises: a stream service unit for providing streaming service in units of media content slices and performing streaming service and switching control in cooperation with the stream service director; a memory cache management unit for realizing local content cache management within the media engine and reporting and updating cached media content slice information to the intelligent cache manager and the DHT node manager; and a disk storage unit for storing the slices of the media content and forming cluster storage in the media station under the management of the storage manager.
Memory caching of media content slices is based on heat statistics: slices with relatively high cache heat are cached so as to improve the cache hit rate.
Under the control of the media director, streaming service is switched among the media content slices cached in the memory of each media engine in a media station, so as to achieve slice service sharing.
Information about the media content slices cached in memory is shared between media stations in a DHT management mode, and the cached slices themselves are shared between media stations by copying.
Wherein each of the home media station and the edge media station is an independent cluster streaming server.
The media director is a pair of master-slave dual-backup media directors; there are a plurality of media engines with identical functions, and service load balancing among them is achieved under the control of the media director.
In the memory caching method for streaming media in the distributed streaming media distribution system, the system comprises a media station, and streaming media is cached in memory within the media station based on heat statistics. The following three methods are included.
First, the memory cache unit is a slice of the streaming media content, and heat statistics are based on a mechanism of preset heat and long-window heat statistics. The preset heat and long-window heat statistics of all slices are compared and ranked, and the slices whose long-window heat ranks above a preset threshold are retained in the memory cache.
Secondly, the memory cache unit is a slice of the streaming media content, and heat statistics are based on the heat change frequency. The heat change frequencies of all slices within a certain time are compared and ranked, and the top-ranked slices whose heat change frequency exceeds the preset threshold are retained in the memory cache.
Thirdly, the memory cache unit is the data element, where data elements are the multiple disk operation units into which a slice is divided; caching is based on a least-recently-accessed aging policy over a certain time period combined with an intra-slice association mechanism. Each data element is ranked by its most recent access heat and, combined with its contextual relationship within the slice, predicted for future use; data elements ranked high in heat, and data elements predicted to be used soon because of the intra-slice association, are retained in the memory cache.
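As an illustration only, the following minimal sketch shows how the first two ranking methods could be combined. The names (SliceStats, keep_hot_slices), thresholds, and tiering are assumptions introduced here and are not the actual implementation of the invention.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SliceStats:
    slice_id: str
    preset_heat: float        # heat preset when the program is uploaded
    long_window_heat: float   # heat accumulated over a long statistics window
    heat_change_freq: float   # heat change frequency over a recent window

def keep_hot_slices(stats: List[SliceStats], mem_capacity: int,
                    long_heat_threshold: float, freq_threshold: float) -> List[str]:
    """Return the ids of the slices to retain in the memory cache."""
    def priority(s: SliceStats):
        if s.preset_heat + s.long_window_heat >= long_heat_threshold:
            tier = 2          # method 1: highest Cache priority
        elif s.heat_change_freq >= freq_threshold:
            tier = 1          # method 2: medium Cache priority
        else:
            tier = 0
        return (tier, s.preset_heat + s.long_window_heat, s.heat_change_freq)
    ranked = sorted(stats, key=priority, reverse=True)
    return [s.slice_id for s in ranked[:mem_capacity]]
```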
In the method for scheduling and managing streaming media in a distributed streaming media distribution system according to the present invention, the distributed streaming media distribution system includes a media station, the media station includes a pair of media directors and at least one media engine, and the method includes: (1) a receiving step in which the media director receives, from a user, a streaming media request for streaming media content having a plurality of slices; (2) a querying step of querying whether a slice of the streaming media content is present in a media engine of the media station; (3) a judging step of judging whether that media engine has streaming media service capability when the slice is found to exist in a media engine of the media station; (4) a selecting step of selecting that media engine as the streaming service engine by the media director when the media engine is judged to have streaming service capability; and (5) an executing step of performing streaming service by the selected media engine.
In the executing step, when the streaming service of the slice is close to ending, the selected media engine notifies the media director to execute the querying step, the judging step, the selecting step, and the executing step on the next slice.
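For illustration, the scheduling loop of steps (1) to (5), together with the slice-to-slice switching described in the preceding paragraph, might be sketched as below. The MediaEngine and MediaDirector classes and their methods are hypothetical names introduced here, not the system's actual interfaces.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class MediaEngine:
    name: str
    cached_slices: Set[str]
    max_streams: int
    active_streams: int = 0

    def has_capacity(self) -> bool:
        return self.active_streams < self.max_streams

    def serve_slice(self, slice_id: str) -> None:
        # Placeholder for streaming the slice; when the slice nears its end the
        # engine would notify the media director so the next slice is scheduled.
        print(f"{self.name} streams {slice_id}")

@dataclass
class MediaDirector:
    engines: List[MediaEngine]

    def schedule(self, slice_ids: List[str]) -> None:
        for slice_id in slice_ids:
            # Querying + judging + selecting steps: find an engine that caches
            # the slice and still has streaming capacity.
            engine = next((e for e in self.engines
                           if slice_id in e.cached_slices and e.has_capacity()),
                          None)
            if engine is None:
                raise RuntimeError(f"no engine can serve {slice_id}")
            # Executing step; moving to the next iteration models the switch
            # that happens when the current slice's service is about to end.
            engine.serve_slice(slice_id)
```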
In the method for scheduling and distributing streaming media in a distributed streaming media distribution system of the present invention, the media station comprises a pair of media directors and at least one media engine, and the method comprises: (1) a selecting step of selecting a target media station for the streaming media service of streaming media content having a plurality of slices; (2) a determining step of confirming, by the DHT node in the media director, one or more source media stations where a slice of the streaming media content is located; (3) a requesting step of selecting, from the one or more source media stations and according to the DHT table result and the routing position information, the media station with the shortest distance to the target media station, and sending a copy request to it; (4) a receiving step in which the source media station receives the copy request from the target media station, provided that a media engine of the source media station has streaming media service capability; (5) a copying step of copying the slice to a media engine of the target media station; and (6) an executing step of performing streaming service by the media engine of the target media station.
In the executing step, when the streaming service of that slice is about to end, the media engine of the target media station notifies the media director to execute the determining step, the requesting step, the receiving step, the copying step, and the executing step for the next slice.
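Purely as a sketch, steps (2) to (6) could be expressed as the following function. The dht_lookup and distance callables and the station/engine methods are assumed interfaces that the caller would have to supply; they are not part of the invention's actual API.

```python
def distribute_and_serve(target_station, slice_ids, dht_lookup, distance):
    """Copy each slice from the nearest source station and stream it locally."""
    for slice_id in slice_ids:
        # Determining step: the DHT node returns the source stations holding the slice.
        sources = dht_lookup(slice_id)
        # Requesting step: choose the source nearest to the target station
        # according to the DHT table result and routing position information.
        source = min(sources, key=lambda s: distance(s, target_station))
        # Receiving step: the source accepts the copy request if an engine there
        # has service capability (modeled here by the returned slice data).
        data = source.handle_copy_request(slice_id)
        # Copying step: store the slice in an engine of the target media station.
        engine = target_station.pick_engine_with_cache_space()
        engine.store_slice(slice_id, data)
        # Executing step: the target station's engine streams the slice; near the
        # end of the slice the director repeats these steps for the next slice.
        engine.serve_slice(slice_id)
```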
The distributed streaming media system and the method for caching, scheduling and distributing media content can greatly improve the hit rate of memory-cached streaming media file slices and effectively reduce the frequency of disk I/O accesses, thereby prolonging the service life of the disks and ensuring the reliability and stability of the system.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a distributed streaming media distribution system based on a network topology architecture, which can implement multi-level streaming media distribution and service.
As shown in fig. 1, the distributed streaming media distribution system is composed of a plurality of areas (i.e., area 1 to area n), one of which can serve as the system headquarters; here, area 1 serves as the system headquarters.
Each area includes a home media station 20 that communicates over the network with a plurality of edge media stations 30a1~30an. The home media station 20 and the edge media stations 30a1~30an each comprise a pair of media directors (MD) A and a plurality of media engines (ME) Ba1~Ban, e.g. media engine Ba1, media engine Ba2, up to media engine Ban. Here, each edge media station 30a1~30an is an independent cluster streaming server.
Here, area 1 is the system headquarters, and compared with the other areas that are not the system headquarters, area 1 additionally includes: a media location register (MLR) 11 for recording the location information of the media content slices of the streaming media distribution system; a media asset manager (MAM) 12 for managing the media assets of the streaming media distribution system; and a content manager (CM) 13 responsible for content management of the streaming media distribution system.
The media director A receives streaming media service requests transmitted from the outside and, through the intelligent cache manager (explained in detail hereinafter), queries whether the slices of the streaming media file are present in the edge media stations 30a1~30an and, if so, determines their locations. The media engines Ba1~Ban cache and store the slices of the streaming media service and realize slice switching and control according to each media engine's streaming media service capability.
Here, the media director A may preferably be a pair of master-slave dual-backup media directors. Thus, when the master media director fails, the slave media director can seamlessly take over the services and the users, guaranteeing the reliability of the system. The media engines Ba1~Ban form a load-balanced group, i.e. they are media engines with identical functions; under the scheduling of the media director A, the load on each media engine Ba1~Ban is kept relatively even, so that no media engine is overloaded while others sit idle, thereby achieving the balancing effect.
FIG. 2 is a schematic diagram of the control flow of the present invention using the intelligent Cache manager and the stream service switching in the media station.
As shown in FIG. 2, each edge media station 30a1~30an uses its plurality of media engines Ba1~Ban to cache, based on heat statistics, multiple slices of hot streaming media files, providing capacity for up to tens of thousands of streaming services for its access area.
More specifically, the media director A includes: a streaming service director 43 for receiving streaming service requests transmitted from the outside and controlling and switching the stream services of the media engines; a storage manager 44 that manages the positions and information of the media content slices stored on disk in all media engines within a media station; an intelligent cache manager 42 for managing the positions and information of the media content slices cached in all memories within the media station; and a DHT node manager 41 based on a distributed hash table (DHT), for publishing information about the memory-cached media content slices in the media station and receiving the slice information published by other media stations.
Each of the media engines Ba1~Ban contains a disk storage area (denoted Disk in fig. 2) Da1~Dam, a Cache unit (denoted Cache in fig. 2) Ca1~Cam, and a stream service unit La1~Lam. The Cache units Ca1~Cam implement local content cache management within one media engine and report and update the cached media content slice information to the intelligent cache manager 42 and the DHT node manager 41. The stream service units La1~Lam provide streaming service in units of media content slices and perform streaming service and switching control in cooperation with the stream service director. The number of media engines Ba1~Ban can be expanded to more than 100 according to the user scale of the area, giving high scalability.
It is noted that within the edge media stations 30a1~30an, the intelligent Cache manager 42 performs Cache management and, based on heat statistics, schedules the Cache units Ca1~Cam in each media engine to cache content. Combined with the stream service switching and control mechanism among the media engines Ba1~Ban and with distributed storage management, this realizes the cluster stream service within the edge media stations 30a1~30an.
The media director A and the intelligent Cache manager 42 are both implemented as software modules, as are the media engines Ba1~Ban.
The DHT (distributed hash table) is a common data indexing method that distributes the hash table to different places; that is, each DHT node holds a DHT table and receives information published by adjacent DHT nodes, so that the nodes form a network for sharing information.
On the other hand, Cache management and scheduling based on heat statistics include the following three modes. (1) Caching the most popular streaming media files based on preset heat and long-window heat statistics. Based on the heat preset when the program is uploaded and on the long-window heat statistics, the most popular streaming media files are cached, in slices, in the multiple media engines of a media station; this mode has the highest Cache priority of the three, and the streaming media files concerned are mainly the latest popular movies and streaming programs with good audience ratings and long-lasting popularity. (2) Replacing the Cache with the latest and hottest streaming media file slices based on heat change frequency. In this mode, slices of the most popular streaming media files are quickly captured according to changes in heat frequency; it has the medium Cache priority of the three, and the streaming media files concerned are mainly live event broadcasts such as races, or hot related VOD programs triggered by breaking incidents. (3) Cache management and scheduling when neither the heat statistics nor the heat frequency of a streaming media file reaches modes (1) and (2). This mode adopts a least-recently-accessed aging policy in units of data elements (a slice of a streaming media file is divided into a plurality of disk operation units, called data elements) together with an intra-slice association mechanism, so as to keep cached, as far as possible, the data elements with intra-slice association and high recent access frequency, and to preferentially evict the data elements without association and with low recent access frequency.
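A minimal sketch of mode (3) follows, assuming a 300-second recency window and hypothetical names (Element, pick_victims); the actual aging policy and the association prediction of the invention may differ from this illustration.

```python
import time
from dataclasses import dataclass
from typing import List

@dataclass
class Element:
    slice_id: str
    index: int                # position of the data element within its slice
    last_access: float = 0.0
    access_count: int = 0

    def touch(self) -> None:
        self.last_access = time.time()
        self.access_count += 1

def pick_victims(elements: List[Element], n_to_evict: int,
                 window: float = 300.0) -> List[Element]:
    """Choose data elements to evict from the memory cache."""
    now = time.time()
    recently_hot = {(e.slice_id, e.index) for e in elements
                    if now - e.last_access <= window}

    def protected(e: Element) -> bool:
        # Keep an element if it was accessed recently, or if the element just
        # before it in the same slice was (sequential playback makes it likely
        # to be needed soon -- the intra-slice association).
        return ((e.slice_id, e.index) in recently_hot or
                (e.slice_id, e.index - 1) in recently_hot)

    candidates = [e for e in elements if not protected(e)]
    # Preferentially evict least-recently-accessed, least-used elements first.
    candidates.sort(key=lambda e: (e.last_access, e.access_count))
    return candidates[:n_to_evict]
```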
Referring to fig. 2, suppose a user currently requests a streaming service for a specified streaming media file, the file is divided into m slices, and the m slices are distributed, under Cache allocation and management, over the media engines Ba1~Ban of the media station. The working principle of using the intelligent Cache manager 42 in the media station and of stream service switching and control is now described in detail for this specific case. First, the media director A of the media station receives the streaming service request from the user and queries the intelligent Cache manager 42 as to whether slice 1 of the streaming media file is in one of the media engines Ba1~Ban of the media station. Second, the intelligent Cache manager 42 finds that slice 1 is in media engine Ba1, and since media engine Ba1 has streaming service capability, the media director A selects media engine Ba1 to execute streaming service 1. Third, when the streaming service of slice 1 is nearing its end, media engine Ba1 feeds back to the media director A that the streaming service of slice 1 is about to end; the media director A finds, through the intelligent Cache manager 42, that the next slice (i.e. slice 2) is in media engine Ba2 and that media engine Ba2 has streaming service capability, and returns this information to media engine Ba1. Finally, media engine Ba1 switches to media engine Ba2 immediately upon ending the streaming service of slice 1, and media engine Ba2 provides streaming service 2 to the user through the same virtual IP address and port as media engine Ba1; the subsequent slice streaming services (up to streaming service m) are switched in sequence in the same manner. Throughout the streaming of a media file with m slices (streaming services 1 to m), the user's set-top box does not need to participate.
Fig. 3 shows a schematic diagram of intelligent Cache management based on DHT node topology in the distributed streaming media distribution system of the present invention.
As shown in fig. 3, in the distributed streaming media distribution system, the home media station a, the home media station b, and the home media station c can communicate with one another directly. More specifically, home media station a comprises edge media station 1, edge media station 2, and edge media station 3; home media station b comprises edge media station 4, edge media station 5, and edge media station 6; and home media station c comprises edge media station 7, edge media station 8, and edge media station 9.
As can be seen from the schematic diagram of fig. 2 of a media station using the intelligent Cache manager 42 and stream service switching control, the information contained in the DHT node of home media station a is shared with the DHT nodes of edge media stations 1, 2, and 3; the information contained in the DHT node of home media station b is shared with the DHT nodes of edge media stations 4, 5, and 6; and the information contained in the DHT node of home media station c is shared with the DHT nodes of edge media stations 7, 8, and 9.
Fig. 3 will now be described in detail through the DHT-node-based process of adding and deleting a slice of a streaming media file. When the DHT node in edge media station 1 needs to publish information about a Cache addition or deletion, it publishes that information only to the home media station a to which it belongs. Home media station a then publishes the Cache addition or deletion information to edge media stations 2 and 3 (excluding edge media station 1, the publishing source), and at the same time publishes it to the home media stations b and c that are directly connected to home media station a. It should be noted that, after receiving the information published by home media station a, edge media stations 2 and 3 stop forwarding, while home media stations b and c continue to publish it to their respective subordinate edge media stations; more specifically, home media station b publishes to edge media stations 4, 5, and 6, and home media station c publishes to edge media stations 7, 8, and 9. To avoid repeated distribution in a loop, home media stations b and c do not forward to each other the information published by home media station a; that is, if the publishing source is neither one of its own subordinate edge media stations nor itself, a home media station does not forward the information to other home media stations. Therefore, the sharing of Cache content information among media stations shown in fig. 3 has high flexibility and expandability, requires no searching when the shared information is used, and avoids the bottleneck problem of traditional centralized management.
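For illustration only, the publication rule above could be sketched as the following function; the dictionaries and the publish name are assumptions introduced here, and the real DHT nodes act in a distributed fashion rather than by a central computation.

```python
def publish(origin, home_of, edges_of, home_neighbors):
    """Return the stations that receive a Cache add/delete update from `origin`.

    home_of:        edge station -> its home station
    edges_of:       home station -> list of its edge stations
    home_neighbors: home station -> directly connected home stations
    """
    home = home_of.get(origin, origin)        # the origin may itself be a home station
    delivered = []
    # The home station publishes to its edge stations, excluding the source.
    delivered += [e for e in edges_of[home] if e != origin]
    # It also publishes to directly connected home stations ...
    for peer in home_neighbors[home]:
        delivered.append(peer)
        # ... which forward only to their own subordinate edge stations and
        # never to other home stations, so loops and duplicates are avoided.
        delivered += edges_of[peer]
    return delivered

# Example with the topology of fig. 3:
edges_of = {"a": ["edge1", "edge2", "edge3"],
            "b": ["edge4", "edge5", "edge6"],
            "c": ["edge7", "edge8", "edge9"]}
home_of = {e: h for h, es in edges_of.items() for e in es}
home_neighbors = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(publish("edge1", home_of, edges_of, home_neighbors))
# -> ['edge2', 'edge3', 'b', 'edge4', 'edge5', 'edge6', 'c', 'edge7', 'edge8', 'edge9']
```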
Fig. 4 shows a schematic diagram of slice sharing and copying between media stations in the distributed streaming media distribution system of the present invention.
Referring to fig. 4, home media station a corresponds to edge media station a1, edge media station a2, and edge media station b1. Suppose the streaming media file for which the user requests streaming service consists of slice 1, slice 2 and slice m; slice 1 is cached in a media engine of edge media station a1, slice 2 is cached in a media engine of edge media station b1, and slice m is cached in a media engine of home media station a. When the streaming service is to be provided at edge media station a2, the following process is performed:
(1) the DHT node in the media director of edge media station a2 confirms the media engine where slice 1 of the streaming media file is located;
(2) according to the DHT table result and the routing position information, edge media station a1, which has the shortest distance to edge media station a2, is selected and sent a request to copy slice 1;
(3) in the case that the media engine of edge media station a1 has copy service capability, it receives the copy request from edge media station a2;
(4) slice 1 is copied into the media engine of edge media station a2;
(5) slice 1 is streamed by the media engine of edge media station a2;
(6) the above steps (1) to (5) are performed for slice 2 and slice m in turn.
It should be noted that although slice m is cached in a media engine of home media station a rather than edge media station a1 or b1, the process of copying it is exactly the same as for slice 1.
Fig. 5 is a schematic flow chart illustrating that the media station of the present invention performs streaming service by using a Cache integrated management and scheduling method.
As shown in fig. 5, the edge media station combines the intra-station and inter-station Cache management and scheduling methods to provide streaming service for streaming media files. The specific implementation flow can be embodied by the following steps:
(1) a media director of the edge media station receives a streaming media request of a user (step S500);
(2) the intelligent cache manager searches for the next slice of the streaming media file (step S502), the intelligent cache manager being within the media director of the edge media station;
when a slice is present within the edge media station,
(a) judging and determining whether the slice exists (step S504);
(b) if the slice is in a certain media engine of the edge media station, determining whether the media engine has the capability of performing streaming service (step S506);
(c) if the media engine has the capability of streaming service, designating the media engine caching the slice to perform streaming service (step S508);
(d) when the streaming service for the slice is about to end, returning to the intelligent cache manager (step S510) and executing step (2);
(e) if the media engine in step (b) does not have the capability of performing the streaming service, selecting a media engine that has streaming service capability and cache space, and copying the slice to it (step S512);
(f) designating the selected media engine for streaming service (step S514); and
(g) when the streaming service is about to end, returning to the intelligent cache manager (step S516) and executing step (2).
When the slice is not within the edge media station,
(i) querying, based on the DHT node, the home media station and all edge media stations under that home media station (step S518);
(ii) judging and determining whether the slice exists (step S520);
(iii) if the slice exists in neither the home media station nor the edge media stations, reading the slice from the storage system into a selected media engine, which caches the slice and performs the streaming service (step S526), and then returning to step (2);
(iv) if the slice exists in the home media station or the edge media station, a copy request is sent to the media station which caches the slice (step S522);
(v) judging whether service permission from the peer media station is obtained before timeout (step S524);
(vi) if the service permission is obtained, executing the steps (e) - (g); and
(vii) if the service permission is not obtained, step (iii) is performed.
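Purely as an illustrative condensation of this flow (the station, engine, and DHT interfaces below are assumed names, and the timeout value is arbitrary), the steps might be expressed as:

```python
def serve_request(edge_station, slice_ids):
    """Condensed sketch of the flow of fig. 5 for one streaming media file."""
    for slice_id in slice_ids:                                  # S500/S502: next slice
        engine = edge_station.engine_caching(slice_id)          # S504: slice in station?
        if engine is not None:
            if not engine.can_stream():                         # S506: capability check
                target = edge_station.engine_with_capacity_and_space()   # S512
                target.copy_slice_from(engine, slice_id)
                engine = target
            engine.stream(slice_id)                             # S508 / S514
            continue                                            # S510 / S516: back to (2)
        # Slice not in this edge station: query the home media station and its
        # edge stations through the DHT node.                   # S518, S520
        remote = edge_station.dht_query(slice_id)
        if remote is not None and remote.grant_copy(slice_id, timeout=5.0):  # S522, S524
            target = edge_station.engine_with_capacity_and_space()
            target.copy_slice_from_station(remote, slice_id)
            target.stream(slice_id)
        else:
            # Fallback: read the slice from the storage system into a selected
            # engine, which caches it and streams it.           # S526
            target = edge_station.engine_with_capacity_and_space()
            target.load_from_storage(slice_id)
            target.stream(slice_id)
```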
As described above, sharing Cache content information among media stations avoids the inefficiency of traditional centralized information searching, and Cache content sharing is realized through network copying, thereby reducing disk I/O accesses and prolonging the service life of the hard disks.
Hereinbefore, specific embodiments of the present invention are described with reference to the drawings. However, those skilled in the art will appreciate that various modifications and substitutions can be made to the specific embodiments of the present invention without departing from the spirit and scope of the invention. Such modifications and substitutions are intended to be included within the scope of the present invention as defined by the appended claims.