Technical Field
The invention belongs to the technical field of communications and relates to a content-centric network caching method based on complex network metrics.
Background Art
The in-network caching mechanism is one of the core technologies of Content-Centric Networking (CCN). By caching part of the content at nodes inside the network, a content request can be served from the nearest cached copy instead of first locating the origin host through addressing and then fetching the content from it. This effectively reduces content-retrieval delay and the volume of duplicate traffic carrying the same content, thereby improving network performance.
CCN caching is transparent to applications and ubiquitous. In the traditional caching scheme, when content is returned from the provider, every node on the path caches it. This "universal" caching strategy creates redundant data among caching nodes and reduces the diversity of cached content, so the utilization of cache resources drops. Research on CCN caching is devoted to proposing new technical solutions and caching strategies that improve the overall performance of the caching system. To address the resource waste caused by CCN's cache-everywhere mechanism, scholars at home and abroad have carried out extensive research. Current caching strategies mainly cover two aspects: cache sharing and cache decision.
Cache sharing: Different types of traffic and applications have different characteristics, and how to provide differentiated caching services for different traffic is an urgent problem. Cache sharing is one of the most important techniques for realizing differentiated caching services. Current cache-sharing techniques fall into two categories: sharing based on fixed partitioning and sharing based on dynamic partitioning. Fixed partitioning divides the cache space into fixed parts so that each class of application can use cache space that will not be occupied by other traffic. This scheme has two problems. First, when some type of traffic has not arrived while other traffic is heavy, cache misses and wasted resources may occur. Second, it is difficult to guarantee a different cache quality of service for each type of traffic. Dynamically partitioned cache sharing lets a traffic type use unoccupied cache space. It includes two different strategies: priority-based sharing and weight-balanced sharing. Priority-based sharing gives certain applications higher priority than others and makes room for high-priority content by evicting low-priority content; its problem is that, when data arrives at high speed, repeatedly comparing priorities seriously degrades performance. Weight-balanced sharing presets weights while still allowing unused space to be used; the difficulty lies in how to optimize the weights.
Cache decision: The cache-decision mechanism determines which content is stored on which node, and it falls into two categories: non-cooperative and cooperative cache decision. Non-cooperative cache decisions do not require prior knowledge of the state of other caching nodes in the network. Typical non-cooperative strategies include LCE (Leave Copy Everywhere), LCD (Leave Copy Down), MCD (Move Copy Down), Prob (Copy with Probability) and ProbCache (Probabilistic Cache). LCE is the default cache-decision strategy in CCN; it requires every routing node on the return path of a Data packet to cache the content object, which leads to a large amount of cache redundancy in the network and reduces the diversity of cached content. LCD caches the content object only at the next-hop node below the node where it currently resides, so the object reaches the network edge only after being requested many times, and considerable cache redundancy is still generated along the path. MCD moves the cached content one hop downstream from the hit node on a cache hit (except at the origin server), which reduces cache redundancy on the path from the requester to the content server; but when requesters come from different edge networks, the content's caching point oscillates, and this dynamism creates extra network overhead. Prob requires every routing node on the return path to cache the object with a fixed probability P, whose value can be adjusted according to the caching situation. In ProbCache, the requested object is stored at each node with a probability that differs per node and is inversely proportional to the distance from the requesting node: the closer the node, the higher the caching probability, and vice versa (see the sketch after this paragraph). This strategy quickly pushes copies toward the network edge while reducing the number of copies. In cooperative cache decision, the network topology and node states are known in advance, and this information is used to compute the final cache locations. According to the range of nodes involved in the decision, it can be divided into global coordination, path coordination and neighborhood coordination. Global coordination means that all cache nodes in the network are considered, so the topology of the entire network must be known in advance. Path coordination means that the coordination involves only the cache nodes along the path from the requester to the server. Neighborhood coordination means that the coordination happens only among a node's adjacent nodes. In-network coordination, a hash-function-based method, also belongs to neighborhood coordination: it uses a hash function to decide which neighbors cache a given file block.
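As an illustration of the distance-dependent idea behind Prob and ProbCache, the sketch below caches a returning object with a probability that decreases with the node's distance from the requester; the linear weighting is an assumption for illustration only and is not the exact ProbCache formula.

```python
import random

def should_cache(hops_from_requester, path_length):
    """Cache probability decreases with distance from the requester (illustrative)."""
    p = 1.0 - hops_from_requester / max(path_length, 1)
    return random.random() < p

# Example: on a 5-hop return path, the node adjacent to the requester
# (1 hop away) caches with probability 0.8, the node 4 hops away with 0.2.
```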
In summary, current content-centric network caching strategies still have the following problems. Homogeneous caching: in non-cooperative cache decision, each node caches and replaces content independently, so the nodes end up caching the same content; the spatial distribution of content is either too concentrated or too scattered, forcing requesters to fetch content from overly concentrated or dispersed nodes and producing unreasonable traffic; the temporal distribution is also unreasonable, because during a popular period every node caches the same content, yet once that period passes the content disappears from all nodes almost simultaneously. Low cache hit rate: in non-cooperative cache decision the nodes do not know each other's cached content, while in cooperative caching, even though the nodes know what each other has cached, there is no time commitment and each node replaces content independently, so content may be replaced at any moment. This makes the effect of caching somewhat random and accidental, and the forwarding efficiency of Interest packets is low.
Summary of the Invention
In view of this, the object of the present invention is to provide a content-centric network caching method based on complex network metrics that solves the problems of homogeneous caching and low cache hit rate in the existing caching mechanisms of content-centric networks. On the controller side, the method uses a caching algorithm based on complex network metrics to compute several preferred cache locations, and issues caching commands to the cache nodes through the OpenFlow channel.
To achieve the above object, the present invention provides the following technical solution:
A content-centric network caching method based on complex network metrics. The method uses the controller of a software-defined network to count the number of requests for a given content at each content switch and sets a threshold at the controller; all switches whose request count for the content exceeds the threshold are taken as candidate cache points. On the controller side, a caching strategy based on complex network metrics selects several preferred cache nodes from the candidate cache points, and the controller issues instructions to actively cache the content to the selected cache nodes through the OpenFlow channel. When the content is returned from the content-serving node to the requester, the caching instructions issued by the controller are executed and the content is cached at the cache nodes selected by the controller.
Further, selecting several preferred cache nodes from the candidate cache points on the controller side with a caching strategy based on complex network metrics specifically includes the following. In a complex network, three basic metrics are used to measure the importance of a node: degree centrality, closeness centrality and betweenness centrality. These three metrics examine different aspects of importance:
Degree centrality: the simplest way to define centrality is to count the number of edges directly connected to a node, that is, its degree. A node with a high degree has many connections to other nodes. In an undirected graph, degree centrality is defined as:
$$C_D(v) = \deg(v)$$
Closeness centrality: closeness centrality is defined by the shortest distances from a node to the other nodes in the network. A higher closeness centrality means that the node is closer to many nodes in the network and therefore closer to the center of the network. Its formula is as follows:
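A standard form of closeness centrality consistent with this description (the exact normalization is assumed here, since only the normalized value NC is used later) is:

$$C_C(v) = \frac{1}{\sum_{u \neq v} d(v, u)}$$

where d(v, u) denotes the length of the shortest path between v and u.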
Betweenness centrality: betweenness centrality is determined by how often a node appears on the shortest paths between other nodes. It measures the importance of a node for the propagation of information through the entire network; nodes with high betweenness centrality are key nodes of the network. It is defined as:
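A standard form of betweenness centrality consistent with this description is:

$$C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$$

where σ_st is the number of shortest paths between s and t, and σ_st(v) is the number of those paths that pass through v.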
Further, the method specifically comprises the following steps:
S1: each switch in the network reports the request-count field for a given content A to the controller;
S2: the controller identifies the switches whose request count for content A exceeds a preset threshold T as candidate cache points and takes them as sample points;
S3: using these sample switches as nodes, an undirected graph is constructed according to the actual switch connections;
S4: the three metrics of the undirected graph are computed: degree centrality, closeness centrality and betweenness centrality, together with their normalized values ND (Normalized Degree), NC (Normalized Closeness) and NB (Normalized Betweenness);
S5: according to the requirements of different services, the weights of the three metrics are set to α, β and γ respectively, and a total score is computed as S = α*ND + β*NC + γ*NB;
S6: the total scores are sorted and, according to the requirements of different services, switches are selected in descending order of total score as cache nodes; the controller sends them active caching commands through the OpenFlow channel (a sketch of steps S3–S6 follows this list).
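The following is a minimal sketch of steps S3–S6, not the claimed implementation: it builds the undirected graph of candidate switches, computes the three centralities with the networkx library (an assumption; no library is named in the text), normalizes each metric by its per-metric maximum (also an assumption, since only the names ND, NC and NB are given), and ranks the switches by the weighted score S.

```python
import networkx as nx

def rank_cache_candidates(edges, alpha, beta, gamma, k):
    g = nx.Graph(edges)                    # S3: undirected graph of the candidate switches
    deg = nx.degree_centrality(g)          # S4: degree centrality
    clo = nx.closeness_centrality(g)       #     closeness centrality
    bet = nx.betweenness_centrality(g)     #     betweenness centrality

    def normalize(metric):                 # assumed normalization: divide by the maximum value
        m = max(metric.values()) or 1.0
        return {v: x / m for v, x in metric.items()}

    nd, nc, nb = normalize(deg), normalize(clo), normalize(bet)
    # S5: total score S = alpha*ND + beta*NC + gamma*NB
    score = {v: alpha * nd[v] + beta * nc[v] + gamma * nb[v] for v in g}
    # S6: sort by total score and return the k best switches as cache nodes
    return sorted(score, key=score.get, reverse=True)[:k]

# Example: pick 2 cache nodes from a small hypothetical candidate topology,
# weighting closeness centrality most heavily.
best = rank_cache_candidates([(1, 4), (4, 5), (5, 6), (6, 10), (4, 6)],
                             alpha=0.25, beta=0.5, gamma=0.25, k=2)
```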
The beneficial effect of the present invention is that the method can effectively solve the problems of homogeneous caching and low cache hit rate in the existing caching mechanisms of content-centric networks.
Brief Description of the Drawings
To make the purpose, technical solution and beneficial effects of the present invention clearer, the present invention provides the following drawings:
Fig. 1 is a topology diagram of the NSFNET nodes;
Fig. 2 is a flow chart of an implementation of an embodiment of the present invention.
Detailed Description of the Embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The purpose of the present invention is to solve the problems of homogeneous caching and low cache hit rate in the existing caching mechanisms of content-centric networks. In a complex network, three basic metrics are used to measure the importance of a node: degree centrality, closeness centrality and betweenness centrality. These three metrics examine different aspects of importance.
Degree centrality: the simplest way to define centrality is to count the number of edges directly connected to a node, that is, its degree. A node with a high degree has many connections to other nodes. In an undirected graph, degree centrality is defined as:
$$C_D(v) = \deg(v)$$
Closeness centrality: closeness centrality is defined by the shortest distances from a node to the other nodes in the network. A higher closeness centrality means that the node is closer to many nodes in the network and therefore closer to the center of the network. Its formula is as follows:
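Written in its standard form (the normalization is assumed, as above):

$$C_C(v) = \frac{1}{\sum_{u \neq v} d(v, u)}$$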
Betweenness centrality: betweenness centrality is determined by how often a node appears on the shortest paths between other nodes. It measures the importance of a node for the propagation of information through the entire network; nodes with high betweenness centrality are usually key nodes of the network. It is defined as:
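The corresponding standard betweenness-centrality definition is:

$$C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$$

with σ_st the number of shortest paths between s and t, and σ_st(v) the number of those paths passing through v.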
In the content-centric network caching strategy based on complex network metrics, the global network information held by the controller and the centralized control of the switches are the key factors that make the strategy feasible. By identifying content request patterns and dividing them into clusters, the strategy can effectively obtain preferred cache locations.
The method comprises the following steps. S1: each switch in the network reports the request-count field for a given content A to the controller. S2: the controller identifies the switches whose request count for content A exceeds a preset threshold T as candidate cache points and takes them as sample points. S3: using these sample switches as nodes, an undirected graph is constructed according to the actual switch connections. S4: the three metrics of the undirected graph are computed: degree centrality, closeness centrality and betweenness centrality, together with their normalized values ND (Normalized Degree), NC (Normalized Closeness) and NB (Normalized Betweenness). S5: according to the requirements of different services, the weights of the three metrics are set to α, β and γ respectively, and a total score is computed as S = α*ND + β*NC + γ*NB. S6: the total scores are sorted and, according to the requirements of different services, switches are selected in descending order of total score as cache nodes; the controller sends them active caching commands through the OpenFlow channel.
Fig. 2 is a flow chart of an implementation of an embodiment of the present invention. In the following embodiment, an out-of-band link is used for transmitting control information; it may be an Ethernet link or an IP link channel.
As shown in Fig. 2, an embodiment in which the content-centric network caching strategy based on complex network metrics of the present invention obtains the cache locations comprises the following steps:
Step 1: every switch in the network uploads its request count for content A to the controller through the control channel.
Step 2: the controller collects the count values uploaded by all switches and, according to the preset threshold T, selects the switches whose count exceeds T as candidate cache points and sample points. Based on the connections of these sample switches in the network topology, an undirected graph is constructed, and the degree centrality, closeness centrality and betweenness of each vertex of the graph are computed, together with their normalized values ND, NC and NB. According to the requirements of different services, the weights of the three metrics are set to α, β and γ, a total score is computed by the formula S = α*ND + β*NC + γ*NB, and the total scores are sorted.
Step 3: according to the requirements of different services, switches are selected in descending order of total score as the cache locations for content A. Here the selected cache nodes are switch 4 and switch 6, and the controller sends them instructions to actively cache content A through the OpenFlow channel (2.1 and 2.2).
Step 4: the requester sends an Interest for content A (2.3). After receiving the Interest, a switch first checks whether the content is present in its CS; if so, the data is returned. If not, it checks whether the PIT contains an entry for this Interest; if it does, the Interest's input port is added to the corresponding PIT entry, and if not, a new entry recording the current input port is created. After the PIT lookup, the switch searches the FIB; if a match exists, the Interest is forwarded to the corresponding port, otherwise it is forwarded to all ports except the input port (a simplified sketch of this lookup order follows). Here the Interest is matched at switches 1, 4, 5, 6 and 10 and is finally forwarded to the provider (2.4, 2.5, 2.6, 2.7 and 2.8).
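The sketch below illustrates the CS → PIT → FIB lookup order described in step 4. The Switch class, its fields and the returned action tuples are illustrative assumptions, not the actual forwarding plane of the patent.

```python
class Switch:
    def __init__(self, fib=None, ports=None):
        self.cs = {}             # Content Store: name -> cached data
        self.pit = {}            # Pending Interest Table: name -> set of input ports
        self.fib = fib or {}     # Forwarding Information Base: name -> output port
        self.ports = ports or set()

    def on_interest(self, name, in_port):
        """Return (action, ports) describing how this Interest is handled."""
        if name in self.cs:                      # CS hit: return the cached data
            return ("data", {in_port})
        if name in self.pit:                     # existing PIT entry: aggregate the request
            self.pit[name].add(in_port)
            return ("wait", set())
        self.pit[name] = {in_port}               # create a new PIT entry
        if name in self.fib:                     # FIB hit: forward to the matching port
            return ("forward", {self.fib[name]})
        return ("forward", self.ports - {in_port})  # otherwise forward to all other ports
```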
Step 5: after receiving the Interest, the provider sends out content A (2.9). Content A is returned hop by hop to the requester according to the PITs of switches 10, 6, 5, 4 and 1 (2.10, 2.11, 2.12, 2.13 and 2.14). During the return, switch 4 and switch 6 also execute the pending active-caching commands and cache content A in their CSes.
Fig. 1 shows the NSFNET node topology, which is taken as the object of analysis. Assume that every node in the figure is a switch whose request count for content A exceeds the threshold; the cache should then be placed on one of these 14 nodes. The table below lists the three centralities of each node and their respective normalized values ND, NC and NB obtained through complex network analysis. Assume that service A requires the cache location to be as few hops from the requests as possible, i.e., it places a higher requirement on closeness centrality. The degree weight is therefore set to α = 0.25, the closeness-centrality weight to β = 0.5 and the betweenness-centrality weight to γ = 0.25. On this basis the total score S is computed; the results are as follows:
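As a worked illustration of the scoring with these weights (the ND, NC and NB values here are hypothetical examples, not the NSFNET values from the table), a node with ND = 0.8, NC = 1.0 and NB = 0.6 would score:

$$S = 0.25 \times 0.8 + 0.5 \times 1.0 + 0.25 \times 0.6 = 0.85$$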
It can be seen that, according to the requirements of this service and the composition of the network, the controller can select switches as cache nodes in descending order of their final scores and send them instructions to actively cache the content through the OpenFlow channel.
Finally, it should be noted that the above preferred embodiments are intended only to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail through the above preferred embodiments, those skilled in the art should understand that various changes in form and detail can be made without departing from the scope defined by the claims of the present invention.