Technical Field

The present invention belongs to the technical field of mobile communications, and in particular relates to a performance optimization scheme for parallel-transmission coded caching.
Background

In the coded caching problem, which was proposed to relieve network transmission pressure as far as possible, most works reduce the communication delay by creating multicast opportunities in the placement (pre-caching) phase and the delivery phase. Most current research concerns server-multicast-based models or coded caching in other network structures; within such network models, factors such as how the users are organized and the sizes of the users' cache memories can be varied. To evaluate the quality of a coded caching communication system, most works use the communication delay as the evaluation metric.
Summary of the Invention

The object of the present invention is to provide a performance optimization scheme for parallel-transmission coded caching. It considers two different delivery modes, one based on server multicast and one based on a D2D network, and proposes a performance optimization algorithm for the parallel-transmission coded caching problem that uses the transmission time as its evaluation metric, finally obtaining a parallel-transmission coded caching scheme for a composite network model, thereby solving the technical problem of coded caching in such a model.

To solve the above technical problem, the specific technical solution of the present invention is as follows:

A performance optimization scheme for parallel-transmission coded caching, comprising the following steps:
Step 1. For the coded caching problem with a given number of users K, number of database files N, server broadcast channel capacity Cs, D2D network channel capacity Cu, and per-user cache memories of different sizes, use the total transmission time DT as the evaluation metric of a coded caching scheme. Define the file allocation parameter l1 ∈ [0, 1], which describes how each file is divided between the server-broadcast coded caching mode and the D2D coded caching mode; if l1 = 0, all files to be delivered are transmitted entirely under the server-broadcast coded caching mode. Define the user cache allocation parameter ms as the portion of each user's cache devoted to the server-broadcast coded caching mode, which thereby determines how each user's cache space is divided between the server-broadcast coded caching mode and the D2D coded caching mode.
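For concreteness, the given quantities of step 1 and the feasibility box of the two parameters can be collected in a small container. The sketch below is illustrative only; the names are not part of the invention.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ProblemInstance:
    """Given data of step 1 (names are illustrative)."""
    K: int            # number of users
    N: int            # number of database files
    F: int            # bits per file
    Cs: float         # server broadcast channel capacity
    Cu: float         # D2D network channel capacity
    m: List[float]    # normalized cache size of each user, m1 <= ... <= mK

def feasible(inst: ProblemInstance, l1: float, ms: float) -> bool:
    """l1 must lie in [0, 1]; ms may not exceed the smallest cache m1."""
    return 0.0 <= l1 <= 1.0 and 0.0 <= ms <= min(inst.m)
```

The upper bound min(inst.m) on ms reflects the constraint ms ∈ [0, m1] stated further below.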
Step 2. Based on the D2D coded caching mode and the server-broadcast coded caching mode, and using the defined file allocation parameter l1 and user cache allocation parameter ms, construct the parallel-transmission coded caching scheme, and model it as an optimization problem in which the transmission time depends on the file allocation parameter l1 and the user cache allocation parameter ms. Solving this optimization problem yields the optimal transmission time, together with the corresponding optimal file allocation scheme and optimal user cache allocation scheme.
Step 3. Solve the modeled optimization problem with an algorithm based on particle swarm optimization and linear programming. In each iteration of the particle swarm algorithm, the current user cache allocation parameter ms is monitored: if the user caches allocated to the D2D delivery mode satisfy the lower-bound achievability condition, the optimal transmission time of the D2D coded caching part is obtained directly; otherwise it is obtained by linear programming. After multiple iterations of the particle swarm algorithm, the total transmission time DT is optimized; DT is the objective function when the problem is modeled as an optimization problem, and serves as the fitness within the optimization algorithm.
Further, the optimization problem obtained from the modeling in step 2, involving the file allocation parameter l1 and the user cache allocation parameter ms, is described as follows:

The parallel-transmission coded caching scheme works as follows: the server transmits information to the users while the users simultaneously transmit information to one another; the two parts are called the server-broadcast coded caching mode and the D2D coded caching mode, respectively. Each file on the server is split into two parts in a fixed ratio, and the file allocation parameter l1 denotes the fraction of each file whose delivery is handled by the D2D coded caching mode: l1F bits of each file are the task of the D2D coded caching mode, while the remaining (1−l1)F bits are pre-cached and delivered under the server-broadcast coded caching mode.
Denoting by Tu the transmission time of the D2D coded caching mode, we have:

Tu = l1F·R(mu)/Cu,

where R(mu) is the normalized transmission delay of the D2D coded caching scheme under the cache allocation mu.

Denoting by Ts the transmission time of the server-broadcast coded caching (SMCC) part, we have:

Ts = (1−l1)F·Rs(ms)/Cs,

where Rs(ms) is the normalized delivery rate of the server-broadcast coded caching scheme under the cache allocation ms.

The final total transmission time DT is:

DT = max{Tu, Ts}.

The optimization problem is summarized as: minimize DT = max{Tu, Ts} over l1 and ms, subject to l1 ∈ [0, 1] and ms ∈ [0, m1].
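The structure of the objective can be sketched in Python. The delivery-rate functions Rs and Ru below are caller-supplied placeholders (illustrative assumptions, not the invention's exact expressions); the total time is taken as DT = max{Tu, Ts}, matching step 3.5.

```python
def total_delay(l1, ms, m, F, Cs, Cu, Rs, Ru):
    """DT = max{Tu, Ts} for a given split (l1, ms).

    Rs(ms) and Ru(mu) are placeholder delivery-rate functions for the
    server-broadcast (SMCC) and D2D (D2DCC) modes, respectively.
    """
    mu = [mk - ms for mk in m]            # cache left to the D2D mode per user
    Ts = (1.0 - l1) * F * Rs(ms) / Cs     # SMCC serves (1-l1)F bits of each file
    Tu = l1 * F * Ru(mu) / Cu             # D2DCC serves l1*F bits of each file
    return max(Tu, Ts)
```

With l1 = 0 the whole load falls on the server broadcast and DT reduces to Ts, matching the special case noted in step 1.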
The cache space that each user allocates to the server-broadcast coded caching mode is set to be equal across users. Let m = [m1, …, mK] be the normalized cache sizes of the users, ordered so that m1 ≤ … ≤ mK; then ms ∈ [0, m1]. The cache space vector that each user allocates to the D2D coded caching mode is mu = [m1−ms, …, mK−ms].
Define the cache vector a to describe, in the placement phase, the strategy by which each file is pre-cached at the end users: the component aS of a is the size of the subfile cached at every user of the user set S, with aS ∈ [0, 1] and 1 ≤ |S| ≤ K.

Define the transmission variables vj→T: in the delivery phase, vj→T is the size of the message Xj→T that user j sends to every user of the set T, where T ⊆ [K]\{j} is any set of users and [K]\{j} denotes the set of all users excluding user j; in particular, vj→{i} is the size of the message Xj→{i} sent by user j to user i.

Define the allocation variables: for a subfile stored simultaneously at every user of a set S, whose size is aS, they specify how that subfile is apportioned among the messages sent to the different destination user sets T, the admissible destination sets being those for which the corresponding delivery condition is satisfied.

Given the users' cache sizes m = (m1, …, mK), the total transmission time DT depends on the file allocation parameter l1 and the user cache allocation parameter ms; the model is thus an optimization problem whose variables are l1 and ms and whose objective is the transmission time.
Further, the algorithm of step 3, based on particle swarm optimization and linear programming, is as follows:

Particle swarm optimization abstracts the behavior of a flock of birds into mathematical concepts: each bird in the search space is called a particle, and the distance from a bird to the food is abstracted as the fitness, i.e., the value of the objective function, which measures the quality of a solution. In each iteration, a particle updates itself by tracking the personal best pBest and the global best gBest; the next position and velocity of each particle are updated as:
vi = vi + c1×rand()×(pBesti − xi) + c2×rand()×(gBesti − xi)

xi = xi + vi
where rand() is a random number in (0, 1), and c1 and c2 are learning factors;

Using the particle swarm algorithm, the file allocation parameter l1 and the cache allocation parameter ms are treated as the two-dimensional coordinates of a bird in the flock. At parameter initialization, each particle's position is initialized as a two-dimensional vector whose components represent the optimization variables l1 and ms; at each iteration the position coordinates move in a particular direction, i.e., the file allocation parameter l1 and the cache allocation parameter ms are adjusted. Computing a particle's fitness means evaluating the objective function DT: the transmission time DT obtained after each iteration, computed from the current l1 and ms, is regarded as the bird-to-food distance, i.e., the fitness.
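A minimal Python sketch of the update rule above for a single particle (the swarm bookkeeping around it is omitted):

```python
import random

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, rnd=random.random):
    """One velocity-and-position update for a single particle.

    x, v, pbest are the particle's 2-D position (l1, ms), velocity, and
    personal best; gbest is the swarm's global best.  c1, c2 are the
    learning factors; rnd() supplies the rand() draws in (0, 1).
    """
    new_v = [v[d]
             + c1 * rnd() * (pbest[d] - x[d])
             + c2 * rnd() * (gbest[d] - x[d])
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

A particle sitting exactly at both extrema keeps a zero velocity, as the update equations require.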
Further, the algorithm based on particle swarm optimization and linear programming comprises the following steps:

Step 3.1. Initialize a swarm of particles with random initial positions and velocities, i.e., initialize each particle's file allocation parameter l1, cache allocation parameter ms, and velocity v;

Step 3.2. For each particle, obtain Ts from the transmission scheme of the server-broadcast coded caching mode, in which the server sends coded messages to the users during the delivery phase;

Step 3.3. From the cache allocation parameter ms, obtain each user's cache space for the D2D coded caching mode, mu = [m1−ms, …, mK−ms];

Step 3.4. For each particle, monitor mu and obtain Tu either by direct computation or by linear programming;

Step 3.5. Compute each particle's fitness DT = max{Tu, Ts};

Step 3.6. Evaluate each particle's fitness:

Step 3.6.1. For each particle, compare its fitness with its current pBest; if the new fitness DT is smaller, update pBest with it;

Step 3.6.2. For each particle, compare its fitness with the current gBest; if the new fitness DT is smaller, update gBest with it;

Step 3.7. Update the particles' velocities and positions;

Step 3.8. Check whether the maximum number of iterations has been reached or the required accuracy has been attained; if so, the algorithm terminates and outputs the optimal solution l1 and mu together with the objective value DT; otherwise, return to step 3.2.
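Steps 3.1 to 3.8 above can be sketched as a single routine. The fitness callback stands in for steps 3.2 to 3.5 (computing Ts, splitting the caches, and obtaining Tu directly or by linear programming), whose exact expressions are not reproduced here; the clamping of positions to the feasible box is an implementation choice of this sketch.

```python
import random

def optimize_dt(fitness, m1, n_particles=20, iters=100, c1=2.0, c2=2.0, seed=0):
    """PSO over (l1, ms) minimizing DT = fitness(l1, ms).

    fitness encapsulates steps 3.2-3.5 and returns max{Tu, Ts}; it is
    supplied by the caller because the exact rate expressions are not
    reproduced in this sketch.
    """
    rng = random.Random(seed)
    # Step 3.1: random positions in the feasible box l1 in [0,1], ms in [0,m1]
    xs = [[rng.random(), rng.random() * m1] for _ in range(n_particles)]
    vs = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [fitness(x[0], x[1]) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):  # step 3.7: velocity and position update
                vs[i][d] += (c1 * rng.random() * (pbest[i][d] - xs[i][d])
                             + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            xs[i][0] = min(max(xs[i][0], 0.0), 1.0)   # keep l1 feasible
            xs[i][1] = min(max(xs[i][1], 0.0), m1)    # keep ms feasible
            val = fitness(xs[i][0], xs[i][1])         # steps 3.2-3.5
            if val < pval[i]:                         # step 3.6.1
                pval[i], pbest[i] = val, xs[i][:]
                if val < gval:                        # step 3.6.2
                    gval, gbest = val, xs[i][:]
    return gbest, gval
```

Running it on a toy convex fitness confirms that the swarm stays inside the feasible box and improves the objective over the iterations.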
The performance optimization scheme of the present invention for the parallel-transmission coded caching problem has the following advantages:

Compared with operating the server-broadcast coded caching mode (SMCC) and the D2D coded caching mode (D2DCC) independently, the present invention reduces the transmission time. It further proposes an optimization algorithm based on particle swarm optimization and linear programming, which effectively reduces the number of optimization iterations, obtains the optimal transmission time, and improves the performance of the algorithm.
Brief Description of the Drawings

Fig. 1 is a schematic diagram of the parallel-transmission coded caching problem model of the present invention;

Fig. 2 is a flow chart of the coded caching optimization algorithm of the present invention based on particle swarm optimization and linear programming.
Detailed Description of the Embodiments

To better understand the purpose, structure and function of the present invention, the performance optimization scheme for parallel-transmission coded caching of the present invention is described in further detail below with reference to the accompanying drawings.

As shown in Fig. 2, the present invention comprises the following steps:
Step 1. As shown in Fig. 1, a server is connected to K users through an error-free broadcast channel, and at the same time the users form a D2D communication network among themselves. The server stores N files W1, …, WN, each of F bits. Every user has a cache memory, and the cache sizes differ across users. For the coded caching problem with a given number of users K, number of database files N, server broadcast channel capacity Cs, D2D network channel capacity Cu, and per-user caches of different sizes, implementations of the two standalone coded caching modes, the server-broadcast coded caching mode (SMCC) and the D2D coded caching mode (D2DCC), are known. First define the file allocation parameter l1 ∈ [0, 1], describing how each file is divided between the two delivery modes, and then the user cache allocation parameter ms, describing how each user's cache space is divided between the server-broadcast coded caching mode (SMCC) and the D2D coded caching mode (D2DCC).
Step 2. Based on the standalone server-broadcast coded caching mode (SMCC) and D2D coded caching mode (D2DCC), and using the defined l1 and ms, construct the parallel-transmission coded caching scheme. First, in the placement phase, l1F bits of each file are cached into the users' memories according to the placement strategy of the D2DCC scheme, and the remaining (1−l1)F bits according to the placement strategy of the SMCC scheme. The ms portion of each user's cache stores the file content obtained from the database under SMCC, while each user's remaining cache space stores the content to be pre-stored under D2DCC. In the delivery phase, while the server sends messages to the users over the broadcast channel (SMCC), the users can also send messages to one another. The problem of running SMCC and D2DCC in parallel can therefore be modeled as an optimization problem in the parameters l1 and ms whose objective is to minimize the transmission time.
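The placement split described in step 2 can be sketched as follows; byte-level rounding and the names below are illustrative choices, not part of the invention.

```python
def split_file(file_bytes, l1):
    """Split one file: a fraction l1 goes to the D2DCC placement, the
    remaining 1 - l1 to the SMCC placement."""
    cut = round(len(file_bytes) * l1)
    return file_bytes[:cut], file_bytes[cut:]   # (D2DCC part, SMCC part)

def split_caches(m, ms):
    """Partition each user's normalized cache: ms for SMCC, the rest for D2DCC."""
    smcc = [ms] * len(m)
    d2dcc = [mk - ms for mk in m]
    return smcc, d2dcc
```

For example, with l1 = 0.3 a 10-byte file contributes 3 bytes to the D2DCC task and 7 bytes to the SMCC task.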
The resulting optimization problem in the file allocation parameter l1 and the user cache allocation parameter ms is described as follows. The parallel transmission mode is: the server transmits information to the users while the users simultaneously transmit information to one another; the two parts are called the server-broadcast coded caching mode (SMCC) and the D2D coded caching mode (D2DCC), respectively. Each file on the server is split into two parts in a fixed ratio, and l1 denotes the fraction of each file whose delivery is the responsibility of the D2DCC mode: l1F bits of each file are the task of the D2DCC mode, while the remaining (1−l1)F bits are pre-cached and delivered under the SMCC mode.
Denoting by Tu the transmission time of the D2DCC part, we have:

Tu = l1F·R(mu)/Cu,

where R(mu) is the normalized transmission delay of the D2D coded caching scheme under the cache allocation mu.

Denoting by Ts the transmission time of the SMCC part, we have:

Ts = (1−l1)F·Rs(ms)/Cs,

where Rs(ms) is the normalized delivery rate of the server-broadcast coded caching scheme under the cache allocation ms.

The final total transmission time DT is:

DT = max{Tu, Ts}.

The optimization problem is: minimize DT = max{Tu, Ts} over l1 and ms, subject to l1 ∈ [0, 1] and ms ∈ [0, m1].

Here l1 ∈ [0, 1] denotes the fraction of each file whose delivery is handled by the D2DCC mode; l1 describes how each file is divided between the two delivery modes in the parallel-transmission coded caching network.
Define ms as the cache space each user reserves for the SMCC communication mode when the SMCC strategy is used; the cache space each user allocates to the SMCC strategy is set to be equal across users. Given the normalized per-user cache sizes m = [m1, …, mK], assume without loss of generality that m1 ≤ … ≤ mK; then ms ∈ [0, m1]. Correspondingly, the cache space vector each user allocates to the D2DCC strategy is mu = [m1−ms, …, mK−ms].

Define the cache vector a to describe, in the placement phase, the strategy by which each file is pre-cached at the end users: the component aS of a is the size of the subfile cached at every user of the user set S, with aS ∈ [0, 1] and 1 ≤ |S| ≤ K.

Define the transmission variables vj→T: in the delivery phase, vj→T is the size of the message Xj→T that user j sends to every user of the set T, where T ⊆ [K]\{j} is any set of users and [K]\{j} denotes the set of all users excluding user j; in particular, vj→{i} is the size of the message Xj→{i} sent by user j to user i.

Define the allocation variables: for a subfile stored simultaneously at every user of a set S, whose size is aS, they specify how that subfile is apportioned among the messages sent to the different destination user sets T, the admissible destination sets being those for which the corresponding delivery condition is satisfied.

Given the users' cache sizes m = (m1, …, mK), DT depends on l1 and ms; the model is thus an optimization problem whose variables are the file allocation parameter l1 and the user cache allocation parameter ms and whose objective is the transmission time.
Step 3. To solve this optimization problem, an algorithm based on particle swarm optimization and linear programming is used. The file allocation parameter l1 and the cache allocation parameter ms are treated as the two-dimensional coordinates of a bird in the flock; several particle positions are initialized, and each particle's initial fitness value, i.e., the transmission time DT obtained from its l1 and ms, is computed. The idea of the particle swarm algorithm is to gradually find the particle with the best fitness value through the repeated position updates of many particles.

At each iteration of the particle swarm algorithm, the position-update rule yields each particle's current l1 and ms, from which the size of the file portion to be delivered by the D2D network and the user cache space available to the D2D delivery mode are obtained.

If the per-user cache sizes available to the D2DCC mode do not satisfy the specific conditions given below, the linear program of the D2D coded caching problem is solved, yielding the transmission time Tu of the current D2D delivery part in this iteration and, from it, the current fitness DT. The personal best and global best are then updated according to the extremum-update rules, and the next iteration begins. After multiple iterations of the particle swarm algorithm, the transmission time DT is optimized.
The algorithm based on particle swarm optimization and linear programming is as follows:

Particle swarm optimization (PSO) abstracts the behavior of a flock of birds into mathematical concepts: each bird in the search space is called a particle, and the distance from a bird to the food is abstracted as the fitness, i.e., the value of the objective function, which measures the quality of a solution. In each iteration, a particle updates itself by tracking the personal best pBest and the global best gBest; the next position and velocity of each particle are updated as:
vi = vi + c1×rand()×(pBesti − xi) + c2×rand()×(gBesti − xi)

xi = xi + vi
where rand() is a random number in (0, 1), and c1 and c2 are learning factors.

Using PSO, the file allocation parameter l1 and the cache allocation parameter ms are treated as the two-dimensional coordinates of a bird in the flock. At parameter initialization, each particle's position is initialized as a two-dimensional vector whose components represent the optimization variables l1 and ms; at each iteration the position coordinates move in a particular direction, i.e., the variables l1 and ms are adjusted. Computing a particle's fitness means evaluating the objective function DT: the transmission time DT obtained after each iteration, computed from the current l1 and ms, is regarded as the bird-to-food distance, i.e., the fitness.
When PSO is applied to optimize the transmission time of the parallel-transmission coded caching problem, each iteration yields the current l1 and ms, from which the size of the file portion delivered by the D2D network and the usable user cache space are obtained; the linear program of the D2D coded caching problem is then solved, yielding the transmission time Tu of the current D2D delivery part and, from it, the current fitness DT.

At the same time, for the D2D coded caching part, given K, N and m with m1 ≤ … ≤ mK, the transmission delay need not be obtained by linear programming when the users' cache sizes satisfy specific conditions; in particular:
(1) When the users' cache sizes fall in the first regime, the transmission delay R(m) is given by the corresponding closed-form expression;

(2) When the users' cache sizes fall in the second regime, the transmission delay R(m) is given by the corresponding closed-form expression, where l ∈ Z, l ∈ [K−2] is chosen so that the associated condition holds;

(3) When the users' cache sizes fall in the third regime, the transmission delay R(m) is:

R(m) = 1 − m1.
Therefore, at the start of every iteration of the particle swarm algorithm, a monitoring step is added to determine whether the D2D delivery part determined by the current parameter ms requires linear programming to obtain Tu.
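The monitoring step can be sketched as a simple dispatch. The regime test and the closed-form rate are passed in as callbacks because their threshold expressions are not reproduced in the text (only case (3), R(m) = 1 − m1, survives above), and the linear-programming solver is likewise a placeholder.

```python
def d2d_transmission_time(mu, l1, F, Cu, closed_form_rate, lp_rate):
    """Obtain Tu for the D2DCC part, per the monitoring step.

    closed_form_rate(mu) returns the delay R(mu) when the cache vector
    falls in one of the lower-bound-achieving regimes, and None otherwise;
    lp_rate(mu) is the linear-programming fallback.  Both are placeholders.
    """
    rate = closed_form_rate(mu)
    if rate is None:               # regime conditions not met: solve the LP
        rate = lp_rate(mu)
    return l1 * F * rate / Cu
```

Plugging in case (3) as the closed form, R(mu) = 1 − min(mu), illustrates the fast path; returning None from the closed-form callback exercises the LP fallback instead.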
The algorithm based on particle swarm optimization and linear programming comprises the following steps:

Step 3.1. Initialize a swarm of particles with random initial positions and velocities, i.e., initialize each particle's l1, ms and v;

Step 3.2. For each particle, obtain Ts from the transmission scheme of the server-broadcast coded caching mode (SMCC), in which the server sends coded messages to the users during the delivery phase;

Step 3.3. From ms, obtain each user's cache space for the D2DCC mode, mu = [m1−ms, …, mK−ms];

Step 3.4. For each particle, monitor mu and obtain Tu either by direct computation or by linear programming;

Step 3.5. Compute each particle's fitness DT = max{Tu, Ts};

Step 3.6. Evaluate each particle's fitness:

Step 3.6.1. For each particle, compare its fitness with its current pBest; if the new fitness DT is smaller, update pBest with it;

Step 3.6.2. For each particle, compare its fitness with the current gBest; if the new fitness DT is smaller, update gBest with it;

Step 3.7. Update the particles' velocities and positions;

Step 3.8. Check whether the maximum number of iterations has been reached or the required accuracy has been attained; if so, the algorithm terminates and outputs the optimal solution l1 and mu together with the objective value DT; otherwise, return to step 3.2.
It will be understood that the present invention has been described through some embodiments, and that those skilled in the art may make various changes or equivalent substitutions to these features and embodiments without departing from the spirit and scope of the present invention. In addition, the features and embodiments may be modified to adapt a particular situation or material to the teachings of the invention without departing from its spirit and scope. Therefore, the present invention is not limited to the specific embodiments disclosed herein; all embodiments falling within the scope of the claims of the present application are within the scope protected by the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210384838.XA | 2022-04-13 | 2022-04-13 | A performance optimization scheme for parallel transmission encoding cache |

| Publication Number | Publication Date |
|---|---|
| CN114760598A | 2022-07-15 |
| CN114760598B (granted) | 2024-01-30 |