



Technical Field
The present invention relates to the field of service scheduling, and in particular to an edge-cloud collaborative microservice scheduling method, apparatus and device.
Background
With the rapid development of the Internet of Things and 5G networks, a wide variety of edge applications (such as the Internet of Vehicles and augmented/virtual reality) have emerged, and these emerging edge applications place extremely strict requirements on latency. However, most traditional software systems adopt a monolithic architecture: all functional modules of an application service program are packaged into a single jar file and deployed as a whole on one cloud server, which provides computing services to mobile users under a centralized data management model. If the massive edge data generated by these clients were all uploaded to the cloud server for processing, it would inevitably occupy a large number of network connections and cause severe network congestion. In addition, because the cloud server is usually located far from the data source, a high transmission delay is unavoidable. High latency degrades the user's service experience and may even prevent some latency-sensitive applications from working properly. The emergence of edge computing offers an effective way to solve this problem: in an edge environment, services can be migrated to edge servers closer to the data source for execution, which greatly reduces service execution latency.
However, past research on edge computing service scheduling has usually adopted a coarse-grained service scheduling method. Such a coarse-grained method treats the entire software system as the unit of scheduling: instead of partitioning the system according to its actual business functions, it schedules the whole application service directly onto a particular edge server. Because the computing resources of an edge server are limited, deploying all services on it increases its computing load, easily causes server overload, and degrades the quality of service.
In view of this, the present application is proposed.
Summary of the Invention
The present invention discloses an edge-cloud collaborative microservice scheduling method, apparatus and device, which aim to satisfy users' low-latency requirements while avoiding edge server overload.
A first embodiment of the present invention provides an edge-cloud collaborative microservice scheduling method, comprising:
S1: according to the actual business, decompose a complex application into different service modules and build a simplified model of each service module;
S2: build a candidate service instance set, and initialize the number of virtual machines with which the edge server cluster provides services according to the total number of particles in the simplified model;
S3: predefine the particle fitness model, the constraint conditions, the search range of the server cluster and the particle migration strategy, and initialize each factor in the fitness model;
S4: calculate the fitness value of each particle with the particle fitness model, update the individual optimal solution and the swarm optimal solution of the particles according to their fitness values, and update the velocity and position of each particle at the next moment according to the individual optimal solution and the swarm optimal solution;
S5: obtain the total delay of the current particle and judge whether it is less than or equal to the service deadline delay constraint; if so, go to step S6; if not, re-initialize the position of the current particle and return to step S4;
S6: when it is judged that the distance between the particle's current position and the global optimal particle is smaller than a preset position factor and the updated velocity is smaller than a preset velocity factor, keep the updated velocity and position of the particle, and judge whether the current number of iterations is less than or equal to the maximum number of iterations; if so, return to step S4; if not, output the result.
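For illustration only, the following Python sketch shows how steps S4 to S6 can fit together in a single loop; the particle interface (update_personal_best, update_velocity_and_position), the callables fitness, total_delay and reinit_position, and all parameter names are assumptions made for the sketch and are not part of the claimed method.

```python
def schedule(particles, fitness, total_delay, t_ddl, max_iter, reinit_position):
    """Illustrative outer loop of steps S4 to S6; every name here is an assumption."""
    gbest = None
    for _ in range(max_iter):                      # S6: stop after the iteration limit
        for p in particles:
            p.update_personal_best(fitness(p))     # S4: refresh pbest from the fitness value
        gbest = min(particles, key=fitness)        # S4: refresh the swarm best
        for p in particles:
            p.update_velocity_and_position(gbest)  # S4: apply Eq. (1) and Eq. (2)
            if total_delay(p) > t_ddl:             # S5: deadline constraint violated
                p.position = reinit_position()     # discard the placement and retry
    return gbest                                   # output the scheduling strategy
```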
Preferably, the factors of the fitness model include an inertia factor, an individual learning factor, a swarm learning factor, an individual influence factor, a swarm influence factor, an individual optimal position, a swarm optimal position, a particle position, a particle velocity, a service deadline delay constraint, and a maximum number of iterations.
Preferably, the particle fitness model is:
where Fitness(n,f) is the particle fitness function, T_total(n,f) is the total system delay, AU(s) is the average resource utilization of the edge server cluster, s = {1, 2, 3, ..., Q} is the search range of the edge server cluster, T_ddl is the service deadline delay constraint, f is the index of the service module, n is the index of the particle within that service module, and α and β are the weight factors of T_total(n,f) and AU(s), respectively.
Preferably, updating the individual optimal solution and the swarm optimal solution of the particles according to their fitness values is specifically as follows:
the swarm optimal solution gbest_t(n,f) is taken as the average of the swarm optimal delay value gbest_t(n,f)^T and the swarm optimal resource utilization value gbest_t(n,f)^AU, namely:
gbest_t(n,f) = Average(gbest_t(n,f)^T, gbest_t(n,f)^AU)
the individual optimal solution of a particle is selected according to the results of the distance factors D_p and D_g, calculated as follows:
D_p = Dis(pbest_t(n,f)^T, pbest_t(n,f)^AU) = |pbest_t(n,f)^T - pbest_t(n,f)^AU|
D_g = Dis(gbest_t(n,f)^T, gbest_t(n,f)^AU) = |gbest_t(n,f)^T - gbest_t(n,f)^AU|
where D_p denotes the degree of difference between the individual optimization objectives, D_g denotes the degree of difference between the swarm optimization objectives, pbest_t(n,f)^T is the position of the particle at its individual optimal delay value, pbest_t(n,f)^AU is the position of the particle at its individual optimal resource utilization value, and Dis denotes the straight-line distance between two positions;
when D_p and D_g satisfy the first comparison condition, the pbest_t(n,f) for the new iteration is selected at random from pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = RandSelect{pbest_t(n,f)^T, pbest_t(n,f)^AU};
when D_p and D_g satisfy the opposite condition, the pbest_t(n,f) for the new iteration is taken as the average of pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = Average{pbest_t(n,f)^T, pbest_t(n,f)^AU}.
Preferably, updating the velocity and position of the particle at the next moment according to the individual optimal solution and the swarm optimal solution is specifically as follows:
according to the updated pbest_t(n,f) and gbest_t(n,f), the velocity and position of the particle at the next moment are updated by the following formulas:
V_{t+1}(n,f) = θ·V_t(n,f) + k1·r1·(pbest_t(n,f) - L_t(n,f)) + k2·r2·(gbest_t(n,f) - L_t(n,f))   (1)
L_{t+1}(n,f) = L_t(n,f) + V_{t+1}(n,f)   (2)
where V_{t+1}(n,f) is the velocity of the particle at the next moment, θ is the inertia factor, V_t(n,f) is the velocity of the particle at the current moment, k1 is the individual random influence factor, k2 is the swarm random influence factor, r1 is the individual constant learning factor, r2 is the swarm constant learning factor, L_t(n,f) is the current position of the particle, and L_{t+1}(n,f) is the position of the particle at the next moment.
A second embodiment of the present invention provides an edge-cloud collaborative microservice scheduling apparatus, comprising:
a simplified model building module, configured to decompose a complex application into different service modules according to the actual business and build a simplified model of each service module;
a first initialization module, configured to build a candidate service instance set and initialize the number of virtual machines with which the edge server cluster provides services according to the total number of particles in the simplified model;
a second initialization module, configured to predefine the particle fitness model, the constraint conditions, the search range of the server cluster and the particle migration strategy, and to initialize each factor in the fitness model;
an update module, configured to calculate the fitness value of each particle with the particle fitness model, update the individual optimal solution and the swarm optimal solution of the particles according to their fitness values, and update the velocity and position of each particle at the next moment according to the individual optimal solution and the swarm optimal solution;
a total delay obtaining module, configured to obtain the total delay of the current particle and judge whether it is less than or equal to the service deadline delay constraint; if so, proceed to step S6; if not, re-initialize the position of the current particle and notify the update module;
a judgment module, configured to, when it is judged that the distance between the particle's current position and the global optimal particle is smaller than a preset position factor and the updated velocity is smaller than a preset velocity factor, keep the updated velocity and position of the particle, and judge whether the current number of iterations is less than or equal to the maximum number of iterations; if so, notify the update module; if not, output the result.
Preferably, the particle fitness model is:
where Fitness(n,f) is the particle fitness function, T_total(n,f) is the total system delay, AU(s) is the average resource utilization of the edge server cluster, s = {1, 2, 3, ..., Q} is the search range of the edge server cluster, T_ddl is the service deadline delay constraint, f is the index of the service module, n is the index of the particle within that service module, and α and β are the weight factors of T_total(n,f) and AU(s), respectively.
Preferably, the update module is specifically configured to:
take the swarm optimal solution gbest_t(n,f) as the average of the swarm optimal delay value gbest_t(n,f)^T and the swarm optimal resource utilization value gbest_t(n,f)^AU, namely:
gbest_t(n,f) = Average(gbest_t(n,f)^T, gbest_t(n,f)^AU)
and select the individual optimal solution of a particle according to the results of the distance factors D_p and D_g, calculated as follows:
D_p = Dis(pbest_t(n,f)^T, pbest_t(n,f)^AU) = |pbest_t(n,f)^T - pbest_t(n,f)^AU|
D_g = Dis(gbest_t(n,f)^T, gbest_t(n,f)^AU) = |gbest_t(n,f)^T - gbest_t(n,f)^AU|
where D_p denotes the degree of difference between the individual optimization objectives, D_g denotes the degree of difference between the swarm optimization objectives, pbest_t(n,f)^T is the position of the particle at its individual optimal delay value, pbest_t(n,f)^AU is the position of the particle at its individual optimal resource utilization value, and Dis denotes the straight-line distance between two positions;
when D_p and D_g satisfy the first comparison condition, the pbest_t(n,f) for the new iteration is selected at random from pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = RandSelect{pbest_t(n,f)^T, pbest_t(n,f)^AU};
when D_p and D_g satisfy the opposite condition, the pbest_t(n,f) for the new iteration is taken as the average of pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = Average{pbest_t(n,f)^T, pbest_t(n,f)^AU}.
Preferably, the update module is specifically configured to:
update the velocity and position of the particle at the next moment according to the updated pbest_t(n,f) and gbest_t(n,f) by the following formulas:
V_{t+1}(n,f) = θ·V_t(n,f) + k1·r1·(pbest_t(n,f) - L_t(n,f)) + k2·r2·(gbest_t(n,f) - L_t(n,f))   (1)
L_{t+1}(n,f) = L_t(n,f) + V_{t+1}(n,f)   (2)
where V_{t+1}(n,f) is the velocity of the particle at the next moment, θ is the inertia factor, V_t(n,f) is the velocity of the particle at the current moment, k1 is the individual random influence factor, k2 is the swarm random influence factor, r1 is the individual constant learning factor, r2 is the swarm constant learning factor, L_t(n,f) is the current position of the particle, and L_{t+1}(n,f) is the position of the particle at the next moment.
A third embodiment of the present invention provides an edge-cloud collaborative microservice scheduling device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the edge-cloud collaborative microservice scheduling method according to any one of the above.
With the edge-cloud collaborative microservice scheduling method, apparatus and device provided by the present invention, an application service program is decoupled into multiple microservices, and, based on an improved particle swarm optimization algorithm that takes the service completion time and the resource utilization of the edge servers as optimization objectives, a microservice scheduling strategy achieving multi-objective optimization is finally obtained through continuous iteration. On the one hand, the scheduling brings service execution as close to the data source as possible, satisfying users' low-latency requirements; on the other hand, scheduling the microservices of some non-latency-sensitive applications onto the cloud server effectively relieves the computing pressure on the edge servers and avoids the problem of edge server overload that would degrade the quality of service.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an edge-cloud collaborative microservice scheduling method provided by the first embodiment of the present invention;
FIG. 2 is a weighted directed acyclic graph model of a service module provided by the present invention;
FIG. 3 is a schematic structural diagram of the scheduling environment provided by the present invention;
FIG. 4 is a schematic diagram of the modules of an edge-cloud collaborative microservice scheduling apparatus provided by the second embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
For a better understanding of the technical solutions of the present invention, the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The terms used in the embodiments of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention. The singular forms "a", "said" and "the" used in the embodiments of the present invention and the appended claims are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist at the same time, or that B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
Depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when", "in response to determining" or "in response to detecting". Similarly, depending on the context, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
The terms "first" and "second" mentioned in the embodiments are only used to distinguish similar objects and do not imply a specific ordering of those objects; it should be understood that, where permitted, "first" and "second" may be interchanged in a specific order or sequence, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The present invention discloses an edge-cloud collaborative microservice scheduling method, apparatus and device, which aim to satisfy users' low-latency requirements while avoiding edge server overload.
A first embodiment of the present invention provides an edge-cloud collaborative microservice scheduling method, which may be executed by an edge-cloud collaborative microservice scheduling device (hereinafter referred to as the scheduling device) and, in particular, by one or more processors within the scheduling device, to implement the following steps:
S1: according to the actual business, decompose a complex application into different service modules and build a simplified model of each service module;
In this embodiment, the scheduling device may be a server located in the cloud or a server located at the edge, and the scheduling device may store data used for microservice scheduling so as to achieve optimized resource allocation.
In this embodiment, starting from the actual business, a complex application is decomposed into different service modules, and each service module can further be decomposed into several fine-grained functional components, each with a single responsibility (i.e., microservices). The decomposed microservices have clear context boundaries and can be managed independently; each microservice is highly autonomous internally, and services exchange information with one another through APIs. A service module can thus be represented as a number of fine-grained microservice nodes combined into different topologies. In this embodiment, a weighted directed acyclic graph is used to represent the simplified scheduling model of a service module. As shown in FIG. 2, each service module consists of many loosely coupled microservices; the nodes in FIG. 2 represent microservices that can be scheduled to other edge servers or to the cloud, and the node indices are denoted by n = {n ∈ 1, 2, ..., N}. The edges in the figure represent data interactions between microservices, such as function calls and data transfers, and the weight of an edge in FIG. 2 represents the number of instructions transferred when that node communicates with the next node.
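For readability, such a weighted directed acyclic graph can be held in an ordinary adjacency structure, as in the Python sketch below; the node names and edge weights are made-up examples, not values from the figure.

```python
# Hypothetical service module: nodes are microservices, edge weights are the
# number of instructions transferred between two communicating microservices.
service_module = {
    "nodes": ["ms1", "ms2", "ms3", "ms4"],
    "edges": {                      # (source, target) -> transferred instructions
        ("ms1", "ms2"): 1200,
        ("ms1", "ms3"): 800,
        ("ms2", "ms4"): 500,
        ("ms3", "ms4"): 900,
    },
}

def successors(node, module=service_module):
    """Microservices that directly depend on `node`'s output."""
    return [dst for (src, dst) in module["edges"] if src == node]

print(successors("ms1"))  # ['ms2', 'ms3']
```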
S2: build a candidate service instance set, and initialize the number of virtual machines with which the edge server cluster provides services according to the total number of particles in the simplified model;
In this embodiment, based on the idea of edge-cloud collaboration, a candidate service instance set is built that includes one cloud data center and multiple edge nodes (i.e., edge servers) able to communicate with the cloud data center, as shown in FIG. 3. The weighted directed acyclic graph model of the service module is obtained and defined as G = (V, E), where the set V = {v_{1,f}, v_{2,f}, ..., v_{n,f}} denotes the set of all microservices on the f-th service module, and the set E denotes the data dependencies between the microservices on the f-th service module; for example, the communication between microservices i and j is denoted by E = {e(v_{i,f}, v_{j,f}) | v_{i,f}, v_{j,f} ∈ V}. Each microservice node is regarded as a particle and the total number of microservices is counted; the number of virtual machines with which the edge server cluster provides services is then initialized according to this total;
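Continuing the example above, a minimal sketch of step S2 is given below; the one-VM-per-particle ratio is an illustrative assumption, since the exact mapping from particle count to VM count is not specified in the text.

```python
def init_candidate_instances(service_modules, vms_per_particle=1):
    """Step S2 sketch: count the microservice nodes (particles) across all
    service modules and size the edge-cluster VM pool from that total."""
    total_particles = sum(len(m["nodes"]) for m in service_modules)
    vm_count = total_particles * vms_per_particle
    return total_particles, vm_count

total_particles, vm_count = init_candidate_instances([service_module])
print(total_particles, vm_count)  # 4 4
```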
S3: predefine the particle fitness model, the constraint conditions, the search range of the server cluster and the particle migration strategy, and initialize each factor in the fitness model. The factors of the fitness model may include the inertia factor θ, the individual influence factor k1, the swarm influence factor k2, the individual learning factor r1, the swarm learning factor r2, the individual optimal position pbest_t(n,f), the swarm optimal position gbest_t(n,f), the current particle position L_t(n,f), the particle velocity V_t(n,f), the service deadline delay constraint T_ddl, and the maximum number of iterations I_max, but the factors are not limited thereto.
The selection ranges of the inertia factor θ, the individual random influence factor k1, the swarm random influence factor k2, the individual constant learning factor r1 and the swarm constant learning factor r2 must each satisfy predefined range requirements.
In this embodiment, the particle fitness model may be:
where Fitness(n,f) is the particle fitness function, T_total(n,f) is the total system delay, AU(s) is the average resource utilization of the edge server cluster, s = {1, 2, 3, ..., Q} is the search range of the edge server cluster, T_ddl is the service deadline delay constraint, f is the index of the service module, n is the index of the particle within that service module, and α and β are the weight factors of T_total(n,f) and AU(s), respectively. α and β can each take any constant value in the interval (0, 1) and are used to adjust the relative proportions of delay and resource utilization according to the optimization requirements. Minimum delay and maximum resource utilization are the goals being pursued; therefore, the most suitable scheduling strategy is the one that minimizes the fitness function Fitness(n,f).
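The exact expression for Fitness(n,f) is not reproduced in the text above; the sketch below therefore assumes a simple weighted form in which a smaller total delay and a higher average utilization both lower the fitness value, which is consistent with the stated goal of minimizing Fitness(n,f).

```python
def fitness(total_delay, avg_utilization, alpha=0.5, beta=0.5):
    """Assumed weighted-sum stand-in for Fitness(n,f): lower delay and higher
    average edge-server utilization both reduce the returned value.
    alpha and beta play the role of the weight factors described above."""
    return alpha * total_delay + beta * (1.0 - avg_utilization)

# Example: a placement with 20 ms total delay and 70% average utilization.
print(fitness(total_delay=20.0, avg_utilization=0.7))  # 10.15
```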
The derivation of AU(s) is as follows:
The resource utilization μ_q(s) of each edge server is obtained as the quotient of the number of virtual machines currently providing services, VM_serve, and the total number of virtual machines, VM_total. The expression for the resource utilization μ_q(s) of each edge server is:
where U is the total number of virtual machines, f is the index of the service module and satisfies 1 ≤ f ≤ F, n indexes the microservices scheduled to this edge server and satisfies 1 ≤ n ≤ |V_m|, v_{n,f} denotes the n-th microservice on the service module with index f, and λ_q(s) indicates whether any microservice has been migrated to the edge server with index q: if so, λ_q(s) takes the value 1; if not, λ_q(s) takes the value 0, expressed as:
λ_q(s) = 1 indicates that the microservice v_{n,f} has been migrated to the edge server with index q; otherwise, λ_q(s) = 0.
Before the average utilization of all the edge servers is calculated, the number of edge servers on which microservices have been deployed needs to be calculated:
The average utilization AU(s) of all the edge servers on which microservices have been deployed is then:
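As a small sketch of the bookkeeping described in this derivation, the per-server ratio μ_q is taken as serving VMs over total VMs and AU(s) averages μ_q over only those servers that host at least one microservice (λ_q = 1); the three-server cluster below is a made-up example.

```python
def average_utilization(serving_vms, total_vms, hosts_microservice):
    """AU(s): average of mu_q = serving VMs / total VMs over the edge servers
    that actually host at least one microservice (lambda_q(s) = 1)."""
    used = [q for q, hosts in hosts_microservice.items() if hosts]
    if not used:
        return 0.0
    return sum(serving_vms[q] / total_vms[q] for q in used) / len(used)

# Hypothetical cluster of three edge servers; server 2 hosts no microservice.
print(average_utilization(
    serving_vms={0: 3, 1: 1, 2: 0},
    total_vms={0: 4, 1: 4, 2: 4},
    hosts_microservice={0: True, 1: True, 2: False},
))  # (0.75 + 0.25) / 2 = 0.5
```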
As each iteration proceeds, the particle positions (i.e., the microservice migration strategy) are continuously updated, and the system delay and the resource utilization of the edge servers change accordingly. The fitness function jointly considers the two indicators of service completion time and resource utilization of the edge server cluster; through multiple iterations, an optimal scheduling strategy can finally be found that satisfies the service deadline delay constraint while minimizing the fitness function Fitness(n,f).
S4: calculate the fitness value of each particle with the particle fitness model, update the individual optimal solution and the swarm optimal solution of the particles according to their fitness values, and update the velocity and position of each particle at the next moment according to the individual optimal solution and the swarm optimal solution;
It should be noted that each particle continuously updates its own velocity and position through formulas (1) and (2), respectively, in search of the optimal position:
V_{t+1}(n,f) = θ·V_t(n,f) + k1·r1·(pbest_t(n,f) - L_t(n,f)) + k2·r2·(gbest_t(n,f) - L_t(n,f))   (1)
L_{t+1}(n,f) = L_t(n,f) + V_{t+1}(n,f)   (2)
Formula (1) consists of three parts. The first part is the particle's current velocity, which reflects the particle's current state: the larger the inertia factor θ, the greater the influence of the current velocity on the position at the next moment. The second part is the particle's self-cognition part, which reflects the particle's ability to learn from its own experience: the larger the individual learning factor r1, the stronger the particle's global search ability. The third part is the information-sharing part among particles, which reflects the particle's ability to learn from the best individuals in the swarm: the larger the swarm learning factor r2, the faster the particle converges.
It should be noted that, for each particle, the fitness value of its current position is compared with the best position it has experienced; if the current position is better, it is taken as the current best position and the individual optimal solution is updated. Specifically, in this embodiment: the particle's current delay fitness value is compared with its historical optimal delay fitness value, and if the current delay fitness value is smaller than the historical optimal delay fitness value, the individual optimal delay value is updated; the particle's current resource utilization value is compared with its historical optimal resource utilization value, and if the current resource utilization value is larger than the historical optimal resource utilization value, the individual optimal resource utilization value is updated; the individual optimal solution and the swarm optimal solution of the particle are then updated according to the individual optimal delay value and the individual optimal resource utilization value;
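A sketch of the per-objective bookkeeping just described is given below (a smaller delay fitness value replaces the historical best delay, a larger utilization replaces the historical best utilization); the dictionary fields are illustrative assumptions.

```python
def update_personal_bests(particle):
    """Keep, for each of the two objectives, the best value seen so far and the
    position at which it was reached (pbest_T for delay, pbest_AU for utilization)."""
    if particle["delay"] < particle["best_delay"]:               # smaller delay is better
        particle["best_delay"] = particle["delay"]
        particle["pbest_T"] = particle["position"]
    if particle["utilization"] > particle["best_utilization"]:   # higher utilization is better
        particle["best_utilization"] = particle["utilization"]
        particle["pbest_AU"] = particle["position"]

p = {"position": 2.0, "delay": 15.0, "utilization": 0.8,
     "best_delay": 18.0, "best_utilization": 0.6,
     "pbest_T": 1.0, "pbest_AU": 3.0}
update_personal_bests(p)
print(p["pbest_T"], p["pbest_AU"])  # 2.0 2.0
```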
In this embodiment, updating the individual optimal solution and the swarm optimal solution of the particles according to their fitness values is specifically as follows:
For the two-objective optimization, the swarm optimal fitness value gbest_t(n,f) has two candidates: the swarm optimal delay value gbest_t(n,f)^T and the swarm optimal resource utilization value gbest_t(n,f)^AU. Therefore, the swarm optimal solution gbest_t(n,f) is taken as the average of the swarm optimal delay value gbest_t(n,f)^T and the swarm optimal resource utilization value gbest_t(n,f)^AU, namely:
gbest_t(n,f) = Average(gbest_t(n,f)^T, gbest_t(n,f)^AU)
Similarly, for a single particle, the individual optimal fitness value pbest_t(n,f) also has two candidates: the individual optimal delay value pbest_t(n,f)^T and the individual optimal resource utilization value pbest_t(n,f)^AU. However, the tension between the individual optimal delay value and the individual optimal cost value is greater than it is for the particle swarm. Therefore, the individual optimal solution of a particle is selected according to the results of the distance factors D_p and D_g, calculated as follows:
D_p = Dis(pbest_t(n,f)^T, pbest_t(n,f)^AU) = |pbest_t(n,f)^T - pbest_t(n,f)^AU|
D_g = Dis(gbest_t(n,f)^T, gbest_t(n,f)^AU) = |gbest_t(n,f)^T - gbest_t(n,f)^AU|
where D_p denotes the degree of difference between the individual optimization objectives, D_g denotes the degree of difference between the swarm optimization objectives, pbest_t(n,f)^T is the position of the particle at its individual optimal delay value, pbest_t(n,f)^AU is the position of the particle at its individual optimal resource utilization value, and Dis denotes the straight-line distance between two positions;
when D_p and D_g satisfy the first comparison condition, the pbest_t(n,f) for the new iteration is selected at random from pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = RandSelect{pbest_t(n,f)^T, pbest_t(n,f)^AU};
when D_p and D_g satisfy the opposite condition, the pbest_t(n,f) for the new iteration is taken as the average of pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = Average{pbest_t(n,f)^T, pbest_t(n,f)^AU}.
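The following sketch shows one way the two per-objective bests can be combined into the single pbest and gbest used by equations (1) and (2). The comparison between D_p and D_g that decides between random selection and averaging is not reproduced in the text above, so the direction used below (random choice when D_p >= D_g) is an illustrative assumption.

```python
import random

def select_bests(pbest_T, pbest_AU, gbest_T, gbest_AU):
    """gbest is always the average of the two swarm bests; pbest is chosen by
    comparing the distance factors D_p and D_g (comparison direction assumed)."""
    gbest = (gbest_T + gbest_AU) / 2.0
    d_p = abs(pbest_T - pbest_AU)    # disagreement between the individual objectives
    d_g = abs(gbest_T - gbest_AU)    # disagreement between the swarm objectives
    if d_p >= d_g:                   # assumed condition for random selection
        pbest = random.choice([pbest_T, pbest_AU])
    else:                            # otherwise take the average
        pbest = (pbest_T + pbest_AU) / 2.0
    return pbest, gbest

print(select_bests(pbest_T=1.0, pbest_AU=5.0, gbest_T=2.0, gbest_AU=3.0))
```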
In this embodiment, updating the velocity and position of the particle at the next moment according to the individual optimal solution of the particle and the swarm optimal solution is specifically as follows:
according to the updated pbest_t(n,f) and gbest_t(n,f), the velocity and position of the particle at the next moment are updated by the following formulas:
V_{t+1}(n,f) = θ·V_t(n,f) + k1·r1·(pbest_t(n,f) - L_t(n,f)) + k2·r2·(gbest_t(n,f) - L_t(n,f))   (1)
L_{t+1}(n,f) = L_t(n,f) + V_{t+1}(n,f)   (2)
where V_{t+1}(n,f) is the velocity of the particle at the next moment, θ is the inertia factor, V_t(n,f) is the velocity of the particle at the current moment, k1 is the individual random influence factor, k2 is the swarm random influence factor, r1 is the individual constant learning factor, r2 is the swarm constant learning factor, L_t(n,f) is the current position of the particle, and L_{t+1}(n,f) is the position of the particle at the next moment.
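A direct transcription of equations (1) and (2) for a one-dimensional position encoding is sketched below. Following the naming in the text, k1 and k2 are random influence factors drawn on each update, while r1 and r2 are constant learning factors; the default values of θ, r1 and r2 are illustrative assumptions.

```python
import random

def update_particle(position, velocity, pbest, gbest, theta=0.7, r1=1.5, r2=1.5):
    """Equation (1): new velocity from inertia, self-cognition and social terms.
    Equation (2): new position = old position + new velocity."""
    k1, k2 = random.random(), random.random()   # individual / swarm random influence factors
    new_velocity = (theta * velocity
                    + k1 * r1 * (pbest - position)
                    + k2 * r2 * (gbest - position))
    new_position = position + new_velocity
    return new_position, new_velocity

print(update_particle(position=2.0, velocity=0.5, pbest=3.0, gbest=4.0))
```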
S5: obtain the total delay of the current particle and judge whether it is less than or equal to the service deadline delay constraint; if so, go to step S6; if not, re-initialize the position of the current particle and return to step S4;
It should be noted that the current total system delay must satisfy the service deadline delay constraint, expressed as:
T_total(n,f) ≤ T_ddl
If the total system delay does not satisfy the constraint, the current scheduling strategy is not acceptable: the current scheduling strategy is discarded, the position of the particle is re-initialized, and the process returns to step S4 to recalculate the fitness value of each particle.
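A sketch of the step S5 check is given below, under the assumption that a violating placement is re-sampled uniformly from the server search range s = {1, 2, ..., Q}; the re-sampling rule itself is not specified in the text.

```python
import random

def enforce_deadline(position, total_delay, t_ddl, q):
    """Step S5: keep the placement if T_total(n,f) <= T_ddl, otherwise discard
    the current scheduling decision and re-sample the position among the Q servers."""
    if total_delay <= t_ddl:
        return position                  # constraint satisfied, proceed to step S6
    return random.randint(1, q)          # re-initialized position (assumed rule)

print(enforce_deadline(position=3, total_delay=50.0, t_ddl=30.0, q=8))
```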
S6: when it is judged that the distance between the particle's current position and the global optimal particle is smaller than the preset position factor and the updated velocity is smaller than the preset velocity factor, keep the updated velocity and position of the particle, and judge whether the current number of iterations is less than or equal to the maximum number of iterations; if so, return to step S4; if not, output the result.
Here, d (with d = |x_t - gbest_t|) is the distance between the particle's current position and the global optimal particle, γ is the preset position factor, V_{t+1}(n,f) is the updated velocity, and δ is the preset velocity factor. If d < γ and the updated velocity V_{t+1}(n,f) is smaller than the velocity factor δ, the position of the particle is re-initialized in the solution space; otherwise, the updated position and velocity of the particle are retained.
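A sketch of the re-initialization rule described in the preceding paragraph follows; reinit stands in for re-sampling a position in the solution space.

```python
import random

def avoid_local_optimum(position, velocity, gbest_position, gamma, delta, reinit):
    """If the particle is both close to the global best (d < gamma) and nearly
    stationary (|v| < delta), its position is re-initialized so the swarm does
    not collapse into a local optimum; otherwise the updated state is kept."""
    d = abs(position - gbest_position)
    if d < gamma and abs(velocity) < delta:
        return reinit(), velocity
    return position, velocity

print(avoid_local_optimum(2.01, 0.001, 2.0, gamma=0.1, delta=0.01,
                          reinit=lambda: random.uniform(1, 8)))
```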
Compared with the prior art, the technical solution provided in the embodiments of the present invention has at least the following technical effects or advantages:
The embodiments of the present invention combine an edge platform with a microservice architecture to solve the large-service scheduling problem of traditional cloud platforms. Past research on edge computing service scheduling has usually adopted a coarse-grained service scheduling method that treats the entire software system as the unit of scheduling: instead of partitioning the system according to its actual business functions, it schedules the whole application service directly onto a particular edge server. The embodiments of the present invention propose to use a service partitioning algorithm to decouple an application service program into multiple microservices and, based on an improved particle swarm optimization algorithm that takes the service completion time and the resource utilization of the edge servers as optimization objectives, to obtain through continuous iteration a microservice scheduling strategy that achieves multi-objective optimization. On the one hand, the scheduling brings service execution as close to the data source as possible, satisfying users' low-latency requirements; on the other hand, scheduling the microservices of some non-latency-sensitive applications onto the cloud server effectively relieves the computing pressure on the edge servers and avoids the problem of edge server overload that would degrade the quality of service.
At the same time, since decomposing a large application yields a huge number of functional components and data dependencies, the embodiments of the present invention adopt the PSO algorithm, whose fast convergence enables rapid deployment of microservices so that user requests can be answered in real time.
The embodiments of the present invention also take full account of the problem that an overly fast convergence speed makes particles prone to falling into local optima: a position factor γ (γ ≥ 0) and a velocity factor δ (δ ≥ 0) are introduced in the search for the microservice migration strategy, which effectively reduces the probability that the PSO algorithm converges to a locally optimal result.
The present invention is applicable to relatively complex network environments; it can deploy services according to the needs of service providers, optimize the resource configuration of the edge servers, and at the same time provide users with low-latency, high-quality services. The method is easy to understand, highly real-time and highly reliable, and is therefore easy to popularize, widely applicable and promising.
Referring to FIG. 4, a second embodiment of the present invention provides an edge-cloud collaborative microservice scheduling apparatus, comprising:
a simplified model building module S21, configured to decompose a complex application into different service modules according to the actual business and build a simplified model of each service module;
a first initialization module S22, configured to build a candidate service instance set and initialize the number of virtual machines with which the edge server cluster provides services according to the total number of particles in the simplified model;
a second initialization module S23, configured to predefine the particle fitness model, the constraint conditions, the search range of the server cluster and the particle migration strategy, and to initialize each factor in the fitness model;
an update module S24, configured to calculate the fitness value of each particle with the particle fitness model, update the individual optimal solution and the swarm optimal solution of the particles according to their fitness values, and update the velocity and position of each particle at the next moment according to the individual optimal solution and the swarm optimal solution;
a total delay obtaining module S25, configured to obtain the total delay of the current particle and judge whether it is less than or equal to the service deadline delay constraint; if so, proceed to step S6; if not, re-initialize the position of the current particle and notify the update module;
a judgment module S26, configured to, when it is judged that the distance between the particle's current position and the global optimal particle is smaller than the preset position factor and the updated velocity is smaller than the preset velocity factor, keep the updated velocity and position of the particle, and judge whether the current number of iterations is less than or equal to the maximum number of iterations; if so, notify the update module; if not, output the result.
Preferably, the particle fitness model is:
where Fitness(n,f) is the particle fitness function, T_total(n,f) is the total system delay, AU(s) is the average resource utilization of the edge server cluster, s = {1, 2, 3, ..., Q} is the search range of the edge server cluster, T_ddl is the service deadline delay constraint, f is the index of the service module, n is the index of the particle within that service module, and α and β are the weight factors of T_total(n,f) and AU(s), respectively.
Preferably, the update module is specifically configured to:
take the swarm optimal solution gbest_t(n,f) as the average of the swarm optimal delay value gbest_t(n,f)^T and the swarm optimal resource utilization value gbest_t(n,f)^AU, namely:
gbest_t(n,f) = Average(gbest_t(n,f)^T, gbest_t(n,f)^AU)
and select the individual optimal solution of a particle according to the results of the distance factors D_p and D_g, calculated as follows:
D_p = Dis(pbest_t(n,f)^T, pbest_t(n,f)^AU) = |pbest_t(n,f)^T - pbest_t(n,f)^AU|
D_g = Dis(gbest_t(n,f)^T, gbest_t(n,f)^AU) = |gbest_t(n,f)^T - gbest_t(n,f)^AU|
where D_p denotes the degree of difference between the individual optimization objectives, D_g denotes the degree of difference between the swarm optimization objectives, pbest_t(n,f)^T is the position of the particle at its individual optimal delay value, pbest_t(n,f)^AU is the position of the particle at its individual optimal resource utilization value, and Dis denotes the straight-line distance between two positions;
when D_p and D_g satisfy the first comparison condition, the pbest_t(n,f) for the new iteration is selected at random from pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = RandSelect{pbest_t(n,f)^T, pbest_t(n,f)^AU};
when D_p and D_g satisfy the opposite condition, the pbest_t(n,f) for the new iteration is taken as the average of pbest_t(n,f)^T and pbest_t(n,f)^AU, with the selection formula:
pbest_t(n,f) = Average{pbest_t(n,f)^T, pbest_t(n,f)^AU}.
Preferably, the update module is specifically configured to:
update the velocity and position of the particle at the next moment according to the updated pbest_t(n,f) and gbest_t(n,f) by the following formulas:
V_{t+1}(n,f) = θ·V_t(n,f) + k1·r1·(pbest_t(n,f) - L_t(n,f)) + k2·r2·(gbest_t(n,f) - L_t(n,f))   (1)
L_{t+1}(n,f) = L_t(n,f) + V_{t+1}(n,f)   (2)
where V_{t+1}(n,f) is the velocity of the particle at the next moment, θ is the inertia factor, V_t(n,f) is the velocity of the particle at the current moment, k1 is the individual random influence factor, k2 is the swarm random influence factor, r1 is the individual constant learning factor, r2 is the swarm constant learning factor, L_t(n,f) is the current position of the particle, and L_{t+1}(n,f) is the position of the particle at the next moment.
A third embodiment of the present invention provides an edge-cloud collaborative microservice scheduling device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the edge-cloud collaborative microservice scheduling method according to any one of the above.
With the edge-cloud collaborative microservice scheduling method, apparatus and device provided by the present invention, an application service program is decoupled into multiple microservices, and, based on an improved particle swarm optimization algorithm that takes the service completion time and the resource utilization of the edge servers as optimization objectives, a microservice scheduling strategy achieving multi-objective optimization is finally obtained through continuous iteration. On the one hand, the scheduling brings service execution as close to the data source as possible, satisfying users' low-latency requirements; on the other hand, scheduling the microservices of some non-latency-sensitive applications onto the cloud server effectively relieves the computing pressure on the edge servers and avoids the problem of edge server overload that would degrade the quality of service.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.