CN112817653A - Cloud-side-based federated learning calculation unloading computing system and method - Google Patents

Cloud-edge-terminal-based federated learning computation offloading system and method

Info

Publication number
CN112817653A
Authority
CN
China
Prior art keywords
edge
local
cloud
data
local device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110089708.9A
Other languages
Chinese (zh)
Inventor
伍卫国
张祥俊
柴玉香
杨诗园
王雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202110089708.9A
Publication of CN112817653A
Legal status: Pending

Abstract


The invention discloses a cloud-edge-terminal-based federated learning computation offloading and resource allocation system and method that make accurate computation-offloading and resource-allocation decisions, eliminate the need to solve a combinatorial optimization problem, and thereby greatly reduce computational complexity. The three-tier cloud-edge-terminal federated learning exploits both the proximity of edge nodes to terminals and the powerful computing resources of the cloud computing center, compensating for the limited computing resources of edge nodes. Each of multiple clients trains a local model to predict offloading tasks; a global model is formed by periodically aggregating parameters at the edge, and after several edge aggregations the cloud performs one aggregation, repeating until convergence yields a global BiLSTM model. The global model can intelligently predict the data volume of each offloading task and thus better guide computation offloading and resource allocation.

Description

A cloud-edge-terminal-based federated learning computation offloading system and method

Technical Field

The invention relates to computation offloading and resource allocation in mobile edge computing networks driven by 5G, and in particular to a cloud-edge-terminal-based federated learning computation offloading system and method.

Background

In recent years, driven by the spread of the Internet of Things, data generated at the network edge has grown explosively. The inability to guarantee low latency and location awareness undermines traditional cloud computing solutions. IDC forecast that by the end of 2020 more than 50 billion terminals and devices would be connected, with over 50% of the data needing to be analyzed, processed, and stored at the network edge. The traditional two-tier "cloud-terminal collaborative computing" model can no longer meet the demands of low latency and high bandwidth. Mobile edge computing (MEC) is emerging as a compelling new computing paradigm that pushes cloud computing capability closer to end users, supporting compute-intensive but latency-sensitive applications such as face recognition, natural language processing, and interactive gaming. One of MEC's key functions is task offloading (also known as computation offloading), which moves computation-intensive tasks of mobile applications from user equipment (UE) to MEC hosts at the network edge, breaking the resource constraints of mobile devices and extending their computing power, battery capacity, and storage. Although edge servers can provide cloud capabilities to terminals, their inherently limited wireless and computing capacity means they may not be able to serve all terminals. On the one hand, the uncertain data size of offloading tasks and time-varying channel conditions make accurate offloading decisions difficult. On the other hand, in a distributed, heterogeneous edge infrastructure, users' sensitive personal information risks interception and leakage during the offloading process.

Summary of the Invention

The purpose of the present invention is to provide a cloud-edge-terminal-based federated learning computation offloading system and method that overcome the deficiencies of the prior art.

To achieve the above object, the present invention adopts the following technical solution.

A cloud-edge-terminal-based federated learning computation offloading and resource allocation method comprises the following steps:

S1: construct a global model, and broadcast the initialized global model and the task to the local devices selected for the task.

S2: based on the initialized global model, each local device iteratively updates its local model parameters using its local data and device parameters. Once the set number of iterations is reached, the local model parameters corresponding to the important gradients (those whose computed gradient values exceed a threshold) from all local devices are aggregated at the edge; the global model parameters are updated from the edge aggregation result, and the updated global model is fed back to each local device.

S3: after edge parameter aggregation has been performed the set number of times, perform one cloud parameter aggregation.

S4: repeat steps S2-S3 until the global loss function converges or the set training accuracy is reached, completing global model training.

S5: use the trained global model to predict the data volume of each computation offloading task, and allocate resources so as to minimize cost according to the predicted offloading data volume.
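The training schedule in the steps above can be sketched in code. The following is a minimal illustrative Python sketch, not the patent's implementation: model parameters are plain lists of floats, local "training" is a stub gradient step, and all names (`local_update`, `weighted_average`, `hierarchical_fl`) and the round counts k1/k2 are assumptions chosen for illustration.

```python
# Three-tier (cloud-edge-client) federated training schedule, sketched with
# toy models. Each client runs k1 local iterations, each edge server runs k2
# edge aggregations, then the cloud performs one aggregation per round.

def local_update(w, data, lr=0.1):
    # Stub SGD step: pull every parameter toward the mean of the local data.
    target = sum(data) / len(data)
    return [wi - lr * (wi - target) for wi in w]

def weighted_average(models, sizes):
    # FedAvg-style aggregation: weight each model by its data-set size.
    total = sum(sizes)
    dim = len(models[0])
    return [sum(m[d] * s for m, s in zip(models, sizes)) / total
            for d in range(dim)]

def hierarchical_fl(client_data, edge_groups, w0, k1=3, k2=2, cloud_rounds=2):
    """client_data: {client_id: [samples]}; edge_groups: {edge_id: [client_ids]}."""
    w_global = list(w0)
    for _ in range(cloud_rounds):                 # one cloud aggregation ...
        edge_models, edge_sizes = [], []
        for clients in edge_groups.values():
            w_edge = list(w_global)
            for _ in range(k2):                   # ... per k2 edge aggregations
                local_models, local_sizes = [], []
                for cid in clients:
                    w_local = list(w_edge)
                    for _ in range(k1):           # k1 local iterations per client
                        w_local = local_update(w_local, client_data[cid])
                    local_models.append(w_local)
                    local_sizes.append(len(client_data[cid]))
                w_edge = weighted_average(local_models, local_sizes)
            edge_models.append(w_edge)
            edge_sizes.append(sum(len(client_data[c]) for c in clients))
        w_global = weighted_average(edge_models, edge_sizes)
    return w_global
```

For example, two edge servers with clients holding data sets of sizes 2, 1, and 3 would call `hierarchical_fl({0: [1.0, 1.0], 1: [2.0], 2: [4.0, 4.0, 4.0]}, {"a": [0, 1], "b": [2]}, [0.0])`.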

Further, each local device updates its local model parameters using its own local data and device parameters:

w_i^t = arg min_w F_i(w)

where t is the current iteration index and i indexes the local device; at iteration t, the goal of local device i is to find the optimal parameters w_i^t that minimize the loss function F_i(w).

Further, once the set number of iterations is reached, the important gradients, i.e., those whose computed values exceed the threshold, are uploaded from the local devices to the edge server.

Further, the cloud parameter aggregation proceeds according to the following formula:

w^t = Σ_ζ (|D_ζ| / |D|) · w_ζ^t

where {D_ζ} denotes the data set gathered under edge server ζ, and the offloading-task data set is distributed over the N clients as {D_i}, i = 1, ..., N, with |D_i| and |D| denoting the number of local training samples at client i and the total number of training samples, respectively.

Further, the parameters are sparsified during iteration. Using the standard decentralized stochastic gradient descent (DSGD) method, a local device updates its local parameters over the local data set D_i as:

w_{t+1} = w_t − η · (1/N) Σ_{k=1}^N sparse(g_{k,t})

where t denotes the current iteration and w_t the value of parameter w at iteration t; f(x, w_t) is the loss computed from input data x and current parameters w_t; g_{k,t} is the gradient of node k with respect to w_t at iteration t; sparse(g_{k,t}) is the sparsified gradient of g_{k,t}; and η is the learning rate. The estimate of the gradient ∇f_i(w) obtained from a mini-batch of data samples B_{k,t} at client i is:

g_{k,t} = (1/b) Σ_{x∈B_{k,t}} ∇f(x, w_t)

Further, at the beginning of each iteration t, on worker node k, after the loss f(x, w_t) is computed from the current parameters w_t and the data sampled from the local data block B_{k,t}, the gradient ∇f(x, w_t) can be obtained; let g_{k,t} = (1/b) Σ_{x∈B_{k,t}} ∇f(x, w_t), where B_{k,t} is the local data block of worker node k, of size b. The gradient elements of each parameter are sorted by absolute value, and the important gradients whose computed values exceed the threshold are uploaded to the edge server.

Further, resource allocation is achieved by a one-dimensional bisection search over the dual variable associated with the resource-allocation constraint.

A computation offloading and resource allocation system comprises local devices, edge servers, and a cloud server.

The edge server broadcasts the initialized global model and the task to the local devices selected for the task; based on the initialized global model, each local device updates its local model parameters using its own local data and device parameters, and feeds the updated local model back to the edge server.

The edge server aggregates the parameters of the local models fed back by the different local devices, updates the global model parameters from the edge aggregation result, and feeds the updated global model back to each local device. After edge parameter aggregation has been performed the set number of times, the cloud server performs one cloud parameter aggregation on the global models aggregated by the edge servers. The local devices submit tasks to the edge server, which uses the final global model to predict the data volume of each computation offloading task and allocates resources so as to minimize cost according to the predicted offloading data volume.

Further, based on the initialized global model w^0, each local device updates its local model parameters w_i^t according to its own local data and device parameters, where t is the current iteration index and i indexes the local device; at iteration t, the goal of local device i is to find the optimal parameters minimizing the loss function F_i(w), i.e.:

w_i^t = arg min_w F_i(w)

After k1 rounds of iterative learning (i.e., once the set number of iterations is reached), the important gradients whose computed values exceed the threshold are uploaded to the edge server.

Further, one cloud server connects to multiple edge servers, and each edge server connects to several local devices. Each edge server aggregates the local model parameters of the local devices connected to it; the cloud server aggregates the global model parameters produced by the parameter aggregation of the multiple edge servers connected to it.

Compared with the prior art, the present invention has the following beneficial technical effects.

The cloud-edge-terminal-based federated learning computation offloading and resource allocation method of the present invention meets the practical need to train on multi-party data without sharing private data, makes accurate computation-offloading and resource-allocation decisions, eliminates the need to solve a combinatorial optimization problem, and greatly reduces computational complexity. The three-tier cloud-edge-terminal federated learning exploits both the proximity of edge nodes to terminals and the powerful computing resources of the cloud computing center, compensating for the limited computing resources of edge nodes. Each of multiple clients trains a local model to predict offloading tasks; a global model is formed by periodically aggregating parameters at the edge, and after several edge aggregations the cloud performs one aggregation, repeating until convergence yields a global BiLSTM model. The global model can intelligently predict the data volume of each offloading task and thus better guide computation offloading and resource allocation.

Further, to reduce the communication volume of federated learning, each client learns locally on its own data set; after several rounds of training, the top s% of the sparsified gradients are compressed and uploaded to the edge parameter server, which after several rounds of aggregation uploads the parameters to the cloud server for aggregation until convergence. This achieves accuracy close to that of centralized learning while maximally protecting users' security and privacy.

Further, the uploaded gradients are sparsified: each time, only the important gradients are compressed and uploaded to the central server to optimize and merge the global model. This greatly reduces the communication overhead between federated learning clients and the server, accelerating model aggregation and speeding up model convergence.

The computation offloading and resource allocation system of the present invention adopts a three-tier cloud-edge-terminal federated learning framework, which exploits the natural proximity and real-time computing advantages of edge servers relative to terminal nodes while compensating for the limited computing resources of edge servers. Using a Bi-LSTM-based federated learning mechanism, each terminal device participating in computation offloading locally trains a BiLSTM model to predict the data volume of a task; parameter aggregation is then performed periodically at the edge and in the cloud, eliminating the need to solve a combinatorial optimization problem and thereby greatly reducing computational complexity.

Brief Description of the Drawings

FIG. 1 is a diagram of the Bi-LSTM-based cloud-edge-terminal federated learning framework in an embodiment of the present invention.

FIG. 2 is a Bi-LSTM task prediction diagram in an embodiment of the present invention.

FIG. 3 is a sequence diagram of cloud-edge-terminal federated learning in an embodiment of the present invention.

FIG. 4 is a diagram of the two-stage solution optimization in an embodiment of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings.

A cloud-edge-terminal-based federated learning computation offloading and resource allocation method comprises the following steps.

S1: construct a global model, and broadcast the initialized global model w^0 and the task to the local devices selected for the task.

The global model is used to compute the target application and the corresponding data requirements.

S2: based on the initialized global model w^0, each local device iteratively updates the global model it receives using its local data and device parameters, i.e., it updates the local model parameters w_i^t, where t is the current iteration index and i indexes the local device; at iteration t, the goal of local device i is to find the optimal parameters minimizing the loss function F_i(w), i.e.:

w_i^t = arg min_w F_i(w)

After k1 rounds of iterative learning, the important gradients whose computed values exceed the threshold are uploaded to the edge server. A local device transmits only the important gradients whose absolute value exceeds the gradient threshold and accumulates the remaining gradients locally during learning, i.e., the gradients whose absolute value falls below the threshold.

S3: after the local devices have performed k1 rounds of iterative learning, the edge server collects the uploaded local model parameters w_i^t from each local device i and assembles them into the parameter set W_n = [w_1, w_2, ..., w_M]; it then performs edge parameter aggregation on this set, updates the global model parameters, and feeds them back to the local devices that uploaded their local model parameters (i.e., each local device).

S4: after the edge servers have performed k2 rounds of edge parameter aggregation, one cloud parameter aggregation is performed by the cloud server; that is, the local clients perform k1 rounds of iterative learning, the edge servers perform k2 rounds of edge parameter aggregation, and the cloud server then performs one parameter aggregation. The cloud server's per-round parameter update w^t is computed as in formula (1):

w^t = Σ_{ζ=1}^L (|D_ζ| / |D|) · w_ζ^t    (1)

{D_ζ} denotes the data set gathered under each edge server ζ, and the offloading-task data set is distributed over the N clients as {D_i}, i = 1, ..., N, where |D_i| and |D| denote the number of local training samples at client i and the total number of training samples, respectively. These distributed data sets cannot be accessed directly by the parameter server. F(w) is the global loss, computed as the weighted average of the local loss functions F_i(w) on the local data sets D_i, where F(w) and F_i(w) are:

F(w) = Σ_{i=1}^N (|D_i| / |D|) F_i(w)    (2)

F_i(w) = (1/|D_i|) Σ_{x∈D_i} f(x, w)    (3)
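Formulas (2)-(3) can be illustrated numerically. The sketch below assumes a toy per-sample squared-error loss f(x, w) = (w − x)², which is not specified in the patent; it only shows how the global loss is the data-size-weighted average of the per-client losses.

```python
# F_i(w): average per-sample loss on one client's data set.
# F(w): weighted average of the F_i(w), weighted by |D_i| / |D|.

def f(x, w):
    # Toy per-sample loss (assumption): squared error.
    return (w - x) ** 2

def local_loss(data, w):
    # F_i(w) = (1/|D_i|) * sum over x in D_i of f(x, w)
    return sum(f(x, w) for x in data) / len(data)

def global_loss(datasets, w):
    # F(w) = sum over clients of (|D_i| / |D|) * F_i(w)
    total = sum(len(d) for d in datasets)
    return sum(len(d) / total * local_loss(d, w) for d in datasets)
```

Note that with this weighting, the global loss equals the plain average loss over the pooled samples, which is why |D_i| / |D| is the natural aggregation weight.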

S5: repeat steps S2 to S4 until the global loss function converges or the set training accuracy is reached, completing the training of the global model.

S6: use the trained global model to predict the data volume of each computation offloading task; according to the predicted offloading data volume, computation offloading and resource allocation decisions can be made more accurately.

During iteration, the parameters are sparsified. Using the standard decentralized stochastic gradient descent (DSGD) method, a local device updates its local parameters over the local data set D_i as:

w_{t+1} = w_t − η · (1/N) Σ_{k=1}^N sparse(g_{k,t})

where t denotes the current iteration and w_t the value of parameter w at iteration t; f(x, w_t) is the loss computed from input data x and current parameters w_t; the local data set D_i includes the local data and local device parameters; g_{k,t} is the gradient of node k with respect to w_t at iteration t; sparse(g_{k,t}) is the sparsified gradient of g_{k,t}; and η is the learning rate. The estimate of the gradient ∇f_i(w) obtained from a mini-batch of data samples B_{k,t} at client i is:

g_{k,t} = (1/b) Σ_{x∈B_{k,t}} ∇f(x, w_t)

As shown in FIG. 1, at the beginning of each iteration t, the mobile devices are regarded as N distributed working nodes; working node k (1 ≤ k ≤ N) has its local data block B_{k,t} of size b. On worker node k, after the loss f(x, w_t) is computed from the current parameters w_t and the data sampled from the local data block B_{k,t}, the gradient ∇f(x, w_t) can be obtained; let g_{k,t} = (1/b) Σ_{x∈B_{k,t}} ∇f(x, w_t). Instead of transmitting the full gradient g_{k,t}, the compression ratio s% is first determined, and the gradient elements of each parameter are sorted by absolute value; among all elements of the gradient, only the elements ranked in the top s% are exchanged between nodes, with s determining the gradient threshold; that is, the important gradients whose computed values exceed the threshold are uploaded to the edge server. Here sparse(g_{k,t}) denotes the sparse gradient; the remaining gradients g_{k,t} − sparse(g_{k,t}) are accumulated locally, waiting to grow large enough to be exchanged.
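The top-s% selection with local residual accumulation described above can be sketched as follows. This is a minimal illustrative Python sketch under the assumption of plain-list gradients; the names (`sparsify`, `dsgd_step`) and the toy values are illustrative, not the patent's code.

```python
# Top-s% gradient sparsification: only the largest-magnitude s% of gradient
# elements are transmitted; the rest are accumulated locally as a residual
# and carried into the next iteration. The DSGD step then averages the
# sparse gradients from all workers.

def sparsify(grad, residual, s_percent):
    """Add the residual, keep the top s% by |value|, accumulate the rest."""
    full = [g + r for g, r in zip(grad, residual)]
    k = max(1, int(len(full) * s_percent / 100))
    # Threshold = magnitude of the k-th largest element.
    thresh = sorted((abs(g) for g in full), reverse=True)[k - 1]
    sparse = [g if abs(g) >= thresh else 0.0 for g in full]
    new_residual = [g - sg for g, sg in zip(full, sparse)]
    return sparse, new_residual

def dsgd_step(w, worker_grads, residuals, s_percent=25, lr=0.5):
    """w_{t+1} = w_t - (eta/N) * sum over k of sparse(g_{k,t})."""
    n = len(worker_grads)
    sparse_sum = [0.0] * len(w)
    for k in range(n):
        sparse, residuals[k] = sparsify(worker_grads[k], residuals[k], s_percent)
        sparse_sum = [a + b for a, b in zip(sparse_sum, sparse)]
    return [wi - lr * s / n for wi, s in zip(w, sparse_sum)]
```

With s% = 25 on a 4-element gradient, only the single largest-magnitude element is transmitted per step; the other three accumulate locally and are exchanged once they grow past the threshold in a later iteration.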

The global model is based on federated learning. Given the task data input sizes predicted by the global model, the original optimization problem (P1) reduces to the resource-allocation problem of the convex problem (P2), whose optimal allocation {a, p} can be solved efficiently, for example in O(N) complexity, by a one-dimensional bisection search over the dual variable associated with the resource-allocation constraint.

As shown in FIG. 1, a cloud-edge-terminal-based federated learning computation offloading system comprises local devices, edge servers, and a cloud server.

The edge server broadcasts the initialized global model w^0 and the task to the local devices selected for the task; based on the initialized global model, each local device updates its local model parameters according to its own local data and device parameters, and feeds the updated local model back to the edge server.

The edge server aggregates the parameters of the local models fed back by the different local devices, updates the global model parameters from the edge aggregation result, and feeds the updated global model back to each local device. After edge parameter aggregation has been performed the set number of times, the cloud server performs one cloud parameter aggregation on the global models aggregated by the edge servers. The local devices submit tasks to the edge server, which uses the final global model to predict the data volume of each computation offloading task and allocates resources so as to minimize cost according to the predicted offloading data volume.

Based on the initialized global model w^0, each local device updates its local model parameters w_i^t according to its own local data and device parameters, where t is the current iteration index and i indexes the local device; at iteration t, the goal of local device i is to find the optimal parameters minimizing the loss function F_i(w), i.e.:

w_i^t = arg min_w F_i(w)

After k1 rounds of iterative learning (i.e., once the set number of iterations is reached), the important gradients whose computed values exceed the threshold are uploaded to the edge server; a local device transmits only the important gradients whose absolute value exceeds the gradient threshold and accumulates the remaining gradients locally during learning, i.e., the gradients whose absolute value falls below the threshold.

After the local devices have performed k1 rounds of iterative learning, the edge server collects the uploaded local model parameters w_i^t from each local device i and assembles them into the parameter set W_n = [w_1, w_2, ..., w_M]; it then performs edge parameter aggregation on this set, updates the global model parameters, and feeds them back to the local devices that uploaded their local model parameters (i.e., each local device).

After the edge servers have performed k2 rounds of edge parameter aggregation, one cloud parameter aggregation is performed by the cloud server; that is, the local clients perform k1 rounds of iterative learning, the edge servers perform k2 rounds of edge parameter aggregation, and the cloud server then performs one parameter aggregation on the data aggregated by the edge servers. The cloud server's per-round parameter update w^t is computed as in formula (1):

w^t = Σ_{ζ=1}^L (|D_ζ| / |D|) · w_ζ^t    (1)

Training of the global model is complete when the global loss function converges or the set training accuracy is reached.

One cloud server connects to L edge servers, each denoted ζ; each edge server ζ serves its own set of clients, {D_ζ} denotes the dataset aggregated under edge server ζ, and each edge server aggregates the local model parameters of its clients.
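The hierarchical aggregation described above, with edge servers averaging their clients and the cloud averaging the edge servers, both weighted by sample counts |D_ζ|/|D|, can be sketched as follows; the helper name and the toy parameter vectors are illustrative assumptions:

```python
import numpy as np

def weighted_average(params, sizes):
    """FedAvg-style aggregation: average parameter vectors weighted by
    the number of training samples behind each one."""
    sizes = np.asarray(sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * p for w, p in zip(weights, params))

# Edge level: edge server zeta averages the parameters of its clients.
client_params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
client_sizes = [100, 300]                      # |D_i| per client
edge_model = weighted_average(client_params, client_sizes)

# Cloud level: the cloud averages the edge models, weighted by |D_zeta|.
edge_models = [edge_model, np.array([5.0, 6.0])]
edge_sizes = [400, 600]                        # |D_zeta| per edge server
global_model = weighted_average(edge_models, edge_sizes)
```

The same helper serves both levels because edge and cloud aggregation differ only in what is being averaged (client models vs. edge models).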

The trained global model is then used to predict the information volume of each computation offloading task; based on the predicted size of the data to be offloaded, more accurate computation offloading and resource allocation decisions can be made.

User input (i.e., input through the local device): the input to the federated learning algorithm is a sample X_m of local data and local device parameters, which includes the data sizes of the tasks historically requested by each user over different time periods. Let X_m = {x_m^(1), ..., x_m^(K_m)}, where K_m is the number of data samples of user m; each data sample x_m^(k) records, among other features, the user's position at the current moment.

User output (i.e., output through the local device): the local model after the device's update iterations, i.e., the trained local Bi-LSTM model, outputs a vector w_m representing the Bi-LSTM model parameters with which the local device determines the task data volume of user m.

Edge server input: the Bi-LSTM-based federated learning algorithm takes the matrix W_n = [w_1, ..., w_M] as input, where w_m is the model parameter vector received from user m.

Edge server output: the edge server sums the gradient data received from each client, performs parameter aggregation to form a global model, and sends the aggregated updated parameters to every client; each client overwrites its local model parameters with the received parameters and proceeds to the next iteration round.

That is, each client MU i performs k1 local model updates, and each edge server aggregates the models of the clients connected to it; after every k2 edge model aggregations, the cloud server aggregates the models of all edge servers, which means one communication with the cloud takes place after every k1·k2 local updates. As shown in the sequence diagram, the main steps of the proposed cloud-edge-device federated learning algorithm are as follows:
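The k1/k2 schedule above can be sketched as follows; the model operations are abstracted into placeholder callables, which is an assumption for the sketch rather than the patent's concrete implementation:

```python
def hierarchical_fl_round(clients_per_edge, local_update, edge_aggregate,
                          cloud_aggregate, k1, k2):
    """One cloud round of the cloud-edge-device schedule: each client runs
    k1 local updates, each edge server aggregates after every k1 updates,
    and the cloud aggregates after k2 edge aggregations, i.e. after
    k1 * k2 local updates per client."""
    edge_models = []
    for _ in range(k2):                      # k2 edge aggregations per cloud round
        edge_models = []
        for clients in clients_per_edge:     # each edge server and its clients
            for client in clients:
                for _ in range(k1):          # k1 local iterations per client
                    local_update(client)
            edge_models.append(edge_aggregate(clients))
    return cloud_aggregate(edge_models)      # cloud aggregates the latest edge models
```

With 3 clients spread over 2 edge servers and k1 = 4, k2 = 3, each client performs 12 local updates and each edge server aggregates 3 times before the single cloud aggregation.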

Figure 2 shows the Bi-LSTM task prediction, whose purpose is to predict the computation offloading tasks:

Suppose the computing tasks of N mobile users need to be offloaded to the edge server associated with their cellular network for processing; a Bi-LSTM-based deep learning algorithm is used to predict the computing tasks. As shown in Figure 2, given the input x_t at time step t, the hidden-layer output h_t of the BiLSTM unit is computed by the following formulas:

g_t = tanh(W_xg · x_t + W_hg · h_{t-1} + b_g)

i_t = sigmoid(W_xi · x_t + W_hi · h_{t-1} + b_i)

f_t = sigmoid(W_xf · x_t + W_hf · h_{t-1} + b_f)

o_t = sigmoid(W_xo · x_t + W_ho · h_{t-1} + b_o)

C_t = f_t ⊙ C_{t-1} + i_t ⊙ g_t

h_t = o_t ⊙ tanh(C_t)

where i, f, o, and C denote the input gate, forget gate, output gate, and cell state vector, respectively; W denotes the weight matrices (for example, W_xi is the input-to-input-gate weight matrix); x_t denotes the model input at each time step; and b denotes the bias vectors. Since the sigmoid function outputs values in [0, 1], it can serve as an indicator of the degree to which information is "forgotten" or "remembered", so the gate units all use it as their activation function;
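A minimal numpy sketch of one unit step implementing the gate equations above; the dict-based parameter layout (keys 'g', 'i', 'f', 'o' for the candidate, input, forget, and output gates) is an illustrative assumption, not the patent's parameterization:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM unit step. W maps input -> gate, U maps hidden -> gate,
    b is the gate bias; each is a dict keyed by gate name."""
    g = np.tanh(W["g"] @ x_t + U["g"] @ h_prev + b["g"])   # candidate state
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])   # output gate
    c_t = f * c_prev + i * g                               # cell state update
    h_t = o * np.tanh(c_t)                                 # hidden output
    return h_t, c_t
```

A BiLSTM runs one such pass forward and one backward over the input sequence and concatenates the two hidden states at each time step.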

Finally, the last fully connected layer integrates the previously extracted features to obtain the output sequence, in which the element ŷ_l denotes the predicted data size of computing task l. The predicted data size ŷ_l obtained here is used in the subsequent offloading policy calculation. The optimization objective of the algorithm is therefore to make the predicted input data size of each task as accurate as possible.

Figure 3 is the cloud-edge-device federated learning sequence diagram, which vividly depicts the interaction process of the whole method. As Figure 3 shows, each client MU i performs k1 local model updates and each edge server aggregates the models of its clients; after every k2 edge model aggregations, the cloud server aggregates the models of all edge servers, meaning one communication with the cloud occurs after every k1·k2 local updates.

Finally, Figure 4 is a flowchart. Unlike many existing deep learning methods that optimize all system parameters simultaneously and thus produce infeasible solutions, this application proposes a two-stage optimization scheme based on intelligent task prediction and resource allocation: the complex optimization problem is decomposed into intelligent task prediction, after which the predicted task information drives accurate computation offloading decisions and resource allocation. This entirely eliminates the need to solve a complex MIP (mixed-integer programming) problem, and the computational complexity does not explode as the network size grows.
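As an illustration of the second stage, a toy decision rule that consumes the predicted task size: the cycles-per-bit cost model, the numbers, and the function name are assumptions for the sketch; the patent's actual decision minimizes a fuller cost jointly with resource allocation:

```python
def offload_decision(pred_size_bits, local_cps, edge_cps, uplink_bps,
                     cycles_per_bit=1000.0):
    """Pick local vs. edge execution by comparing completion-time estimates
    driven by the predicted task size (queuing and energy ignored here)."""
    cycles = pred_size_bits * cycles_per_bit
    t_local = cycles / local_cps                              # compute locally
    t_edge = pred_size_bits / uplink_bps + cycles / edge_cps  # upload + remote compute
    return ("edge" if t_edge < t_local else "local"), t_local, t_edge

choice, t_loc, t_edge = offload_decision(
    pred_size_bits=2e6, local_cps=1e9, edge_cps=10e9, uplink_bps=20e6)
# a 2 Mb task: ~2.0 s locally vs ~0.3 s via the edge -> offload
```

Because the decision depends only on the predicted size and a few rate parameters, it avoids the combinatorial search that a joint MIP formulation would require.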

Embodiment:

First, a BiLSTM model is trained on each local device (client) using historical offloading tasks, and a global model is formed through aggregation at the edge server and the cloud server. When a new offloading task arrives, the aggregated global model predicts the task, and the prediction output guides the computation offloading decision and resource allocation. During training, the gradient data of each round is compressed via data sparsification before upload, which greatly reduces communication overhead, speeds up model convergence, and lowers the complexity of the offloading decision and resource allocation.

By establishing a complete pipeline from model training and prediction to communication optimization, the invention can quickly solve computation offloading and resource allocation. The framework we consider corresponds to a static IoT network under a current 5G-driven MEC network, in which transmitters and receivers are fixed in place. Taking an MEC network with N = 30 as an example, the convergence time of the designed BiFOS algorithm averages 0.061 seconds, an acceptable overhead for field deployment. The BiFOS algorithm therefore makes real-time offloading and resource allocation practical for wireless MEC networks in channel-fading environments.

The present invention discloses a cloud-edge-device federated learning computation offloading and resource allocation method. We first propose a BiLSTM-based federated learning intelligent task prediction mechanism: each participating client trains its model locally and independently, without uploading raw data to the edge server. Parameters are then aggregated periodically at the edge and in the cloud, the goal being to jointly train a general global Bi-directional Long Short-Term Memory (BiLSTM) model that predicts, among other things, the data volume of computing tasks, thereby guiding computation offloading decisions and resource allocation more accurately. This mechanism eliminates the need to solve a combinatorial optimization problem and greatly reduces computational complexity, especially in large networks, and it ensures that the sensitive personal information of users participating in the offloading process within a distributed, heterogeneous edge infrastructure is not intercepted or leaked. To further reduce the network communication overhead of federated learning during model optimization, we improve the FAVG algorithm and design a three-layer cloud-edge-device federated learning framework in which uploaded gradients are sparsified, i.e., only the important gradients are compressed and uploaded to the parameter server each round. This framework combines the proximity advantage of edge servers to terminal devices with the powerful computing resources of cloud computing centers, compensating for the limited computing resources of edge servers. Finally, experimental results show that, without collecting users' private data, our algorithm outperforms other learning-based offloading algorithms in prediction accuracy and can reduce energy consumption by 30%.

Claims (10)

1. A cloud-edge-device-based federated learning computation offloading resource allocation method, characterized by comprising the following steps:

S1: constructing a global model, and broadcasting the initialized global model and the task to the local devices selected for the task;

S2: based on the initialized global model, each local device updates its local model parameters iteratively using its own local data and local device parameters; when the set number of iterations is reached, edge parameter aggregation is performed on the local model parameters corresponding to the important gradients, i.e., those gradients computed by the local devices whose values exceed the threshold; the global model parameters are updated according to the edge aggregation result, and the updated global model is fed back to each local device;

S3: when edge parameter aggregation has been performed the set number of times, performing one cloud parameter aggregation;

S4: repeating steps S2–S3 until the global loss function converges or the set training accuracy is reached, completing global model training;

S6: using the trained global model to predict the information volume of each computation offloading task to obtain the size of the offloaded data volume, and allocating resources with cost minimization according to that size.

2. The cloud-edge-device-based federated learning computation offloading resource allocation method according to claim 1, characterized in that each local device updates its local model parameters using its own local data and local device parameters: t is the current iteration index and i the i-th local device; in iteration t the goal of local device i is to find the optimal parameter w_i^t that minimizes the loss function F_i(w), i.e.

w_i^t = arg min_w F_i(w)
3. The cloud-edge-device-based federated learning computation offloading resource allocation method according to claim 1, characterized in that, when the set number of iterations is reached, the important gradients, i.e., those whose computed gradient data exceeds the threshold, are uploaded by all local devices to the edge server.

4. The cloud-edge-device-based federated learning computation offloading resource allocation method according to claim 1, characterized in that the cloud parameter aggregation follows the formula

w^(t) = Σ_{ζ=1}^{L} (|D_ζ| / |D|) · w_ζ^(t)

where {D_ζ} denotes the dataset aggregated under edge server ζ, and the dataset of offloading tasks is distributed over the N clients as D = ∪_i D_i, with |D_i| and |D| denoting the number of training samples at client i and the total number of training samples, respectively.
5. The cloud-edge-device-based federated learning computation offloading resource allocation method according to claim 1, characterized in that parameter sparsification is performed during iteration using the standard decentralized stochastic gradient descent method; the local device updates the local parameters through its local dataset D_i as

w_{t+1} = w_t − η · sparse(g_{k,t})

where t denotes the current iteration and w_t the value of parameter w at iteration t; f(x, w_t) denotes the loss computed from input data x and the current parameters w_t; g_{k,t} denotes the gradient of node k with respect to w_t at iteration t; sparse(g_{k,t}) denotes the sparsified version of g_{k,t}; η is the learning rate; and the gradient estimate of ∇f_i(w) is obtained from a mini-batch data sample ξ_i of client i, i.e., g_{i,t} = ∇f_i(w_t; ξ_{i,t}).
6. The cloud-edge-device-based federated learning computation offloading resource allocation method according to claim 5, characterized in that, at the beginning of each iteration t, on worker node k, after the loss f(x, w_t) is computed from the current parameters w_t and the data sampled from the local data block B_{k,t} (of size b), the gradient ∇f(x, w_t) can be obtained; let g_{k,t} = (1/b) Σ_{x∈B_{k,t}} ∇f(x, w_t). The gradient elements of each parameter are sorted by absolute value, and the important gradients whose computed values exceed the threshold are uploaded to the edge server.
7. The cloud-edge-device-based federated learning computation offloading resource allocation method according to claim 1, characterized in that resource allocation is realized by a one-dimensional bisection search over the dual variables associated with the resource allocation constraints.

8. A computation offloading resource allocation system based on the cloud-edge-device-based federated learning computation offloading resource allocation method of claim 1, characterized by comprising local devices, edge servers, and a cloud server; the edge server broadcasts the initialized global model and tasks to the local devices selected for the task; based on the initialized global model, each local device updates its local model parameters according to its own local data and local device parameters and feeds the updated local model back to the edge server; the edge server aggregates the parameters of the local models received from the different local devices, updates the global model parameters according to the edge aggregation result, and feeds the updated global model back to each local device; when edge parameter aggregation reaches the set number of aggregations, the cloud server performs one cloud parameter aggregation on the global models aggregated by the edge servers; the local device submits its task to the edge server, and the edge server uses the final global model to predict the information volume of each computation offloading task to obtain the size of the offloaded data volume, allocating resources with cost minimization according to that size.

9. The computation offloading resource allocation system according to claim 8, characterized in that, based on the initialized global model, the local device updates its local model parameters according to its own local data and local device parameters, where t is the current iteration index and i the i-th local device; in iteration t the goal of local device i is to find the optimal parameter minimizing the loss function F_i(w), i.e.

w_i^t = arg min_w F_i(w)

and, after k1 rounds of iterative learning (i.e., once the set number of iterations is reached), the important gradients whose computed values exceed the threshold are uploaded to the edge server.
10. The computation offloading resource allocation system according to claim 8, characterized in that one cloud server connects to multiple edge servers and each edge server connects to several local devices; each edge server aggregates the local model parameters of the local devices connected to it, and the cloud server aggregates into global model parameters the parameters aggregated by the multiple edge servers connected to it.
CN202110089708.9A2021-01-222021-01-22Cloud-side-based federated learning calculation unloading computing system and methodPendingCN112817653A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110089708.9A (CN112817653A, en) | 2021-01-22 | 2021-01-22 | Cloud-side-based federated learning calculation unloading computing system and method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110089708.9A (CN112817653A, en) | 2021-01-22 | 2021-01-22 | Cloud-side-based federated learning calculation unloading computing system and method

Publications (1)

Publication Number | Publication Date
CN112817653A | 2021-05-18

Family

ID=75858849

Family Applications (1)

Application NumberTitlePriority DateFiling Date
CN202110089708.9APendingCN112817653A (en)2021-01-222021-01-22Cloud-side-based federated learning calculation unloading computing system and method

Country Status (1)

Country | Link
CN (1) | CN112817653A (en)

Cited By (45)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN113191504A (en)*2021-05-212021-07-30电子科技大学Federated learning training acceleration method for computing resource heterogeneity
CN113255934A (en)*2021-06-072021-08-13大连理工大学Federal learning method and system for sensing network uncertainty in mobile edge cloud
CN113312180A (en)*2021-06-072021-08-27北京大学Resource allocation optimization method and system based on federal learning
CN113361694A (en)*2021-06-302021-09-07哈尔滨工业大学Layered federated learning method and system applying differential privacy protection
CN113408675A (en)*2021-08-202021-09-17深圳市沃易科技有限公司Intelligent unloading optimization method and system based on federal learning
CN113435472A (en)*2021-05-242021-09-24西安电子科技大学Vehicle-mounted computing power network user demand prediction method, system, device and medium
CN113467928A (en)*2021-05-202021-10-01杭州趣链科技有限公司Block chain decentralization-based federated learning member reasoning attack defense method and device
CN113469367A (en)*2021-05-252021-10-01华为技术有限公司Method, device and system for federated learning
CN113504999A (en)*2021-08-052021-10-15重庆大学Scheduling and resource allocation method for high-performance hierarchical federated edge learning
CN113610303A (en)*2021-08-092021-11-05北京邮电大学Load prediction method and system
CN113642700A (en)*2021-07-052021-11-12湖南师范大学 Cross-platform multimodal public opinion analysis method based on federated learning and edge computing
CN113761525A (en)*2021-09-072021-12-07广东电网有限责任公司江门供电局Intelligent intrusion detection method and system based on federal learning
CN113839838A (en)*2021-10-202021-12-24西安电子科技大学Business type identification method for federal learning based on cloud edge cooperation
CN113852692A (en)*2021-09-242021-12-28中国移动通信集团陕西有限公司Service determination method, device, equipment and computer storage medium
CN113873047A (en)*2021-12-032021-12-31江苏电力信息技术有限公司Cooperative computing method for streaming data
CN113971090A (en)*2021-10-212022-01-25中国人民解放军国防科技大学 Hierarchical federated learning method and device for distributed deep neural network
CN114040425A (en)*2021-11-172022-02-11中国电信集团系统集成有限责任公司Resource allocation method based on global resource availability optimization
CN114116198A (en)*2021-10-212022-03-01西安电子科技大学 Asynchronous federated learning method, system, device and terminal for moving vehicles
CN114118437A (en)*2021-09-302022-03-01电子科技大学 A model update synchronization method for distributed machine learning in micro-cloud
CN114125861A (en)*2021-11-262022-03-01北京邮电大学 A wireless federated learning method and device
CN114143212A (en)*2021-11-262022-03-04天津大学Social learning method for smart city
CN114282646A (en)*2021-11-292022-04-05淮阴工学院 Optical power prediction method and system based on two-stage feature extraction and improved BiLSTM
CN114363923A (en)*2021-11-302022-04-15山东师范大学 A method and system for industrial Internet of Things resource allocation based on federated edge learning
CN114357676A (en)*2021-12-152022-04-15华南理工大学Aggregation frequency control method for hierarchical model training framework
CN114359963A (en)*2021-12-272022-04-15海信集团控股股份有限公司 Gesture recognition method and communication system
CN114363911A (en)*2021-12-312022-04-15哈尔滨工业大学(深圳)Wireless communication system for deploying layered federated learning and resource optimization method
CN114462573A (en)*2022-01-202022-05-10内蒙古工业大学 An efficient hierarchical parameter transmission delay optimization method for edge intelligence
CN114465900A (en)*2022-03-012022-05-10北京邮电大学Data sharing delay optimization method and device based on federal edge learning
CN114489964A (en)*2021-12-082022-05-13中国科学院信息工程研究所Vehicle-mounted network credit priority task unloading method based on federal learning
CN114650228A (en)*2022-03-182022-06-21南京邮电大学Federal learning scheduling method based on computation unloading in heterogeneous network
CN114818446A (en)*2021-12-222022-07-29安徽继远软件有限公司Power service decomposition method and system facing 5G cloud edge-end cooperation
CN114916013A (en)*2022-05-102022-08-16中南大学 Method, system and medium for edge task offloading delay optimization based on vehicle trajectory prediction
CN115080249A (en)*2022-08-222022-09-20南京可信区块链与算法经济研究院有限公司Vehicle networking multidimensional resource allocation method and system based on federal learning
CN115408151A (en)*2022-08-232022-11-29哈尔滨工业大学 A Federated Learning Training Acceleration Method
WO2023061500A1 (en)*2021-10-152023-04-20Huawei Technologies Co., Ltd.Methods and systems for updating parameters of a parameterized optimization algorithm in federated learning
CN116074085A (en)*2023-01-152023-05-05浙江工业大学 A data security protection method for an intelligent networked car machine
CN116149838A (en)*2022-09-302023-05-23中国人民解放军国防科技大学Privacy-enhanced federal learning system with cloud edge end fusion
CN116166406A (en)*2023-04-252023-05-26合肥工业大学智能制造技术研究院 Personalized edge offload scheduling method, model training method and system
WO2023134065A1 (en)*2022-01-142023-07-20平安科技(深圳)有限公司Gradient compression method and apparatus, device, and storage medium
CN116644802A (en)*2023-07-192023-08-25支付宝(杭州)信息技术有限公司Model training method and device
CN116821687A (en)*2023-07-062023-09-29河南大学 Federated deep learning wind power generation prediction method based on data privacy protection
CN117076132A (en)*2023-10-122023-11-17北京邮电大学 Resource allocation and aggregation optimization methods and devices for hierarchical federated learning systems
CN118070926A (en)*2024-04-222024-05-24东北大学Multi-task federation learning method based on client resource self-adaption
CN118504660A (en)*2024-07-182024-08-16湖南红普创新科技发展有限公司Heterogeneous data processing method and device for Internet of vehicles, computer equipment and storage medium
CN119761529A (en)*2024-04-032025-04-04湖北文理学院 Personalized federated learning method and system based on edge-end-cloud three-layer architecture

Citations (1)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN112181971A (en)*2020-10-272021-01-05华侨大学Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN112181971A (en)*2020-10-272021-01-05华侨大学Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Qi et al.: "Edge Learning: Key Technologies, Applications and Challenges", Radio Communications Technology*

Cited By (71)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN113467928A (en)*2021-05-202021-10-01杭州趣链科技有限公司Block chain decentralization-based federated learning member reasoning attack defense method and device
CN113467928B (en)*2021-05-202024-12-27杭州趣链科技有限公司 Method and device for defending against member reasoning attacks of federated learning based on blockchain decentralization
CN113191504B (en)*2021-05-212022-06-28电子科技大学Federated learning training acceleration method for computing resource isomerism
CN113191504A (en)*2021-05-212021-07-30电子科技大学Federated learning training acceleration method for computing resource heterogeneity
CN113435472A (en)*2021-05-242021-09-24西安电子科技大学Vehicle-mounted computing power network user demand prediction method, system, device and medium
CN113469367A (en)*2021-05-252021-10-01华为技术有限公司Method, device and system for federated learning
WO2022247683A1 (en)*2021-05-252022-12-01华为技术有限公司Federated learning method, apparatus, and system
CN113469367B (en)*2021-05-252024-05-10华为技术有限公司 A federated learning method, device and system
CN113312180A (en)* 2021-06-07 2021-08-27 Peking University: Resource allocation optimization method and system based on federated learning
CN113255934A (en)* 2021-06-07 2021-08-13 Dalian University of Technology: Federated learning method and system for sensing network uncertainty in mobile edge cloud
CN113361694A (en)* 2021-06-30 2021-09-07 Harbin Institute of Technology: Hierarchical federated learning method and system applying differential privacy protection
CN113361694B (en)* 2021-06-30 2022-03-15 Harbin Institute of Technology: Hierarchical federated learning method and system applying differential privacy protection
CN113642700A (en)* 2021-07-05 2021-11-12 Hunan Normal University: Cross-platform multimodal public opinion analysis method based on federated learning and edge computing
CN113504999A (en)* 2021-08-05 2021-10-15 Chongqing University: Scheduling and resource allocation method for high-performance hierarchical federated edge learning
CN113504999B (en)* 2021-08-05 2023-07-04 Chongqing University: Scheduling and resource allocation method for high-performance hierarchical federated edge learning
CN113610303B (en)* 2021-08-09 2024-03-19 Beijing University of Posts and Telecommunications: Load prediction method and system
CN113610303A (en)* 2021-08-09 2021-11-05 Beijing University of Posts and Telecommunications: Load prediction method and system
CN113408675A (en)* 2021-08-20 2021-09-17 Shenzhen Woyi Technology Co., Ltd.: Intelligent offloading optimization method and system based on federated learning
CN113761525A (en)* 2021-09-07 2021-12-07 Jiangmen Power Supply Bureau, Guangdong Power Grid Co., Ltd.: Intelligent intrusion detection method and system based on federated learning
CN113852692B (en)* 2021-09-24 2024-01-30 China Mobile Group Shaanxi Co., Ltd.: Service determination method, device, equipment and computer storage medium
CN113852692A (en)* 2021-09-24 2021-12-28 China Mobile Group Shaanxi Co., Ltd.: Service determination method, device, equipment and computer storage medium
CN114118437A (en)* 2021-09-30 2022-03-01 University of Electronic Science and Technology of China: Model update synchronization method for distributed machine learning in cloudlets
CN114118437B (en)* 2021-09-30 2023-04-18 University of Electronic Science and Technology of China: Model update synchronization method for distributed machine learning in cloudlets
WO2023061500A1 (en)* 2021-10-15 2023-04-20 Huawei Technologies Co., Ltd.: Methods and systems for updating parameters of a parameterized optimization algorithm in federated learning
CN113839838B (en)* 2021-10-20 2023-10-20 Xidian University: Service type identification method based on cloud-edge collaboration and federated learning
CN113839838A (en)* 2021-10-20 2021-12-24 Xidian University: Service type identification method based on cloud-edge collaboration and federated learning
CN114116198A (en)* 2021-10-21 2022-03-01 Xidian University: Asynchronous federated learning method, system, device and terminal for moving vehicles
CN113971090A (en)* 2021-10-21 2022-01-25 National University of Defense Technology: Hierarchical federated learning method and device for distributed deep neural networks
CN114116198B (en)* 2021-10-21 2024-07-19 Xidian University: Asynchronous federated learning method, system, device and terminal for moving vehicles
CN113971090B (en)* 2021-10-21 2022-09-13 National University of Defense Technology: Hierarchical federated learning method and device for distributed deep neural networks
CN114040425A (en)* 2021-11-17 2022-02-11 China Telecom Group System Integration Co., Ltd.: Resource allocation method based on global resource utility optimization
CN114040425B (en)* 2021-11-17 2024-03-15 China Telecom Digital Intelligence Technology Co., Ltd.: Resource allocation method based on global resource utility optimization
CN114125861A (en)* 2021-11-26 2022-03-01 Beijing University of Posts and Telecommunications: Wireless federated learning method and device
CN114143212A (en)* 2021-11-26 2022-03-04 Tianjin University: Social learning method for smart cities
CN114282646A (en)* 2021-11-29 2022-04-05 Huaiyin Institute of Technology: Optical power prediction method and system based on two-stage feature extraction and improved BiLSTM
CN114282646B (en)* 2021-11-29 2023-08-25 Huaiyin Institute of Technology: Optical power prediction method and system based on two-stage feature extraction and improved BiLSTM
CN114363923B (en)* 2021-11-30 2024-03-26 Shandong Normal University: Industrial Internet of Things resource allocation method and system based on federated edge learning
CN114363923A (en)* 2021-11-30 2022-04-15 Shandong Normal University: Industrial Internet of Things resource allocation method and system based on federated edge learning
CN113873047B (en)* 2021-12-03 2022-02-15 Jiangsu Electric Power Information Technology Co., Ltd.: Collaborative computing method for streaming data
CN113873047A (en)* 2021-12-03 2021-12-31 Jiangsu Electric Power Information Technology Co., Ltd.: Collaborative computing method for streaming data
CN114489964A (en)* 2021-12-08 2022-05-13 Institute of Information Engineering, Chinese Academy of Sciences: Credit-priority task offloading method for vehicular networks based on federated learning
CN114357676B (en)* 2021-12-15 2024-04-02 South China University of Technology: Aggregation frequency control method for hierarchical model training frameworks
CN114357676A (en)* 2021-12-15 2022-04-15 South China University of Technology: Aggregation frequency control method for hierarchical model training frameworks
CN114818446A (en)* 2021-12-22 2022-07-29 Anhui Jiyuan Software Co., Ltd.: Power service decomposition method and system for 5G cloud-edge-terminal collaboration
CN114359963A (en)* 2021-12-27 2022-04-15 Hisense Group Holding Co., Ltd.: Gesture recognition method and communication system
CN114363911B (en)* 2021-12-31 2023-10-17 Harbin Institute of Technology (Shenzhen): Wireless communication system deploying hierarchical federated learning and resource optimization method
CN114363911A (en)* 2021-12-31 2022-04-15 Harbin Institute of Technology (Shenzhen): Wireless communication system deploying hierarchical federated learning and resource optimization method
WO2023134065A1 (en)* 2022-01-14 2023-07-20 Ping An Technology (Shenzhen) Co., Ltd.: Gradient compression method and apparatus, device, and storage medium
CN114462573A (en)* 2022-01-20 2022-05-10 Inner Mongolia University of Technology: Efficient hierarchical parameter transmission delay optimization method for edge intelligence
CN114462573B (en)* 2022-01-20 2023-11-14 Inner Mongolia University of Technology: Efficient hierarchical parameter transmission delay optimization method for edge intelligence
CN114465900A (en)* 2022-03-01 2022-05-10 Beijing University of Posts and Telecommunications: Data sharing delay optimization method and device based on federated edge learning
CN114465900B (en)* 2022-03-01 2023-03-21 Beijing University of Posts and Telecommunications: Data sharing delay optimization method and device based on federated edge learning
CN114650228A (en)* 2022-03-18 2022-06-21 Nanjing University of Posts and Telecommunications: Federated learning scheduling method based on computation offloading in heterogeneous networks
CN114650228B (en)* 2022-03-18 2023-07-25 Nanjing University of Posts and Telecommunications: Federated learning scheduling method based on computation offloading in heterogeneous networks
CN114916013B (en)* 2022-05-10 2024-04-16 Central South University: Edge task offloading delay optimization method, system and medium based on vehicle trajectory prediction
CN114916013A (en)* 2022-05-10 2022-08-16 Central South University: Edge task offloading delay optimization method, system and medium based on vehicle trajectory prediction
CN115080249B (en)* 2022-08-22 2022-12-16 Nanjing Trusted Blockchain and Algorithm Economics Institute Co., Ltd.: Multidimensional resource allocation method and system for the Internet of Vehicles based on federated learning
CN115080249A (en)* 2022-08-22 2022-09-20 Nanjing Trusted Blockchain and Algorithm Economics Institute Co., Ltd.: Multidimensional resource allocation method and system for the Internet of Vehicles based on federated learning
CN115408151A (en)* 2022-08-23 2022-11-29 Harbin Institute of Technology: Federated learning training acceleration method
CN116149838A (en)* 2022-09-30 2023-05-23 National University of Defense Technology: Privacy-enhanced federated learning system with cloud-edge-terminal fusion
CN116074085A (en)* 2023-01-15 2023-05-05 Zhejiang University of Technology: Data security protection method for intelligent connected vehicles
CN116166406B (en)* 2023-04-25 2023-06-30 Intelligent Manufacturing Institute of Hefei University of Technology: Personalized edge offloading scheduling method, model training method and system
CN116166406A (en)* 2023-04-25 2023-05-26 Intelligent Manufacturing Institute of Hefei University of Technology: Personalized edge offloading scheduling method, model training method and system
CN116821687A (en)* 2023-07-06 2023-09-29 Henan University: Federated deep learning wind power generation prediction method based on data privacy protection
CN116644802A (en)* 2023-07-19 2023-08-25 Alipay (Hangzhou) Information Technology Co., Ltd.: Model training method and device
CN117076132B (en)* 2023-10-12 2024-01-05 Beijing University of Posts and Telecommunications: Resource allocation and aggregation optimization method and device for hierarchical federated learning systems
CN117076132A (en)* 2023-10-12 2023-11-17 Beijing University of Posts and Telecommunications: Resource allocation and aggregation optimization method and device for hierarchical federated learning systems
CN119761529A (en)* 2024-04-03 2025-04-04 Hubei University of Arts and Science: Personalized federated learning method and system based on an edge-terminal-cloud three-layer architecture
CN118070926A (en)* 2024-04-22 2024-05-24 Northeastern University: Multi-task federated learning method based on client resource adaptation
CN118070926B (en)* 2024-04-22 2024-08-09 Northeastern University: Multi-task federated learning method based on client resource adaptation
CN118504660A (en)* 2024-07-18 2024-08-16 Hunan Hongpu Innovation Technology Development Co., Ltd.: Heterogeneous data processing method and device for the Internet of Vehicles, computer equipment and storage medium

Similar Documents

Publication | Publication Date | Title

CN112817653A (en): Cloud-edge-terminal-based federated learning computation offloading and resource allocation system and method
CN113222179B (en): Federated learning model compression method based on model sparsification and weight quantization
Cui et al.: Optimal rate adaption in federated learning with compressed communications
CN109947545B (en): Task offloading and migration decision method based on user mobility
Wang et al.: A novel reputation-aware client selection scheme for federated learning within mobile environments
Elbir et al.: A hybrid architecture for federated and centralized learning
CN114650228B (en): Federated learning scheduling method based on computation offloading in heterogeneous networks
CN113595993B (en): Joint learning method for vehicle sensing devices based on model structure optimization under edge computing
CN112511336B (en): Online service placement method in edge computing systems
Chen et al.: Communication-efficient design for quantized decentralized federated learning
Cui et al.: A fast blockchain-based federated learning framework with compressed communications
Li et al.: AnycostFL: Efficient on-demand federated learning over heterogeneous edge devices
CN115034390A (en): Deep learning model inference acceleration method based on cloud-edge-terminal collaboration
CN115696296B (en): Active edge caching method based on community discovery and weighted federated learning
CN111158912A (en): Task offloading decision method based on deep learning in cloud-fog collaborative computing environments
CN118612754A (en): Intelligent-networking-capable three-in-one terminal control system and method
CN112836822A (en): Federated learning strategy optimization method and device based on broad learning
Li et al.: CBFL: A communication-efficient federated learning framework from data redundancy perspective
Feng et al.: Joint detection and computation offloading with age of information in mobile edge networks
Zhang et al.: Fundamental limits of communication efficiency for model aggregation in distributed learning: A rate-distortion approach
Zhu et al.: Dynamic task offloading in power grid internet of things: A fast-convergent federated learning approach
CN118450438A (en): Low-energy computation migration method for delay-sensitive tasks in edge-cloud hybrid systems
CN112667406A (en): Task offloading and data caching method in cloud-edge fusion heterogeneous networks
Chen et al.: Semi-asynchronous hierarchical federated learning over mobile edge networks
Zhu et al.: Efficient model compression for hierarchical federated learning

Legal Events

Date | Code | Title | Description

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (Application publication date: 2021-05-18)

