CN114742143A - Safe training model construction method, device and system based on federated learning - Google Patents

Safe training model construction method, device and system based on federated learning

Info

Publication number
CN114742143A
Authority
CN
China
Prior art keywords
node
state
training data
difference
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210340718.XA
Other languages
Chinese (zh)
Inventor
黄秀丽
石聪聪
费稼轩
于鹏飞
高先周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Global Energy Internet Research Institute Co ltd Nanjing Branch
State Grid Jiangsu Electric Power Co Ltd
State Grid Corp of China SGCC
Big Data Center of State Grid Corp of China
Original Assignee
Global Energy Internet Research Institute Co ltd Nanjing Branch
State Grid Jiangsu Electric Power Co Ltd
State Grid Corp of China SGCC
Big Data Center of State Grid Corp of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Global Energy Internet Research Institute Co ltd Nanjing Branch, State Grid Jiangsu Electric Power Co Ltd, State Grid Corp of China SGCC, Big Data Center of State Grid Corp of China
Priority to CN202210340718.XA
Publication of CN114742143A
Legal status: Pending (current)


Abstract

Translated from Chinese

Embodiments of the present invention relate to the field of computer technology, and in particular to a method, device, system and storage medium for constructing a secure training model based on federated learning. The method includes: obtaining the current training data uploaded by each node after differential privacy processing; obtaining at least one historical training data item and a historical reduced-dimension difference corresponding to each node; determining the state of each node based on its current training data, historical training data and historical reduced-dimension difference, comparing the predicted result with the real result to judge whether the current node is faulty; screening the nodes according to their states and determining a screening result; and, based on the screening result, training the preconfigured initial model, determining a target model and distributing it to each node. In this way, abnormal nodes that appear during federated learning can be eliminated, which greatly improves work efficiency.

Description

Translated from Chinese
Method, device and system for constructing a secure training model based on federated learning

Technical Field

Embodiments of the present invention relate to the field of computer technology, and in particular to a method, device, system and storage medium for constructing a secure training model based on federated learning.

Background

Federated learning is a distributed machine-learning framework that allows multiple nodes to collaboratively train a model without exchanging local data. Because federated learning is still at an early stage of development, it faces many security problems that urgently need to be solved. In a power federated learning system, the quality of the data provided by the individual power nodes may be uneven, and some nodes may upload wrong parameters to the aggregation server, or stop sending parameters to it, because of external attacks or internal failures, which degrades the performance of the global model.

Therefore, a method for detecting abnormal nodes is needed to solve the above problems.

Summary of the Invention

In view of this, to solve the above technical problems in the prior art, embodiments of the present invention provide a method, device, system and storage medium for constructing a secure training model based on federated learning.

In a first aspect, an embodiment of the present invention provides a method for constructing a secure training model based on federated learning, the method comprising:

obtaining the current training data uploaded by each node after differential privacy processing, wherein the current training data is obtained after each node trains the initial model preconfigured in that node;

obtaining at least one historical training data item and a historical reduced-dimension difference corresponding to each node;

determining the state of each node based on its current training data, historical training data and historical reduced-dimension difference;

screening the nodes according to their states and determining a screening result;

training the preconfigured initial model based on the screening result, determining a target model and distributing it to each node.

In a possible implementation, determining the state of each node based on its current training data, historical training data and historical reduced-dimension difference includes:

obtaining the historical training data of the i-th node among all nodes;

determining the real difference between the historical training data and the current training data;

feeding the real difference into a preset dimensionality-reduction model to determine the real reduced-dimension difference;

feeding at least one historical reduced-dimension difference corresponding to the i-th node into a preset prediction model to determine the predicted reduced-dimension difference, where i is a positive integer;

feeding the predicted reduced-dimension difference into a preset dimensionality-increase model to determine the corresponding predicted difference;

determining the state of each node based on the real difference, the predicted difference, the real reduced-dimension difference and the predicted reduced-dimension difference.

In a possible implementation, determining the state of each node based on the real difference, the predicted difference, the real reduced-dimension difference and the predicted reduced-dimension difference includes:

judging the relationship between a first Euclidean distance, between the real difference and the predicted difference, and a preset first threshold, and determining a first judgment result;

judging the relationship between a second Euclidean distance, between the real reduced-dimension difference and the predicted reduced-dimension difference, and a preset second threshold, and determining a second judgment result;

determining the state of the i-th node based on the first judgment result and the second judgment result.

In a possible implementation, determining the state of the i-th node based on the first judgment result and the second judgment result includes:

when the first Euclidean distance is less than the first threshold and the second Euclidean distance is less than the second threshold, determining that the i-th node is in a first state;

when the first Euclidean distance is greater than the first threshold and the second Euclidean distance is less than the second threshold, or the first Euclidean distance is less than the first threshold and the second Euclidean distance is greater than the second threshold, determining that the i-th node is in a second state;

when the first Euclidean distance is greater than the first threshold and the second Euclidean distance is greater than the second threshold, determining that the i-th node is in a third state.

In a possible implementation, training the preconfigured initial model based on the screening result, determining a target model and distributing it to each node includes:

screening out the nodes in the first state and the nodes in the second state;

training the preconfigured model based on the training data of the nodes in the first state, and determining the target model;

sending the target model to the nodes in the first state and to the nodes in the second state respectively.

In a possible implementation, the method further includes:

screening out the nodes in the third state;

sending error data to the nodes in the third state.

In a possible implementation, obtaining the current training data uploaded by each node after differential privacy processing includes:

sending the initial model to each node, so that the node trains the initial model and determines the gradient data generated during training;

receiving the privacy gradient data uploaded by each node after differential privacy processing, and using the privacy gradient data as the current training data.

In a second aspect, an embodiment of the present invention provides an apparatus for constructing a secure training model based on federated learning, comprising:

an acquisition module, configured to obtain the current training data uploaded by each node after differential privacy processing, wherein the current training data is obtained after each node trains the initial model preconfigured in that node, and to obtain at least one historical training data item and a historical reduced-dimension difference corresponding to each node;

a processing module, configured to determine the state of each node based on its current training data, historical training data and historical reduced-dimension difference, and to screen the nodes according to their states and determine a screening result;

a determination module, configured to train the preconfigured initial model based on the screening result, determine the target model and distribute it to each node.

In a third aspect, the present application provides an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor performs the steps of the method according to any implementation of the first aspect.

In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method according to any implementation of the first aspect are implemented.

The present invention provides a method for constructing a secure training model based on federated learning. The current training data uploaded by each node after differential privacy processing is obtained; the differential privacy processing strongly protects the privacy of the nodes. At least one historical training data item and a historical reduced-dimension difference corresponding to each node are obtained, and the state of each node is determined based on its current training data, historical training data and historical reduced-dimension difference; the predicted result is compared with the real result to judge whether the current node is faulty. The nodes are screened according to their states to determine a screening result, and based on the screening result the preconfigured initial model is trained, and the target model is determined and distributed to each node. In this way, the abnormal nodes that appear during federated learning, which may already have been attacked, can be eliminated, which greatly improves the security of federated learning and greatly improves work efficiency.

Description of Drawings

FIG. 1 is a schematic flowchart of a method for constructing a secure training model based on federated learning provided by an embodiment of the present invention;

FIG. 2 is a schematic flowchart of a method for determining the state of each node provided by an embodiment of the present invention;

FIG. 3 is a schematic flowchart of a method for determining the state of each node provided by an embodiment of the present invention;

FIG. 4 is a schematic diagram of node state screening provided by an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of an apparatus for constructing a secure training model based on federated learning provided by an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a system for constructing a secure training model based on federated learning provided by an embodiment of the present invention.

Detailed Description

To make the purposes, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention.

Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

To facilitate understanding of the embodiments of the present invention, further explanation is given below with specific embodiments in conjunction with the accompanying drawings; the embodiments do not constitute a limitation of the embodiments of the present invention.

FIG. 1 is a schematic flowchart of a method for constructing a secure training model based on federated learning provided by an embodiment of the present invention. The execution of the method steps is shown in FIG. 1. The method includes:

Step 110: obtain the current training data uploaded by each node after differential privacy processing.

Specifically, the current training data is obtained after each node trains the initial model preconfigured in that node.

It should be noted that, in practical applications, various kinds of data can be used, for example model gradients, model loss-function results and so on; this is not limited here and depends on the actual application.

Optionally, in one example, the initial model is sent to each node so that the node trains the initial model and determines the gradient data generated during training; the privacy gradient data uploaded by each node after differential privacy processing is received, and the privacy gradient data is used as the current training data.

This current training data is the basis used in the subsequent steps to judge whether a node is normal.
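
As a rough illustration of the node-side step described above, the sketch below clips a local gradient and adds Gaussian noise before it is uploaded as the "privacy gradient data". The Gaussian mechanism, the clipping bound and the noise scale are assumptions made for illustration; the text does not specify which differential-privacy mechanism is used.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a local gradient and add Gaussian noise before upload.

    The clipping bound and noise scale are illustrative placeholders; the
    actual differential-privacy mechanism is not specified in the text.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    if norm > clip_norm:                      # bound each node's contribution
        grad = grad * (clip_norm / norm)
    noise = rng.normal(0.0, noise_std, size=grad.shape)
    return grad + noise                       # the node's "privacy gradient data"

# Example: a node privatizes its round-j gradient and uploads the result
# to the aggregation server as its current training data.
local_gradient = np.random.randn(128)         # stand-in for a real model gradient
current_training_data = privatize_gradient(local_gradient)
```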

Step 120: obtain at least one historical training data item and a historical reduced-dimension difference corresponding to each node.

Specifically, the historical training data and the historical reduced-dimension differences can be obtained directly from a database.

Further, it should be noted that, in one embodiment, when each child node uploads its own current training data to the aggregation server for the first time, there is no historical data yet, so the subsequent steps that use historical training data and historical reduced-dimension differences for security detection cannot be performed. In this case, under the default initial conditions, the data initially uploaded by each child node is taken to be correct, and in the initial state each child node is also considered normal.

Once enough historical training data and historical reduced-dimension differences have accumulated, these data are used to perform the subsequent steps.

Step 130: determine the state of each node based on its current training data, historical training data and historical reduced-dimension difference.

Specifically, since the historical data is taken to be correct, a prediction made from the historical data should not differ much from the current actual result. Based on this idea, and combined with the data obtained in the above steps, the steps shown in FIG. 2 are used to determine the state of each node:

Step 210: obtain the historical training data of the i-th node among all nodes.

Specifically, the historical training data can be obtained directly from the database.

Further, the historical training data is used to compute the real difference in step 220 and the prediction in step 240.

Step 220: determine the real difference between the historical training data and the current training data.

Specifically, in this step, the historical training data used is only the training data of the round immediately preceding the current training data.

In one embodiment, assuming federated learning has reached round 8, the current training data is denoted g_8, and the historical training data referenced in step 220 only needs to be g_7.

Further, after the historical training data has been obtained, the real difference D_j can be calculated according to the following formula:

D_j = g_j - g_{j-1}

where g_j is the training data of the j-th round.
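
A minimal sketch of how the aggregation server might keep each node's previous upload and form the real difference D_j = g_j - g_{j-1}; the dictionary-based history store and node identifiers are assumptions made for illustration.

```python
import numpy as np

history = {}  # node_id -> training data of the previous round, g_{j-1}

def real_difference(node_id, g_current):
    """Return D_j = g_j - g_{j-1} for one node, or None in its first round."""
    g_previous = history.get(node_id)
    history[node_id] = g_current          # keep g_j for the next round
    if g_previous is None:                # no history yet: node treated as normal
        return None
    return g_current - g_previous

# Example: difference of node "node-3" between rounds j-1 and j.
real_difference("node-3", np.zeros(128))          # first round, no history yet
D_j = real_difference("node-3", np.ones(128))     # D_j is an all-ones vector here
```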

Step 230: feed the real difference into a preset dimensionality-reduction model, and determine the real reduced-dimension difference.

Specifically, the dimensionality-reduction model includes, but is not limited to, models such as an autoencoder, a principal component analysis model and so on; this is not limited here.

In an optional embodiment, assume there is a real difference D_j and that the dimensionality-reduction model is an autoencoder; the real difference D_j is fed into the autoencoder to generate the real reduced-dimension difference d_j. The real reduced-dimension difference is also used in the subsequent calculations.
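
The sketch below shows one way this dimensionality-reduction step could look with a small autoencoder in PyTorch. The layer sizes and latent dimension are illustrative assumptions; the decoder half is reused later as the dimensionality-increase model of step 250.

```python
import torch
import torch.nn as nn

class DiffAutoencoder(nn.Module):
    """Encoder maps the real difference D_j to a low-dimensional d_j;
    the decoder maps a reduced difference back to the original dimension."""

    def __init__(self, in_dim=128, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # reduced-dimension representation
        return self.decoder(z), z      # reconstruction and latent code

ae = DiffAutoencoder()
D_j = torch.randn(1, 128)              # real difference from step 220
_, d_j = ae(D_j)                       # d_j: real reduced-dimension difference
```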

Step 240: feed at least one historical reduced-dimension difference corresponding to the i-th node into a preset prediction model, and determine the predicted reduced-dimension difference.

Specifically, in an optional embodiment, a Long Short-Term Memory (LSTM) model can be chosen as the prediction model. When using the LSTM, the historical reduced-dimension differences need to be input in order to generate the current predicted reduced-dimension difference.

For example, assume there is a set of historical reduced-dimension differences [d_{j-1}, d_{j-2}, d_{j-3}]. From the three historical reduced-dimension differences in the set, the LSTM model outputs a predicted reduced-dimension difference for the j-th round of federated learning, denoted d′_j.

It should be noted that, in this embodiment, the LSTM model is used as the prediction model; practical applications are not limited to this model, and any prediction model can be used, which is not limited here.
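
A sketch of the prediction step with an LSTM that consumes a short window of historical reduced-dimension differences and outputs the predicted one for the current round; the window length, hidden size and latent dimension are assumptions chosen to match the autoencoder sketch above.

```python
import torch
import torch.nn as nn

class DiffPredictor(nn.Module):
    """Predict d'_j from a window of historical reduced-dimension differences."""

    def __init__(self, latent_dim=8, hidden_dim=16):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, history):               # history: (batch, window, latent_dim)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])          # prediction for the current round

predictor = DiffPredictor()
past = torch.randn(1, 3, 8)                   # [d_{j-3}, d_{j-2}, d_{j-1}]
d_pred = predictor(past)                      # predicted reduced-dimension difference d'_j
```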

Step 250: feed the predicted reduced-dimension difference into a preset dimensionality-increase model, and determine the corresponding predicted difference.

Specifically, after the predicted reduced-dimension difference has been obtained in step 240, the predicted difference still needs to be obtained through the dimensionality-increase model.

In an optional example, a decoder can be used as the dimensionality-increase model; the predicted reduced-dimension difference d′_j can be input into the decoder to obtain the predicted difference D′_j after dimensionality increase.
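
Continuing the sketches above, the dimensionality-increase step can simply reuse the decoder half of the hypothetical autoencoder to map the predicted reduced-dimension difference back to the original dimension; the layer sizes again mirror the earlier assumptions.

```python
import torch
import torch.nn as nn

# Decoder half of the autoencoder sketched for step 230 (sizes are assumptions).
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 128))

d_pred = torch.randn(1, 8)      # predicted reduced-dimension difference d'_j
D_pred = decoder(d_pred)        # predicted difference D'_j back in the full dimension
```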

Step 260: determine the state of each node based on the real difference, the predicted difference, the real reduced-dimension difference and the predicted reduced-dimension difference.

Specifically, through the processing of steps 210 to 250, the real difference D_j, the predicted difference D′_j, the real reduced-dimension difference d_j and the predicted reduced-dimension difference d′_j are obtained, and the state of each node can then be determined. The specific node-state judgment process can follow the steps shown in FIG. 3.

Step 310: judge the relationship between the first Euclidean distance, between the real difference and the predicted difference, and a preset first threshold, and determine a first judgment result.

Specifically, the first Euclidean distance between the real difference and the predicted difference can be calculated by the following formula:

dist1_{i,j} = ‖ D_j^i - D′_j^i ‖_2

where i is the index of the i-th node, D_j^i is the real difference of the i-th node in the j-th round, and D′_j^i is the predicted difference of the i-th node in the j-th round.

Further, the relationship between the first Euclidean distance and the preset first threshold is recorded as the first judgment result and used in the subsequent state judgment.

Step 320: judge the relationship between the second Euclidean distance, between the real reduced-dimension difference and the predicted reduced-dimension difference, and a preset second threshold, and determine a second judgment result.

Specifically, the second Euclidean distance between the real reduced-dimension difference and the predicted reduced-dimension difference can be calculated by the following formula:

dist2_{i,j} = ‖ d_j^i - d′_j^i ‖_2

where i is the index of the i-th node, d_j^i is the real reduced-dimension difference of the i-th node in the j-th round, and d′_j^i is the predicted reduced-dimension difference of the i-th node in the j-th round.

Further, the relationship between the second Euclidean distance and the preset second threshold is recorded as the second judgment result and used in the subsequent state judgment.

Step 330: determine the state of the i-th node based on the first judgment result and the second judgment result.

Specifically, after steps 310 and 320 above, one of the following three cases occurs:

(1) When the first Euclidean distance is less than the first threshold and the second Euclidean distance is less than the second threshold, the i-th node is determined to be in the first state.

(2) When the first Euclidean distance is greater than the first threshold and the second Euclidean distance is less than the second threshold, or the first Euclidean distance is less than the first threshold and the second Euclidean distance is greater than the second threshold, the i-th node is determined to be in the second state.

(3) When the first Euclidean distance is greater than the first threshold and the second Euclidean distance is greater than the second threshold, the i-th node is determined to be in the third state.
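
Putting the two threshold tests together, the three-way state decision of steps 310-330 might look like the following sketch; the threshold values are placeholders and not values taken from the text.

```python
import numpy as np

FIRST_STATE, SECOND_STATE, THIRD_STATE = 1, 2, 3

def node_state(D_real, D_pred, d_real, d_pred, threshold1=0.5, threshold2=0.1):
    """Classify one node from the two Euclidean distances and the two thresholds."""
    dist1 = np.linalg.norm(D_real - D_pred)   # first Euclidean distance (full dimension)
    dist2 = np.linalg.norm(d_real - d_pred)   # second Euclidean distance (reduced dimension)
    if dist1 < threshold1 and dist2 < threshold2:
        return FIRST_STATE                    # normal node
    if dist1 > threshold1 and dist2 > threshold2:
        return THIRD_STATE                    # suspected faulty or attacked node
    return SECOND_STATE                       # only one of the two tests failed

# Example: a node whose predictions match its real values is in the first state.
state = node_state(np.zeros(128), np.zeros(128), np.zeros(8), np.zeros(8))
```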

Step 140: screen the nodes according to their states and determine a screening result.

Specifically, the state of each node is judged according to the method described in step 130 above, and the nodes are screened and classified according to their states.

In an optional embodiment, referring to FIG. 4, assume there are five nodes A, B, C, D and E, where A is in the first state, B is in the third state, C is in the first state, D is in the second state, and E is in the third state. After the nodes are screened and classified, A and C form one class, D forms one class, and B and E form one class.

Step 150: based on the screening result, train the preconfigured initial model, determine the target model and distribute it to each node.

Specifically, according to the screening result of step 140, the preconfigured model is trained based on the training data of the nodes in the first state, and the target model is determined. Clearly, for a node in the first state, the aggregation server uses the data sent by that node for training.

Further, the target model is sent to the nodes in the first state and to the nodes in the second state respectively. Clearly, for a node in the first state, the aggregation server both receives data from it and sends data to it, whereas for a node in the second state, the aggregation server only sends data to it and does not accept the data it sends.

Further, the nodes in the third state are screened out; the aggregation server regards these nodes as no longer secure, puts them on a blacklist, refuses to use the data they send, and sends error data to the nodes in the third state.

It should be noted that, after a node has been blacklisted, besides sending error data to the node in the third state, the aggregation server can also close the channel to that node and refuse to interact with nodes in the third state.
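
The following sketch collects the screening rules of steps 140 and 150 on the aggregation-server side: only first-state uploads enter aggregation, the resulting model goes back to first-state and second-state nodes, and third-state nodes are blacklisted. The simple averaging of the uploads and the integer state codes are assumptions; the text does not fix the aggregation rule.

```python
import numpy as np

FIRST_STATE, SECOND_STATE, THIRD_STATE = 1, 2, 3

def screen_and_aggregate(uploads, states):
    """uploads: node_id -> current training data (e.g. privatized gradients);
    states:  node_id -> FIRST_STATE, SECOND_STATE or THIRD_STATE."""
    trusted = [uploads[n] for n, s in states.items() if s == FIRST_STATE]
    # Only first-state nodes contribute to training the global/target model.
    aggregated = np.mean(trusted, axis=0) if trusted else None

    model_receivers = [n for n, s in states.items() if s in (FIRST_STATE, SECOND_STATE)]
    blacklisted = [n for n, s in states.items() if s == THIRD_STATE]  # refused; sent error data
    return aggregated, model_receivers, blacklisted

uploads = {"A": np.ones(4), "B": 5 * np.ones(4), "C": np.ones(4)}
states = {"A": FIRST_STATE, "B": THIRD_STATE, "C": SECOND_STATE}
update, receivers, blocked = screen_and_aggregate(uploads, states)
# update averages only node A's data; A and C receive the target model; B is blacklisted.
```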

The present invention provides a method for constructing a secure training model based on federated learning. The current training data uploaded by each node after differential privacy processing is obtained; the differential privacy processing strongly protects the privacy of the nodes. At least one historical training data item and a historical reduced-dimension difference corresponding to each node are obtained, and the state of each node is determined based on its current training data, historical training data and historical reduced-dimension difference; the predicted result is compared with the real result to judge whether the current node is faulty. The nodes are screened according to their states to determine a screening result, and based on the screening result the preconfigured initial model is trained, and the target model is determined and distributed to each node. In this way, the abnormal nodes that appear during federated learning, which may already have been attacked, can be eliminated, which greatly improves the security of federated learning and greatly improves work efficiency.

FIG. 5 shows an apparatus for constructing a secure training model based on federated learning provided by an embodiment of the present invention. The apparatus includes an acquisition module 501, a processing module 502 and a determination module 503.

The acquisition module 501 is configured to obtain the current training data uploaded by each node after differential privacy processing, wherein the current training data is obtained after each node trains the initial model preconfigured in that node, and to obtain at least one historical training data item and a historical reduced-dimension difference corresponding to each node.

The processing module 502 is configured to determine the state of each node based on its current training data, historical training data and historical reduced-dimension difference, and to screen the nodes according to their states and determine a screening result.

The determination module 503 is configured to train the preconfigured initial model based on the screening result, determine the target model and distribute it to each node.

The processing module 502 is configured to obtain the historical training data of the i-th node among all nodes;

determine the real difference between the historical training data and the current training data;

feed the real difference into a preset dimensionality-reduction model to determine the real reduced-dimension difference;

feed at least one historical reduced-dimension difference corresponding to the i-th node into a preset prediction model to determine the predicted reduced-dimension difference, where i is a positive integer;

feed the predicted reduced-dimension difference into a preset dimensionality-increase model to determine the corresponding predicted difference;

and determine the state of each node based on the real difference, the predicted difference, the real reduced-dimension difference and the predicted reduced-dimension difference.

The processing module 502 is configured to judge the relationship between the first Euclidean distance, between the real difference and the predicted difference, and a preset first threshold, and determine a first judgment result;

judge the relationship between the second Euclidean distance, between the real reduced-dimension difference and the predicted reduced-dimension difference, and a preset second threshold, and determine a second judgment result;

and determine the state of the i-th node based on the first judgment result and the second judgment result.

The processing module 502 is configured to determine that the i-th node is in the first state when the first Euclidean distance is less than the first threshold and the second Euclidean distance is less than the second threshold;

determine that the i-th node is in the second state when the first Euclidean distance is greater than the first threshold and the second Euclidean distance is less than the second threshold, or when the first Euclidean distance is less than the first threshold and the second Euclidean distance is greater than the second threshold;

and determine that the i-th node is in the third state when the first Euclidean distance is greater than the first threshold and the second Euclidean distance is greater than the second threshold.

The determination module 503 is configured to screen out the nodes in the first state and the nodes in the second state;

train the preconfigured model based on the training data of the nodes in the first state to determine the target model;

and send the target model to the nodes in the first state and to the nodes in the second state respectively.

The determination module 503 is configured to screen out the nodes in the third state;

and send error data to the nodes in the third state.

The acquisition module 501 is configured to send the initial model to each node, so that the node trains the initial model and determines the gradient data generated during training;

and receive the privacy gradient data uploaded by each node after differential privacy processing, using the privacy gradient data as the current training data.

Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an electronic device provided by an optional embodiment of the present invention. As shown in FIG. 6, the electronic device may include: at least one processor 61, for example a CPU (Central Processing Unit); at least one communication interface 63; a memory 64; and at least one communication bus 62. The communication bus 62 is used to implement connection and communication between these components. The communication interface 63 may include a display (Display) and a keyboard (Keyboard); optionally, the communication interface 63 may also include a standard wired interface and a wireless interface. The memory 64 may be a high-speed RAM (Random Access Memory, a volatile random-access memory) or a non-volatile memory, for example at least one disk memory. Optionally, the memory 64 may also be at least one storage device located away from the aforementioned processor 61. The processor 61 may be combined with the apparatus described in FIG. 6; the memory 64 stores an application program, and the processor 61 calls the program code stored in the memory 64 to perform any of the above method steps.

The communication bus 62 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 62 may be divided into an address bus, a data bus, a control bus and so on. For ease of presentation, only one thick line is used in FIG. 6, but this does not mean that there is only one bus or only one type of bus.

The memory 64 may include a volatile memory, for example a random-access memory (RAM); the memory may also include a non-volatile memory, for example a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the memory 64 may also include a combination of the above types of memory.

The processor 61 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.

The processor 61 may further include a hardware chip. The above hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The above PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.

Optionally, the memory 64 is also used to store program instructions. The processor 61 may invoke the program instructions to implement the method shown in any embodiment of the present application.

An embodiment of the present invention further provides a non-transitory computer storage medium. The computer storage medium stores computer-executable instructions, and the computer-executable instructions can execute the method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random-access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also include a combination of the above types of memory.

Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

Translated from Chinese

1. A method for constructing a secure training model based on federated learning, comprising:
obtaining the current training data uploaded by each node after differential privacy processing, wherein the current training data is obtained after each node trains an initial model preconfigured in that node;
obtaining at least one historical training data item and a historical reduced-dimension difference corresponding to each node;
determining the state of each node based on its current training data, historical training data and historical reduced-dimension difference;
screening the nodes according to the states, and determining a screening result;
training the preconfigured initial model based on the screening result, determining a target model and distributing it to each node.

2. The method according to claim 1, wherein determining the state of each node based on its current training data, historical training data and historical reduced-dimension difference comprises:
obtaining the historical training data of the i-th node among all the nodes;
determining the real difference between the historical training data and the current training data;
feeding the real difference into a preset dimensionality-reduction model to determine the real reduced-dimension difference;
feeding at least one historical reduced-dimension difference corresponding to the i-th node into a preset prediction model to determine the predicted reduced-dimension difference, wherein i is a positive integer;
feeding the predicted reduced-dimension difference into a preset dimensionality-increase model to determine the corresponding predicted difference;
determining the state of each node based on the real difference, the predicted difference, the real reduced-dimension difference and the predicted reduced-dimension difference.

3. The method according to claim 2, wherein determining the state of each node based on the real difference, the predicted difference, the real reduced-dimension difference and the predicted reduced-dimension difference comprises:
judging the relationship between a first Euclidean distance, between the real difference and the predicted difference, and a preset first threshold, and determining a first judgment result;
judging the relationship between a second Euclidean distance, between the real reduced-dimension difference and the predicted reduced-dimension difference, and a preset second threshold, and determining a second judgment result;
determining the state of the i-th node based on the first judgment result and the second judgment result.

4. The method according to claim 3, wherein determining the state of the i-th node based on the first judgment result and the second judgment result comprises:
when the first Euclidean distance is less than the first threshold and the second Euclidean distance is less than the second threshold, determining that the i-th node is in a first state;
when the first Euclidean distance is greater than the first threshold and the second Euclidean distance is less than the second threshold, or the first Euclidean distance is less than the first threshold and the second Euclidean distance is greater than the second threshold, determining that the i-th node is in a second state;
when the first Euclidean distance is greater than the first threshold and the second Euclidean distance is greater than the second threshold, determining that the i-th node is in a third state.

5. The method according to claim 1, wherein training the preconfigured initial model based on the screening result, determining a target model and distributing it to each node comprises:
screening out the nodes in the first state and the nodes in the second state;
training the preconfigured model based on the training data of the nodes in the first state to determine the target model;
sending the target model to the nodes in the first state and to the nodes in the second state respectively.

6. The method according to claim 5, further comprising:
screening out the nodes in the third state;
sending error data to the nodes in the third state.

7. The method according to claim 1, wherein obtaining the current training data uploaded by each node after differential privacy processing comprises:
sending the initial model to each node, so that the node trains the initial model and determines the gradient data generated during training;
receiving the privacy gradient data uploaded by each node after differential privacy processing, and using the privacy gradient data as the current training data.

8. An apparatus for constructing a secure training model based on federated learning, comprising:
an acquisition module, configured to obtain the current training data uploaded by each node after differential privacy processing, wherein the current training data is obtained after each node trains an initial model preconfigured in that node, and to obtain at least one historical training data item and a historical reduced-dimension difference corresponding to each node;
a processing module, configured to determine the state of each node based on its current training data, historical training data and historical reduced-dimension difference, and to screen the nodes according to the states and determine a screening result;
a determination module, configured to train the preconfigured initial model based on the screening result, determine a target model and distribute it to each node.

9. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor performs the steps of the method according to any one of claims 1-7.

10. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the steps of the method according to any one of claims 1-7 are implemented.
CN202210340718.XA2022-03-312022-03-31Safe training model construction method, device and system based on federal learningPendingCN114742143A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210340718.XA | 2022-03-31 | 2022-03-31 | Safe training model construction method, device and system based on federated learning

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210340718.XA | 2022-03-31 | 2022-03-31 | Safe training model construction method, device and system based on federated learning

Publications (1)

Publication Number | Publication Date
CN114742143A | 2022-07-12

Family

ID=82278947

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210340718.XA (pending; CN114742143A (en)) | Safe training model construction method, device and system based on federated learning | 2022-03-31 | 2022-03-31

Country Status (1)

Country | Link
CN (1) | CN114742143A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20210150269A1 (en) * | 2019-11-18 | 2021-05-20 | International Business Machines Corporation | Anonymizing data for preserving privacy during use for federated machine learning
CN112256874A (en) * | 2020-10-21 | 2021-01-22 | 平安科技(深圳)有限公司 | Model training method, text classification method, device, computer equipment and medium
CN112651511A (en) * | 2020-12-04 | 2021-04-13 | 华为技术有限公司 | Model training method, data processing method and device
CN112434280A (en) * | 2020-12-17 | 2021-03-02 | 浙江工业大学 | Block chain-based federal learning defense method
CN112749392A (en) * | 2021-01-07 | 2021-05-04 | 西安电子科技大学 | Method and system for detecting abnormal nodes in federated learning
CN113467927A (en) * | 2021-05-20 | 2021-10-01 | 杭州趣链科技有限公司 | Block chain based trusted participant federated learning method and device
CN114169412A (en) * | 2021-11-23 | 2022-03-11 | 北京邮电大学 | Federal learning model training method for large-scale industrial chain privacy calculation
CN113962988A (en) * | 2021-12-08 | 2022-01-21 | 东南大学 | Anomaly detection method and system for power inspection images based on federated learning
CN114219147A (en) * | 2021-12-13 | 2022-03-22 | 南京富尔登科技发展有限公司 | Power distribution station fault prediction method based on federal learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周俊; 方国英; 吴楠: "联邦学习安全与隐私保护研究综述" (A survey of security and privacy protection in federated learning), 西华大学学报(自然科学版) (Journal of Xihua University, Natural Science Edition), no. 04, 10 July 2020 (2020-07-10) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116257972A (en) * | 2022-11-29 | 2023-06-13 | 元始智能科技(南通)有限公司 | Equipment state evaluation method and system based on field self-adaption and federal learning
CN116257972B (en) * | 2022-11-29 | 2024-02-20 | 元始智能科技(南通)有限公司 | Equipment state evaluation method and system based on field self-adaption and federal learning

Similar Documents

Publication | Title
CN109074519B (en) | Information processing apparatus, information processing method, and program
CN113608916B (en) | Method, device, electronic device and storage medium for fault diagnosis
CN106209862A (en) | A kind of steal-number defence implementation method and device
CN109754359A (en) | A method and system for pooling processing applied to convolutional neural networks
CN110837432A (en) | Method and device for determining abnormal node in service cluster and monitoring server
CN114520736A (en) | Internet of things security detection method, device, equipment and storage medium
CN114742143A (en) | Safe training model construction method, device and system based on federated learning
TWI727639B (en) | Method and device for tracing block chain transactions
CN113791792A (en) | Application calling information acquisition method and device and storage medium
CN114679335A (en) | Power monitoring system network security risk assessment training, assessment method and equipment
WO2025016349A1 (en) | Abnormal data detection method and apparatus, and storage medium
CN117114087B (en) | Fault prediction method, computer device, and readable storage medium
US20240345934A1 (en) | Systems, apparatuses, methods, and computer program products for generating one or more monitoring operations
CN111049877A (en) | Big data external output method and device and data open platform
US20170220408A1 (en) | Interactive multi-level failsafe enablement
CN110381035A (en) | Network security test method, device, computer equipment and readable storage medium storing program for executing
CN116506276A (en) | Mining method and system for relevance of alarm data
CN115361231A (en) | Access baseline-based host abnormal traffic detection method, system and equipment
CN114039765A (en) | Safety management and control method and device for power distribution Internet of things and electronic equipment
CN114363005A (en) | ICMP detection method, system, device and medium based on machine learning
CN111224916B (en) | A method and device for DDOS attack detection
CN114980106B (en) | A data double security screening system based on edge computing
CN115187787B (en) | Method and device for local manifold enhancement for self-supervised multi-view representation learning
CN113055339B (en) | Process data processing method and device, storage medium and computer equipment
CN110311930B (en) | Identification method and device for remote control loop connection behavior and electronic equipment

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
