CN111598254B - Federated learning modeling method, device and readable storage medium - Google Patents

Federated learning modeling method, device and readable storage medium

Info

Publication number
CN111598254B
CN111598254B (application CN202010445868.8A)
Authority
CN
China
Prior art keywords
parameter
verification
model
parameters
encryption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010445868.8A
Other languages
Chinese (zh)
Other versions
CN111598254A (en)
Inventor
李月
蔡杭
范力欣
张天豫
吴锦和
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202010445868.8A
Publication of CN111598254A
Priority to PCT/CN2020/135032 (WO2021232754A1)
Application granted
Publication of CN111598254B
Status: Active
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

Translated from Chinese


Figure 202010445868

The present application discloses a federated learning modeling method, a device, and a readable storage medium. The federated learning modeling method includes: receiving the encrypted model parameters sent by each second device together with the verification parameters corresponding to those encrypted model parameters; performing zero-knowledge verification on each encrypted model parameter based on the corresponding verification parameters, so as to determine the false encrypted model parameters among them and obtain a zero-knowledge verification result; and, based on the zero-knowledge verification result and the encrypted model parameters, coordinating the second devices to perform federated learning modeling. This application solves the technical problems of low efficiency and poor accuracy in federated learning modeling.


Description

Federated learning modeling method, device and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence in financial technology (Fintech), and in particular, to a method and apparatus for federated learning modeling, and a readable storage medium.
Background
With the continuous development of financial technologies, especially internet technology and finance, more and more technologies (such as distributed computing, blockchain and artificial intelligence) are applied to the financial field, but the financial industry also places higher requirements on these technologies.
With the continuous development of computer software and artificial intelligence, federated learning modeling has become increasingly mature. At present, each participant in federated learning modeling generally feeds its own encrypted model parameters back to the coordinator, which aggregates the encrypted model parameters and returns the aggregated parameters to each participant to carry out federated learning modeling. However, if a malicious participant provides false encrypted model parameters during training, the overall quality of the federated model obtained is directly affected, and the whole federated learning modeling process may even fail, so the efficiency and accuracy of federated learning modeling are low.
Disclosure of Invention
The present application mainly aims to provide a federated learning modeling method, device and readable storage medium, so as to solve the technical problems of low efficiency and poor accuracy of federated learning modeling in the prior art.
In order to achieve the above object, the present application provides a federated learning modeling method, where the federated learning modeling method is applied to a first device and includes:
receiving encryption model parameters sent by each second device and verification parameters corresponding to the encryption model parameters;
based on each verification parameter, respectively carrying out zero knowledge verification on each encryption model parameter so as to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result;
and coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
Optionally, the step of coordinating each second device to perform federated learning modeling based on the zero-knowledge verification result and each encrypted model parameter includes:
based on the zero-knowledge verification result, eliminating the false encrypted model parameters from the encrypted model parameters to obtain the trusted model parameters;
and aggregating the trusted model parameters to obtain aggregated parameters, and feeding the aggregated parameters back to each second device so that each second device can update its local training model until the local training model reaches the preset training end condition.
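The coordination flow described above can be sketched as follows. This is a hypothetical, plaintext stand-in for illustration only: the names are illustrative, the zero-knowledge check is replaced by a pre-computed flag, and aggregation is an equal-weight average rather than the patent's homomorphic aggregation.

```python
# Hypothetical sketch of the first device's flow: verify each submission,
# drop false parameters, aggregate the trusted ones, feed the result back.
# The stand-in verify/aggregate functions are illustrative only.
def coordinate(submissions, verify, aggregate, broadcast):
    # submissions: list of (encrypted_model_parameter, verification_parameters)
    trusted = [c for c, v in submissions if verify(c, v)]
    agg = aggregate(trusted)
    broadcast(agg)
    return agg

sent = []
agg = coordinate(
    [(4.0, True), (9.0, False), (8.0, True)],   # middle entry flagged false
    verify=lambda c, v: v,                      # stand-in zero-knowledge check
    aggregate=lambda xs: sum(xs) / len(xs),     # equal-weight average
    broadcast=sent.append,
)
assert agg == 6.0 and sent == [6.0]             # false parameter 9.0 excluded
```

With the flagged parameter excluded, the aggregate is (4.0 + 8.0) / 2 = 6.0 rather than a value skewed by the false submission.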
Optionally, the zero-knowledge verification is performed on each encryption model parameter based on each verification parameter, so as to determine a false encryption model parameter in each encryption model parameter, and the step of obtaining a zero-knowledge verification result includes:
respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter;
and respectively verifying whether each encryption model parameter is a false encryption model parameter or not based on each first zero knowledge proof result and each second zero knowledge proof result to obtain a zero knowledge verification result.
Optionally, the verification parameters include verification model parameters and verification random parameters,
the step of calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter respectively includes:
carrying out validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result;
and encrypting each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result.
Optionally, the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter;
the step of performing validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result comprises:
performing exponentiation operation on the first verification challenge parameter and each encryption model parameter respectively to obtain a first exponentiation operation result corresponding to each encryption model parameter;
performing exponentiation operation on the second verification challenge parameter and each encryption model parameter respectively to obtain a second exponentiation operation result corresponding to each encryption model parameter;
generating each of the first zero knowledge verification results based on each of the first exponentiation results and each of the second exponentiation results.
Optionally, the step of verifying whether each of the cryptographic model parameters is a dummy cryptographic model parameter based on each of the first zero knowledge proof results and each of the second zero knowledge proof results respectively includes:
comparing the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter respectively;
if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the encryption model parameter are not consistent, determining that the encryption model parameter is the false encryption model parameter;
and if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the encryption model parameter are consistent, judging that the encryption model parameter is not the false encryption model parameter.
In order to achieve the above object, the present application further provides a federated learning modeling method, where the federated learning modeling method is applied to a second device and includes:
acquiring a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to obtain an encrypted model parameter;
generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter and a preset verification challenge parameter;
sending the encrypted model parameter, the verification model parameter and the second verification random parameter to a first device so that the first device can carry out zero-knowledge verification to obtain a zero-knowledge verification result;
and receiving an aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encrypted model parameters, and updating a local training model corresponding to the model training parameters based on the aggregation parameter until the local training model reaches a preset training end condition.
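The second-device steps above can be sketched as one round-trip. This is a toy stand-in, not the patent's scheme: the "encryption" is a reversible offset, the proof is a placeholder tuple, and the exchange callback plays the role of the first device.

```python
# Hypothetical sketch of one participant (second-device) round. The
# "encryption" and proof below are toy stand-ins, not the patent's scheme.
class LocalModel:
    def __init__(self, w):
        self.w = w
    def update(self, agg):
        self.w = agg  # replace local parameter with the aggregated one

def participant_round(model, encrypt, make_proof, exchange):
    m = model.w                 # model training parameter
    h_m = encrypt(m)            # encrypted model parameter
    proof = make_proof(m)       # verification parameters for the ZK check
    agg = exchange(h_m, proof)  # first device verifies, aggregates, replies
    model.update(agg)
    return agg

model = LocalModel(3.0)
agg = participant_round(
    model,
    encrypt=lambda m: m + 1000.0,                    # toy "encryption"
    make_proof=lambda m: ("challenge-response", m),  # placeholder proof
    exchange=lambda c, p: ((c - 1000.0) + 5.0) / 2,  # average with a 5.0 peer
)
assert model.w == 4.0   # (3.0 + 5.0) / 2
```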
Optionally, the model training parameters comprise current model parameters and auxiliary model parameters,
the step of obtaining model training parameters comprises:
performing iterative training on a local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and acquiring the current model parameters of the local training model;
and acquiring the prior model parameters of the local training model, and generating the auxiliary model parameters based on the prior model parameters.
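The two acquisition steps above can be sketched together. Note the patent does not specify how the auxiliary model parameter is derived from the prior model parameters; the delta used below is only one plausible choice, assumed for illustration.

```python
# Hypothetical sketch of assembling the model training parameters: the
# current parameters plus an auxiliary parameter derived from the prior
# round. The delta derivation is an assumption, not from the patent.
def get_training_params(prior, current):
    auxiliary = current - prior   # assumed derivation for illustration
    return current, auxiliary

cur, aux = get_training_params(prior=2.0, current=5.0)
assert (cur, aux) == (5.0, 3.0)
```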
The present application further provides a federated learning modeling apparatus, where the federated learning modeling apparatus is a virtual apparatus applied to a first device, and the federated learning modeling apparatus includes:
the receiving module is used for receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters;
the zero knowledge verification module is used for respectively performing zero knowledge verification on each encryption model parameter based on each verification parameter so as to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result;
and the coordination module is used for coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
Optionally, the coordination module comprises:
the eliminating submodule is used for eliminating the false encryption model parameters from the encryption model parameters based on the zero-knowledge verification result to obtain the credible model parameters;
and the aggregation sub-module is used for aggregating the trusted model parameters to obtain aggregation parameters, and feeding the aggregation parameters back to each second device so that each second device can update its local training model until the local training model reaches the preset training end condition.
Optionally, the zero knowledge verification module comprises:
the calculation submodule is used for respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter;
and the zero knowledge verification submodule is used for respectively verifying whether each encryption model parameter is a false encryption model parameter or not based on each first zero knowledge proof result and each second zero knowledge proof result so as to obtain a zero knowledge verification result.
Optionally, the computation submodule includes:
the validity verification unit is used for verifying the validity of each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result;
and the encryption unit is used for encrypting each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result.
Optionally, the validity verifying unit includes:
the first power operation subunit is configured to perform power operation on the first verification challenge parameter and each encryption model parameter, respectively, to obtain a first power operation result corresponding to each encryption model parameter;
a second exponentiation subunit, configured to perform exponentiation operation on the second verification challenge parameter and each encryption model parameter, respectively, to obtain a second exponentiation operation result corresponding to each encryption model parameter;
a generating subunit, configured to generate each first zero knowledge verification result based on each first power operation result and each second power operation result.
Optionally, the zero knowledge verification sub-module includes:
a comparison unit, configured to compare the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter respectively;
a first determining unit, configured to determine that the cryptographic model parameter is the dummy cryptographic model parameter if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the cryptographic model parameter are not consistent;
a second determining unit, configured to determine that the cryptographic model parameter is not the dummy cryptographic model parameter if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the cryptographic model parameter are consistent.
In order to achieve the above object, the present application further provides a federated learning modeling apparatus, where the federated learning modeling apparatus is a virtual apparatus applied to a second device, and the federated learning modeling apparatus includes:
the encryption module is used for acquiring a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to acquire an encrypted model parameter;
the generation module is used for generating verification model parameters and second verification random parameters based on the first verification random parameters, the model training parameters and preset verification challenge parameters;
the sending module is used for sending the encrypted model parameter, the verification model parameter and the second verification random parameter to a first device so that the first device can carry out zero-knowledge verification and obtain a zero-knowledge verification result;
and the model updating module is used for receiving the aggregation parameters fed back by the first equipment based on the zero knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameters until the local training model reaches a preset training ending condition.
Optionally, the encryption module includes:
the obtaining submodule is used for carrying out iterative training on a local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and obtaining the current model parameters of the local training model;
and the generation submodule is used for acquiring the previous model parameters of the local training model and generating the auxiliary model parameters based on the previous model parameters.
The present application further provides a federated learning modeling device. The federated learning modeling device is a physical device and includes: a memory, a processor, and a program of the federated learning modeling method stored in the memory and executable on the processor, where the program, when executed by the processor, performs the steps of the federated learning modeling method described above.
The present application also provides a readable storage medium having stored thereon a program implementing the federated learning modeling method, where the program, when executed by a processor, implements the steps of the federated learning modeling method described above.
According to the present application, the encrypted model parameters sent by each second device and the verification parameters corresponding to those encrypted model parameters are received; zero-knowledge verification is then performed on each encrypted model parameter based on the corresponding verification parameters, so as to determine the false encrypted model parameters and obtain a zero-knowledge verification result; and each second device is then coordinated to perform federated learning modeling based on the zero-knowledge verification result. In other words, after receiving the encrypted model parameters and their corresponding verification parameters from each second device, the present application performs zero-knowledge verification on each encrypted model parameter to determine the false encrypted model parameters, and, based on the zero-knowledge verification result, can remove the false encrypted model parameters before coordinating the second devices to perform federated learning modeling.
That is, the present application provides a method for determining false encrypted model parameters among the local models based on zero-knowledge proofs. When a malicious participant provides false encrypted model parameters during training, they can be accurately identified and eliminated, so that federated learning modeling is never carried out on encrypted model parameters mixed with false ones. This improves the overall quality of the federated model obtained, improves the efficiency and accuracy of federated learning modeling, and thereby solves the technical problems of low efficiency and poor accuracy in federated learning modeling.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart diagram of a first embodiment of a federated learning modeling method of the present application;
FIG. 2 is a schematic flow chart diagram of a second embodiment of the federated learning modeling method of the present application;
FIG. 3 is a schematic flow chart diagram of a third embodiment of the federated learning modeling method of the present application;
FIG. 4 is a schematic diagram of the device structure of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the federated learning modeling method of the present application, the federated learning modeling method is applied to a first device, and referring to FIG. 1, the federated learning modeling method includes:
step S10, receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters;
in this embodiment, it should be noted that, before performing federal learning modeling, the first device performs negotiation interaction with each of the second devices to determine standard verification challenge parameters, where the number of the standard verification challenge parameters may be determined during the negotiation interaction. The first device is a coordinator performing federated learning modeling and is used for coordinating the second devices to perform federated learning modeling, the second devices are participants performing federated learning modeling, the encryption model parameters are model training parameters encrypted based on a homomorphic encryption algorithm, for example, assuming that a public key of the participants held by the second devices is P, and a first verification random parameter used for homomorphic encryption is r1Encrypting model parameter h after encrypting model training parameter m based on homomorphic encryption algorithmm=Enc(P,m,r1) Where Enc is a homomorphic encrypted symbol, and further, the model training parameter is a model parameter of a local training model of the second device, for example, assuming that the local training model is a linear model and an expression is Y ═ β01x12x2+…+βnxnThen the model parameter is the vector (β)012+…+βn)。
Additionally, it should be noted that the verification parameters are parameters used for the zero-knowledge proof, and include a second verification random parameter and a verification model parameter. The verification model parameter is generated by the second device from the model training parameter and the verification challenge parameters; for example, if the verification challenge parameters are x1 and x2 and the model training parameter is m, the verification model parameter is n = m*x1 + m*x2. The second verification random parameter is generated by the second device from the first verification random parameter and the verification challenge parameters; for example, if the first verification random parameter is r1 and the verification challenge parameters are x1 and x2, then the second verification random parameter is r2 = r1^x1 * r1^x2.
Additionally, it should be noted that a malicious participant may modify the encryption parameters used in the homomorphic encryption of its model training parameters in order to produce false encrypted model parameters. When the coordinator performs federated modeling, the encrypted model parameters sent by the second devices are usually aggregated directly to obtain the aggregated parameters, so any false encrypted model parameter among them affects the efficiency and accuracy of federated model training. For example, suppose second device A sends the encrypted model parameter 5a. If second device B is a malicious participant and sends the false encrypted model parameter 100b, and the aggregation is a weighted average, the first aggregated parameter obtained is (5a + 100b)/2; if second device B is not malicious and sends the encrypted model parameter 5b, the second aggregated parameter obtained is (5a + 5b)/2. Therefore, if a malicious participant exists among the second devices, the aggregated parameter obtained by the first device differs greatly from the one obtained when no malicious participant exists, which greatly affects the efficiency and accuracy of federated model training.
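The numeric example above can be reproduced directly; taking a = b = 1 makes the skew from a single false parameter concrete.

```python
# Reproducing the aggregation example above with a = b = 1: one false
# parameter (100b instead of 5b) badly skews the weighted average.
a, b = 1.0, 1.0
honest   = (5 * a + 5 * b) / 2      # no malicious participant
poisoned = (5 * a + 100 * b) / 2    # second device B is malicious
assert honest == 5.0
assert poisoned == 52.5             # more than ten times the honest value
```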
Additionally, it should be noted that, in this embodiment, the first device and the second device both perform encryption based on a homomorphic encryption algorithm, where in an implementable scheme, the homomorphic encryption algorithm should satisfy the following properties:
c ═ Enc (PK, m, r), and for C1=Enc(PK,m1,r1) And C) and2=Enc(PK,m2,r2) And satisfies the following conditions:
Figure BDA0002504869740000091
wherein, C, C1And C2All are parameters to be encrypted after encryption, PK is an encrypted secret key, m and m1And m2As the parameters to be encrypted, the parameters are encrypted,r、r1and r2The random number required for encryption.
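The additive-homomorphic property above can be checked with a minimal Paillier-style cipher. The toy primes and parameter values below are for illustration only; a real deployment needs large primes and proper key generation.

```python
# Minimal Paillier-style encryption over toy primes, to check the
# additive-homomorphic property C1 * C2 = Enc(PK, m1 + m2, r1 * r2).
# Toy parameters for illustration only; real use needs large primes.
p, q = 251, 241
N = p * q            # public key PK
N2 = N * N
g = N + 1            # standard Paillier generator choice

def enc(m, r):
    # Enc(PK, m, r) = g^m * r^N mod N^2
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

m1, r1 = 5, 17
m2, r2 = 7, 23
C1, C2 = enc(m1, r1), enc(m2, r2)
# product of ciphertexts encrypts the sum of plaintexts
assert (C1 * C2) % N2 == enc(m1 + m2, (r1 * r2) % N2)
```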
Step S20, based on each verification parameter, respectively performing zero-knowledge verification on each encryption model parameter to determine a false encryption model parameter in each encryption model parameter and obtain a zero-knowledge verification result;
in this embodiment, based on each verification parameter, performing zero-knowledge verification on each encryption model parameter, respectively, to determine a dummy encryption model parameter in each encryption model parameter, and obtain a zero-knowledge verification result, specifically, based on each verification parameter, calculating a first zero-knowledge proof result and a second zero-knowledge proof result corresponding to each encryption model parameter, respectively verifying whether each encryption model parameter is a dummy encryption model parameter, and determining a dummy encryption model parameter in each encryption model parameter, and obtaining a zero-knowledge verification result, based on a first zero-knowledge proof result and a second zero-knowledge proof result corresponding to each encryption model parameter, where the zero-knowledge verification result is a verification result of whether each encryption model parameter is a dummy encryption model parameter, and the dummy encryption model parameter is a model training parameter in which a malicious party maliciously encrypts the model training parameter, and the malicious encryption is achieved, for example, by changing the encryption parameters when homomorphic encryption is performed.
Wherein, the zero knowledge verification is respectively carried out on each encryption model parameter based on each verification parameter so as to determine a false encryption model parameter in each encryption model parameter, and the step of obtaining the zero knowledge verification result comprises the following steps:
step S21, respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each of the verification parameters;
in this embodiment, it should be noted that the verification parameters include a verification model parameter and a second verification random parameter.
Respectively calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter, specifically, for each verification parameter, executing the following steps:
performing a homomorphic addition operation on the encrypted model parameter based on the preset verification challenge parameters to obtain the first zero-knowledge proof result, and performing homomorphic encryption on the verification model parameter based on the preset coordinator public key and the second verification random parameter to obtain the second zero-knowledge proof result. For example, assume the preset verification challenge parameters are x1 and x2 and the encrypted model parameter is h_m = Enc(P1, m, r1), where P1 is the participant public key, r1 is the first verification random parameter and m is the model training parameter. The verification model parameter is n = m*x1 + m*x2 and the second verification random parameter is r2 = r1^x1 * r1^x2. Then the first zero-knowledge proof result is
h_m^x1 * h_m^x2 = Enc(P1, m*x1 + m*x2, r1^x1 * r1^x2)
and the second zero-knowledge proof result is
Enc(P2, n, r2)
where P2 is the coordinator public key; if no second device has made a malicious modification, the participant public key and the coordinator public key should be identical, and the two proof results coincide.
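The proof-and-check computation above can be traced end to end with the same toy Paillier-style cipher. A single key stands in for the (identical) participant and coordinator public keys; all numbers are illustrative only.

```python
# Tracing the zero-knowledge check with a toy Paillier-style cipher.
# A single modulus N stands in for the identical participant and
# coordinator public keys; toy numbers for illustration only.
p, q = 251, 241
N = p * q
N2 = N * N
g = N + 1

def enc(m, r):
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

x1, x2 = 3, 4                     # preset verification challenge parameters
m, r1 = 9, 17                     # model training parameter, first random
h_m = enc(m, r1)                  # encrypted model parameter
n_v = m * x1 + m * x2             # verification model parameter n
r2 = (pow(r1, x1, N2) * pow(r1, x2, N2)) % N2   # second verification random

first  = (pow(h_m, x1, N2) * pow(h_m, x2, N2)) % N2  # first proof result
second = enc(n_v, r2)                                # second proof result
assert first == second            # honest ciphertext: results coincide

h_fake = enc(m + 1, r1)           # false encrypted model parameter
first_fake = (pow(h_fake, x1, N2) * pow(h_fake, x2, N2)) % N2
assert first_fake != second       # tampering is detected
```

The honest case matches because raising the ciphertext to x1 and x2 and multiplying yields exactly Enc(P, m*x1 + m*x2, r1^x1 * r1^x2), which is the second proof result; any tampering with m or the randomness breaks the equality.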
Step S22, based on each of the first zero knowledge proof results and each of the second zero knowledge proof results, respectively verifying whether each of the encryption model parameters is a dummy encryption model parameter, to obtain a zero knowledge verification result.
In this embodiment, based on each of the first zero knowledge proof results and each of the second zero knowledge proof results, whether each of the encryption model parameters is a dummy encryption model parameter is verified, and a zero knowledge verification result is obtained, specifically, the following steps are performed for each of the first zero knowledge proof result and the second zero knowledge proof result corresponding to each of the verification parameters:
comparing the first zero-knowledge proof result with the second zero-knowledge proof result and determining whether they are consistent. If they are consistent, it is determined that the second device made no malicious modification when homomorphically encrypting the model training parameter, that is, the encrypted model parameter is not a false encrypted model parameter; if they are not consistent, it is determined that the second device made a malicious modification when homomorphically encrypting the model training parameter, that is, the encrypted model parameter is a false encrypted model parameter.
Wherein the step of verifying whether each of the cryptographic model parameters is a dummy cryptographic model parameter based on each of the first zero knowledge proof results and each of the second zero knowledge proof results, respectively, comprises:
step S221, comparing the first zero knowledge proof result and the second zero knowledge proof result corresponding to each encryption model parameter respectively;
in this embodiment, the first zero knowledge proof result and the second zero knowledge proof result corresponding to each of the encryption model parameters are respectively compared, specifically, by calculating the difference between the first zero knowledge proof result and the second zero knowledge proof result corresponding to each of the encryption model parameters.
Step S222, if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the cryptographic model parameter are not consistent, determining that the cryptographic model parameter is the false cryptographic model parameter;
in this embodiment, if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the cryptographic model parameter are not consistent, it is determined that the cryptographic model parameter is the false cryptographic model parameter, specifically, if the difference is not 0, it is determined that the second device corresponding to the cryptographic model parameter performs malicious encryption on the model training parameter, and then it is determined that the cryptographic model parameter is the false cryptographic model parameter, and then a false identifier is given to the cryptographic model parameter, so as to identify the cryptographic model parameter as the false cryptographic model parameter.
Step S223, if the first zero knowledge proof result and the second zero knowledge proof result corresponding to the encryption model parameter are consistent, determining that the encryption model parameter is not the false encryption model parameter.
In this embodiment, if the first zero-knowledge proof result and the second zero-knowledge proof result corresponding to the cryptographic model parameter are consistent, it is determined that the cryptographic model parameter is not the false cryptographic model parameter, specifically, if the difference is 0, it is determined that the second device corresponding to the cryptographic model parameter does not perform malicious encryption on the model training parameter, and then it is determined that the cryptographic model parameter is not the false cryptographic model parameter, and a trusted identifier is given to the cryptographic model parameter, so as to identify the cryptographic model parameter as the trusted model parameter.
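The comparison logic of steps S221 to S223 can be sketched as follows; this is a minimal illustration in which the proof results are plain integers and all names (`label_parameters`, the party identifiers) are hypothetical, not taken from the specification:

```python
def label_parameters(proof_pairs):
    """Compare the first and second zero-knowledge proof results for each
    encryption model parameter and attach a calibration identifier:
    'trusted' when the two results are consistent, 'false' when they differ."""
    verdicts = {}
    for party_id, (first_result, second_result) in proof_pairs.items():
        # A zero difference means the party performed homomorphic
        # encryption honestly (steps S222/S223).
        verdicts[party_id] = "trusted" if first_result - second_result == 0 else "false"
    return verdicts

# Two honest participants and one whose proof results disagree.
pairs = {"party_A": (42, 42), "party_B": (17, 17), "party_C": (9, 13)}
verdicts = label_parameters(pairs)
```

The returned identifiers play the role of the "false identifier" and "trusted identifier" described above.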
And step S30, coordinating each second device to perform federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
In this embodiment, it should be noted that the zero-knowledge verification result includes a calibration identifier corresponding to each encryption model parameter, where the calibration identifier is an identifier for identifying whether the encryption model parameter is a false encryption model parameter.
And coordinating each second device to perform federated learning modeling based on the zero-knowledge verification result and each encryption model parameter: specifically, based on each calibration identifier, the false encryption model parameters are eliminated from the encryption model parameters to obtain the trusted model parameters, the trusted model parameters are aggregated to obtain an aggregation parameter, and each second device is then coordinated to perform federated learning modeling based on the aggregation parameter.
The step of coordinating each second device to perform federated learning modeling based on the zero-knowledge verification result and each encryption model parameter includes:
step S31, based on the zero-knowledge verification result, eliminating the false encryption model parameters from the encryption model parameters to obtain the credible model parameters;
in this embodiment, it should be noted that after finding the false cryptographic model parameter, the coordinator may penalize a malicious party corresponding to the false cryptographic model parameter according to a preset incentive mechanism or cancel a subsequent qualification of the malicious party to participate in federal learning modeling.
Step S32, performing aggregation processing on each of the trusted model parameters to obtain aggregated parameters, and feeding back the aggregated parameters to each of the second devices, so that each of the second devices updates its own local training model until the local training model reaches a preset training end condition.
In this embodiment, each trusted model parameter is aggregated to obtain an aggregation parameter, and the aggregation parameter is fed back to each second device so that each second device updates its local training model until the local training model reaches a preset training end condition. Specifically, the trusted model parameters are aggregated based on a preset aggregation processing rule, where the preset aggregation processing rule includes weighted averaging, summation, and the like, to obtain the aggregation parameter. The aggregation parameter is then sent to each second device, so that each second device decrypts the aggregation parameter based on the participant private key corresponding to the participant public key to obtain a decrypted aggregation parameter, updates the local training model held by its own party based on the decrypted aggregation parameter to obtain an updated local training model, and judges whether the updated local training model reaches the preset training end condition. If the updated local training model reaches the preset training end condition, the task of federated learning modeling is judged to be complete; if not, the local training model is iteratively trained again, and when the local training model again reaches the preset iteration threshold value, the model training parameters of the local training model are re-obtained, re-encrypted, and sent to the coordinator to perform federated training again, until the local training model reaches the preset training end condition. The training end condition includes reaching a preset maximum number of iterations, convergence of the loss function corresponding to the local training model, and the like.
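Steps S31 and S32 can be sketched as a filter-then-aggregate pipeline. This is a minimal sketch with scalar parameters and illustrative names; the weighted-average rule is one of the preset aggregation rules mentioned in the text:

```python
def aggregate_trusted(params, verdicts, weights=None):
    """Eliminate parameters flagged 'false' (step S31), then aggregate the
    trusted remainder with a weighted average (step S32)."""
    trusted = {p: v for p, v in params.items() if verdicts[p] == "trusted"}
    if weights is None:
        # Default: equal weights over the trusted participants.
        weights = {p: 1.0 / len(trusted) for p in trusted}
    return sum(value * weights[p] for p, value in trusted.items())

# party_C submitted a false encryption model parameter and is excluded.
params = {"party_A": 0.8, "party_B": 1.2, "party_C": 99.0}
verdicts = {"party_A": "trusted", "party_B": "trusted", "party_C": "false"}
agg = aggregate_trusted(params, verdicts)  # (0.8 + 1.2) / 2
```

Without the filtering step, the outlier 99.0 would dominate the aggregation parameter, which is exactly the failure mode the zero-knowledge verification prevents.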
Further, the local training model includes a risk control model, where the risk control model is a machine learning model for evaluating the loan risk of a user. When a malicious party exists among the parties, the false encryption model parameters sent by the malicious party and the encryption model parameters sent by the normal parties are aggregated to obtain an erroneous aggregation parameter that differs greatly from the accurate aggregation parameter, and each second device updating the risk control model based on the erroneous aggregation parameter reduces the accuracy of the risk control model in evaluating the loan risk of the user. Based on the federated learning modeling method in the present application, the false encryption model parameters can be screened out and eliminated from the encryption model parameters sent by each party to obtain trusted encryption model parameters, so that throughout the federated learning modeling process the risk control model is always updated based on aggregation parameters obtained by aggregating trusted encryption model parameters. The assessment of the user's loan risk by the risk control model is therefore more accurate, that is, the loan risk assessment accuracy of the risk control model is improved.
In this embodiment, zero knowledge verification is performed on each encryption model parameter respectively based on each verification parameter by receiving the encryption model parameter sent by each second device and the verification parameter corresponding to the encryption model parameter, so as to determine a false encryption model parameter in each encryption model parameter, obtain a zero knowledge verification result, and then coordinate each second device to perform federal learning modeling based on the zero knowledge verification result. That is, in this embodiment, after receiving the encryption model parameters and the verification parameters corresponding to the encryption model parameters sent by each second device, based on each verification parameter, zero knowledge verification is performed on each encryption model parameter, so as to determine a false encryption model parameter in each encryption model parameter, and obtain a zero knowledge verification result, further, based on the zero knowledge verification result, the false encryption model parameter may be removed from each encryption model parameter, so as to coordinate each second device to perform federal learning modeling. 
That is, the embodiment provides a method for determining false encryption model parameters in each local model based on zero-knowledge proof, and then when a malicious participant provides the false encryption model parameters in the training process, the false encryption model parameters can be accurately identified and eliminated, so that the occurrence of the situation that the federal learning modeling is performed based on each encryption model parameter mixed with the false encryption model parameters is avoided, the overall model quality of the federal model obtained through the federal learning modeling is improved, the efficiency and accuracy of the federal learning modeling are improved, and the technical problems of low efficiency and poor accuracy of the federal learning modeling are solved.
Further, referring to fig. 2, based on the first embodiment in the present application, in another embodiment of the present application, the verification parameters include a verification model parameter and a verification random parameter,
the step of calculating a first zero knowledge proof result and a second zero knowledge proof result corresponding to each verification parameter respectively includes:
step S211, based on preset verification challenge parameters, performing validity verification on each encryption model parameter to obtain each first zero knowledge verification result;
in this embodiment, it should be noted that the preset verification challenge parameters include a first verification challenge parameter and a second verification challenge parameter, and the encryption model parameters include a first encryption model parameter and a second encryption model parameter. The first encryption model parameter is the encryption parameter obtained after the second device performs homomorphic encryption on the previous model parameter, and the second encryption model parameter is the encryption parameter obtained after the second device performs homomorphic encryption on the current model parameter. The current model parameter is the model parameter extracted when the local training model reaches the preset training iteration threshold value during the current round of federation, and the previous model parameter is a model parameter based on rounds of federation before the current round; for example, the historical model parameters corresponding to the previous three rounds of federation are taken and weighted-averaged to obtain the previous model parameter. For example, assuming the current model parameter is m and the previous model parameter is m0, the first encryption model parameter is h0 = Enc(P, m0, r1), where P is the participant public key and r1 is the first verification random parameter, and the second encryption model parameter is hm = Enc(P, m, r2), where r2 is the second verification random parameter.
Based on preset verification challenge parameters, performing validity verification on each encryption model parameter to obtain each first zero knowledge verification result, and specifically, executing the following steps for each encryption model parameter:
respectively performing exponentiation on the first encryption model parameter and the second encryption model parameter based on the first verification challenge parameter and the second verification challenge parameter, and multiplying the results, to obtain the first zero-knowledge verification result. For example, assuming the first encryption model parameter h0 = Enc(P, m0, r1), the second encryption model parameter hm = Enc(P, m, r2), the first verification challenge parameter x1 and the second verification challenge parameter x2, the first zero-knowledge proof result is h0^x1 * hm^x2. Based on the properties of the homomorphic encryption algorithm, it can be obtained that h0^x1 * hm^x2 = Enc(P, m0*x1 + m*x2, r1^x1 * r2^x2), where P is the participant public key, x1 is the first verification challenge parameter, x2 is the second verification challenge parameter, r1 is the first verification random parameter, r2 is the second verification random parameter, m is the current model parameter, and m0 is the previous model parameter.
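The homomorphic property above can be checked concretely with a toy Paillier instance, where Enc(P, m, r) = (n+1)^m * r^n mod n^2. This is an illustration only: the primes are tiny and insecure, and the specification does not name a particular homomorphic scheme:

```python
# Toy Paillier cryptosystem (insecure demo parameters).
p, q = 17, 19
n = p * q            # plays the role of the public key P (modulus)
n2 = n * n
g = n + 1

def enc(msg, r):
    """Paillier encryption: Enc(msg, r) = g^msg * r^n mod n^2."""
    return (pow(g, msg, n2) * pow(r, n, n2)) % n2

m0, m = 5, 7         # previous and current model parameters
r1, r2 = 12, 25      # first and second verification random parameters
x1, x2 = 3, 4        # verification challenge parameters

h0, hm = enc(m0, r1), enc(m, r2)   # first/second encryption model parameters
# First zero-knowledge result: h0^x1 * hm^x2 mod n^2.
first_zk = (pow(h0, x1, n2) * pow(hm, x2, n2)) % n2
# Homomorphic identity: equals Enc(m0*x1 + m*x2, r1^x1 * r2^x2).
r3 = (pow(r1, x1, n2) * pow(r2, x2, n2)) % n2
second_zk = enc(m0 * x1 + m * x2, r3)
```

In Paillier, multiplying ciphertexts adds plaintexts and raising a ciphertext to a power multiplies the plaintext by that power, which is exactly the property the verification relies on.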
The preset verification challenge parameters comprise a first verification challenge parameter and a second verification challenge parameter;
the step of performing validity verification on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result comprises:
step A10, performing exponentiation operation on the first verification challenge parameter and each encryption model parameter respectively to obtain a first exponentiation operation result corresponding to each encryption model parameter;
in this embodiment, the first verification challenge parameter and each encryption model parameter are respectively subjected to exponentiation to obtain a first exponentiation result corresponding to each encryption model parameter, and specifically, for each encryption model parameter, the following steps are performed: performing a power operation on the first encryption model parameter based on the first verification challenge parameter to obtain a first exponentiation result; for example, assuming the first verification challenge parameter is x and the first encryption model parameter is h, the first exponentiation result is h^x.
Step A20, performing exponentiation operation on the second verification challenge parameter and each encryption model parameter respectively to obtain a second exponentiation operation result corresponding to each encryption model parameter;
in this embodiment, the second verification challenge parameter and each encryption model parameter are respectively subjected to exponentiation operation to obtain a second exponentiation operation result corresponding to each encryption model parameter, and specifically, for each encryption model parameter, the following steps are performed: and performing power operation on the second encryption model parameter based on the second verification challenge parameter to obtain a second power operation result.
Step a30, generating each of the first zero knowledge verification results based on each of the first exponentiation results and each of the second exponentiation results.
In this embodiment, each of the first zero knowledge verification results is generated based on each of the first exponentiation results and each of the second exponentiation results, specifically, a product of the first exponentiation result and the second exponentiation result is found, and the product is taken as the first zero knowledge verification result.
Step S212, based on a preset coordinator public key and each verification random parameter, performing encryption processing on each verification model parameter to obtain each second zero knowledge verification result.
In this embodiment, it should be noted that if each participant is a valid participant, the participant public key is consistent with the preset coordinator public key. The verification random parameters include a third verification random parameter, which is calculated based on the first verification random parameter, the second verification random parameter, the first verification challenge parameter and the second verification challenge parameter; for example, assuming the first verification random parameter is r1, the second verification random parameter is r2, the first verification challenge parameter is x1, and the second verification challenge parameter is x2, the third verification random parameter is r3 = r1^x1 * r2^x2.
Additionally, the verification model parameters are calculated based on the first verification challenge parameter, the second verification challenge parameter, the current model parameter and the previous model parameter; for example, assuming the first verification challenge parameter is x1, the second verification challenge parameter is x2, the current model parameter is m, and the previous model parameter is m0, the verification model parameter is n = m0*x1 + m*x2.
Based on a preset coordinator public key and each verification random parameter, performing encryption processing on each verification model parameter to obtain each second zero knowledge verification result, and specifically, executing the following steps for each verification model parameter:
homomorphic encryption is performed on the verification model parameter based on the preset coordinator public key and the third verification random parameter to obtain the second zero-knowledge verification result. For example, assuming the third verification random parameter is r3 = r1^x1 * r2^x2, where the first verification random parameter is r1, the second verification random parameter is r2, the first verification challenge parameter is x1, the second verification challenge parameter is x2, and the verification model parameter is n = m0*x1 + m*x2, and assuming the coordinator public key is P, the second zero-knowledge verification result is Enc(P, n, r3).
Further, if the participant does not maliciously encrypt the encryption model parameters (for example, by maliciously tampering with the encryption algorithm or with the encrypted parameters), the first zero-knowledge proof result is the same as the second zero-knowledge proof result, that is, the encryption model parameters provided by the participant are trusted model parameters; if the participant maliciously encrypts the encryption model parameters, the first zero-knowledge proof result differs from the second zero-knowledge proof result, that is, the encryption model parameters provided by the participant are false encryption model parameters.
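The honest and malicious cases can be contrasted end to end with the same toy Paillier scheme (tiny insecure primes, illustrative names such as `verify` and `n_param`; the specification does not fix a concrete scheme):

```python
# Toy Paillier setup (insecure demo parameters).
p, q = 17, 19
n, n2, g = p * q, (p * q) ** 2, p * q + 1

def enc(msg, r):
    return (pow(g, msg, n2) * pow(r, n, n2)) % n2

def verify(h0, hm, x1, x2, n_param, r3):
    """Verifier side: accept iff h0^x1 * hm^x2 == Enc(n_param, r3)."""
    first_zk = (pow(h0, x1, n2) * pow(hm, x2, n2)) % n2
    return first_zk == enc(n_param, r3)

m0, m, r1, r2, x1, x2 = 5, 7, 12, 25, 3, 4
n_param = m0 * x1 + m * x2                      # verification model parameter
r3 = (pow(r1, x1, n2) * pow(r2, x2, n2)) % n2   # third verification random parameter

honest = verify(enc(m0, r1), enc(m, r2), x1, x2, n_param, r3)
# A malicious party substitutes a different current model parameter
# while keeping the stale verification parameters.
forged = verify(enc(m0, r1), enc(m + 1, r2), x1, x2, n_param, r3)
```

Any tampering with the ciphertext that is not matched by consistent verification parameters breaks the equality, which is how the false encryption model parameter is detected.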
In this embodiment, validity verification is performed on each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result, and then encryption processing is performed on each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result. That is, the embodiment provides a method for calculating the first zero knowledge proof result and the second zero knowledge proof result, and then after the first zero knowledge proof result and the second zero knowledge proof result are obtained through calculation, the first zero knowledge proof result and the second zero knowledge proof result are only compared to determine whether the encryption model parameters are the false encryption model parameters, so that a foundation is laid for determining the false encryption model parameters in each encryption model parameter, and a foundation is laid for solving the technical problems of low federal learning modeling efficiency and poor accuracy.
Further, referring to fig. 3, based on the first embodiment and the second embodiment in the present application, in another embodiment of the present application, the federal learning modeling method is applied to a second device, and the federal learning modeling method includes:
step B10, obtaining a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to obtain an encrypted model parameter;
in this embodiment, the federal learning modeling includes at least one federal, and in each federal, the second device performs iterative training on a local training model until a preset iteration threshold is reached, then sends model parameters of the local training model to the first device, receives aggregation parameters fed back by the first device based on the model parameters, updates the local training model based on the aggregation parameters, and uses the local training model as an initial model of the next federal until the local training model reaches a preset training end condition, where the preset training end condition includes reaching of a maximum iteration number, convergence of a loss function, and the like.
Obtaining a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to obtain an encryption model parameter: specifically, when the local training model reaches the preset iteration threshold, the model parameter of the local training model is extracted as the model training parameter, the first verification random parameter is obtained, and the model training parameter is homomorphically encrypted based on the first verification random parameter and the preset public key to obtain the encryption model parameter. For example, assuming the model training parameter is m, the first verification random parameter is r1, and the preset public key is P, the encryption model parameter is hm = Enc(P, m, r1), where Enc denotes homomorphic encryption.
Wherein the model training parameters comprise current model parameters and auxiliary model parameters,
the step of obtaining model training parameters comprises:
step B11, performing iterative training on the local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and acquiring the current model parameters of the local training model;
in this embodiment, it should be noted that the current model parameter is a current iteration model parameter of the local training model when the current iteration model reaches a preset iteration threshold in the current federation.
And step B12, acquiring the prior model parameters of the local training model, and generating the auxiliary model parameters based on the prior model parameters.
In this embodiment, a previous model parameter of the local training model is obtained, and the auxiliary model parameter is generated based on the previous model parameter. Specifically, each previous iteration model parameter of the previous rounds of federation corresponding to the current round is obtained, and the previous iteration model parameters are weighted and averaged to obtain the auxiliary model parameter. For example, if the previous iteration model parameters are a, b and c, with corresponding weights of 20%, 30% and 50%, the auxiliary model parameter is m0 = a*20% + b*30% + c*50%.
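The weighted average in step B12 can be written out directly; the values below are invented for illustration, only the 20%/30%/50% weights come from the example in the text:

```python
# (name, previous iteration model parameter, weight) per prior round.
prior_params = [("a", 10.0, 0.20), ("b", 20.0, 0.30), ("c", 30.0, 0.50)]

# Auxiliary model parameter m0 = a*20% + b*30% + c*50%.
m0 = sum(value * weight for _, value, weight in prior_params)
# 10*0.2 + 20*0.3 + 30*0.5 = 23.0
```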
Step B20, generating verification model parameters and second verification random parameters based on the first verification random parameters, the model training parameters and preset verification challenge parameters;
in this embodiment, it should be noted that, in a possible implementation, the preset verification challenge parameters may be calculated by the coordinator from the previous encryption model parameters sent by each participant in the previous round of federation and a preset hash function. For example, if there are 10 participants, the corresponding 10 previous encryption model parameters are freely combined, and the n results of the free combination are input to the preset hash function to obtain the verification challenge parameters x1, x2, ..., xn. The specific generation manner and number of the preset verification challenge parameters x1, x2, ..., xn are not limited.
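One possible instantiation of this hash-based challenge derivation is sketched below; the choice of SHA-256, the domain-separation index, and the challenge range are all assumptions, since the specification leaves the hash function and number of challenges open:

```python
import hashlib

def derive_challenges(prev_ciphertexts, count=2, modulus=2**16):
    """Derive verification challenge parameters x1..x_count by hashing the
    previous round's encryption model parameters together with an index."""
    challenges = []
    for i in range(count):
        digest = hashlib.sha256()
        digest.update(str(i).encode())          # domain separation per challenge
        for c in prev_ciphertexts:
            digest.update(str(c).encode())
        # Keep challenges small and non-zero for the exponentiations.
        challenges.append(int.from_bytes(digest.digest()[:4], "big") % modulus + 1)
    return challenges

x = derive_challenges([104001, 99872, 51230])
```

Because the challenges are derived deterministically from public ciphertexts, coordinator and participants compute the same x1, x2 without an extra interaction round.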
Generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter and the preset verification challenge parameters: specifically, a power operation is performed on the first verification random parameter with the preset verification challenge parameters to obtain the second verification random parameter, and the verification model parameter is generated based on the model training parameter and the preset verification challenge parameters. For example, assuming the first verification random parameter is r1, the model training parameter is m, and the preset verification challenge parameters are x1 and x2, the verification model parameter is n = m*x1 + m*x2 and the second verification random parameter is r1^x1 * r1^x2.
Step B30, sending the encryption model parameter, the verification model parameter and the second verification random parameter to a first device for the first device to perform zero knowledge verification to obtain a zero knowledge verification result;
in this embodiment, the encryption model parameter, the verification model parameter, and the second verification random parameter are sent to a first device for the first device to perform zero-knowledge verification to obtain a zero-knowledge verification result, and specifically, the encryption model parameter, the verification model parameter, and the second verification random parameter are sent to a first device associated with a second device for the first device to calculate a first zero-knowledge proof result and a second zero-knowledge proof result based on the encryption model parameter, the verification model parameter, and the second verification random parameter, and determine whether the encryption model parameter is a false encryption model parameter based on the first zero-knowledge proof result and the second zero-knowledge proof result to obtain a determination result, and record the determination result in the zero-knowledge verification result, and the zero knowledge verification result comprises a determination result corresponding to each second device.
And step B40, receiving the aggregation parameters fed back by the first equipment based on the zero knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameters until the local training model reaches the preset training end condition.
In this embodiment, the aggregation parameter fed back by the first device based on the zero-knowledge verification result and the encryption model parameters is received, and the local training model corresponding to the model training parameters is updated based on the aggregation parameter until the local training model reaches the preset training end condition. Specifically, after the first device obtains the zero-knowledge proof result, it removes false encryption model parameters from the encryption model parameters sent by each second device based on the zero-knowledge proof result to obtain the trusted model parameters, performs aggregation processing on the trusted model parameters, where the aggregation processing includes summing, weighting and averaging, to obtain the aggregation parameter, and feeds the aggregation parameter back to each second device. Further, after the second device receives the aggregation parameter, it decrypts the aggregation parameter based on the preset private key corresponding to the preset public key to obtain a decrypted aggregation parameter, updates the local training model based on the decrypted aggregation parameter, and uses the updated local training model as the initial model of the next round of federation until the local training model reaches the preset training end condition, where the training end condition includes reaching the maximum number of iterations, convergence of the loss function, and the like.
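The participant-side loop of step B40 can be sketched as follows. Encryption, aggregation and training are abstracted into caller-supplied functions, and the convergence rule, rates and names are illustrative assumptions rather than the specification's concrete method:

```python
def run_federation_rounds(local_param, train_steps, recv_aggregate, decrypt,
                          max_rounds=50, tol=1e-6):
    """Participant loop: train to the iteration threshold, exchange with the
    coordinator, decrypt the aggregation parameter, and update the local
    model until the preset training end condition is met."""
    for round_no in range(max_rounds):
        trained = train_steps(local_param)           # local iterations this round
        new_param = decrypt(recv_aggregate(trained)) # coordinator round-trip
        if abs(new_param - local_param) < tol:       # end condition: convergence
            return new_param, round_no + 1
        local_param = new_param                      # initial model of next round
    return local_param, max_rounds                   # end condition: max rounds

# Toy run: training pulls the scalar parameter toward 1.0; the
# encryption/aggregation round-trip is elided to identity functions.
final, rounds = run_federation_rounds(
    local_param=0.0,
    train_steps=lambda w: w + 0.5 * (1.0 - w),
    recv_aggregate=lambda w: w,
    decrypt=lambda c: c,
)
```

The two return paths correspond to the two end conditions named in the text: convergence of the parameter (standing in for loss convergence) and the maximum number of iterations.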
The present embodiment obtains the model training parameter and the first verification random parameter, and based on the first verification random parameter and the preset public key, encrypting the model training parameters to obtain encrypted model parameters, generating verification model parameters and second verification random parameters based on the first verification random parameters, the model training parameters and preset verification challenge parameters, further sending the encryption model parameter, the verification model parameter and the second verification random parameter to a first device for the first device to perform zero knowledge verification to obtain a zero knowledge verification result, further receiving an aggregation parameter fed back by the first device based on the zero knowledge verification result and the encryption model parameter, and based on the aggregation parameter, and updating the local training model corresponding to the model training parameters until the local training model reaches a preset training ending condition. 
That is, this embodiment provides a federated learning modeling method based on zero-knowledge proof: when the model training parameter is encrypted into the encryption model parameter, the verification model parameter and the second verification random parameter are generated at the same time, and the encryption model parameter, the verification model parameter and the second verification random parameter are sent to the first device, so that the first device performs zero-knowledge verification to obtain a zero-knowledge verification result and thereby determines and eliminates the false encryption model parameters among the second devices. The aggregation parameter received by the second device is therefore aggregated by the first device based on trusted encryption model parameters only, and the local training model is updated based on this aggregation parameter to complete federated learning modeling. This avoids updating the local training model based on an aggregation parameter aggregated from encryption model parameters mixed with false encryption model parameters, a situation in which the local training model struggles to reach the preset training end condition and its accuracy is low, thereby improving the efficiency and accuracy of federated learning modeling and solving the technical problems of low efficiency and poor accuracy of federated learning modeling.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the federal learning modeling apparatus may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the federal learning modeling apparatus may further include a user interface, a network interface, a camera, RF (Radio Frequency) circuits, a sensor, an audio circuit, a WiFi module, and the like. The user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and may optionally comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the federated learning modeling apparatus architecture shown in FIG. 4 does not constitute a limitation of the federated learning modeling apparatus, which may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in fig. 4, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, and a federal learning modeling method program. The operating system is a program that manages and controls the hardware and software resources of the federal learning modeling device, and supports the operation of the federal learning modeling method program as well as other software and/or programs. The network communication module is used for realizing communication among components in the memory 1005 and communication with other hardware and software in the federal learning modeling system.
In the federal learning modeling apparatus shown in fig. 4, the processor 1001 is configured to execute the program of the federal learning modeling method stored in the memory 1005, and implement the steps of any one of the above-mentioned federal learning modeling methods.
The specific implementation of the federal learning modeling device of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
The embodiment of the present application further provides a federal learning modeling device, which is applied to the first device, and the federal learning modeling device includes:
the receiving module is used for receiving the encryption model parameters sent by each second device and the verification parameters corresponding to the encryption model parameters;
the zero knowledge verification module is used for respectively performing zero knowledge verification on each encryption model parameter based on each verification parameter so as to determine a false encryption model parameter in each encryption model parameter and obtain a zero knowledge verification result;
and the coordination module is used for coordinating each second device to carry out federated learning modeling based on the zero knowledge verification result and each encryption model parameter.
Optionally, the coordination module comprises:
the eliminating submodule is used for eliminating the false encryption model parameters from the encryption model parameters based on the zero-knowledge verification result to obtain the credible model parameters;
and the aggregation sub-module is used for aggregating the trusted model parameters to obtain aggregation parameters, and feeding the aggregation parameters back to each second device, so that each second device updates its local training model until the local training model reaches the preset training end condition.
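If the "preset public key" belongs to an additively homomorphic scheme, the aggregation sub-module can combine trusted ciphertexts without seeing any plaintext. The patent does not name a scheme; the sketch below uses a toy Paillier instance (tiny primes, illustration only), where multiplying ciphertexts adds the underlying model parameters:

```python
import math
import random

def paillier_keygen(p=17, q=19):
    # toy primes for illustration; real deployments use >= 2048-bit moduli
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)        # valid because we fix g = n + 1
    return (n, n + 1), (lam, mu)

def enc(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def dec(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n

def aggregate(pub, trusted_ciphertexts):
    """Homomorphic sum: the product of the trusted ciphertexts
    decrypts to the sum of the trusted model parameters."""
    n, _ = pub
    acc = 1
    for c in trusted_ciphertexts:
        acc = acc * c % (n * n)
    return acc
```

Dividing the decrypted sum by the number of contributing second devices would give the averaged aggregation parameter fed back to them.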
Optionally, the zero knowledge verification module comprises:
the calculation submodule is used for respectively calculating a first zero knowledge verification result and a second zero knowledge verification result corresponding to each verification parameter;
and the zero knowledge verification submodule is used for respectively verifying, based on each first zero knowledge verification result and each second zero knowledge verification result, whether each encryption model parameter is a false encryption model parameter, so as to obtain a zero knowledge verification result.
Optionally, the computation submodule includes:
the validity verification unit is used for verifying the validity of each encryption model parameter based on a preset verification challenge parameter to obtain each first zero knowledge verification result;
and the encryption unit is used for encrypting each verification model parameter based on a preset coordinator public key and each verification random parameter to obtain each second zero knowledge verification result.
Optionally, the validity verifying unit includes:
the first power operation subunit is configured to perform power operation on the first verification challenge parameter and each encryption model parameter, respectively, to obtain a first power operation result corresponding to each encryption model parameter;
a second exponentiation subunit, configured to perform exponentiation operation on the second verification challenge parameter and each encryption model parameter, respectively, to obtain a second exponentiation operation result corresponding to each encryption model parameter;
a generating subunit, configured to generate each first zero knowledge verification result based on each first power operation result and each second power operation result.
Optionally, the zero knowledge verification sub-module includes:
a comparison unit, configured to compare the first zero knowledge verification result and the second zero knowledge verification result corresponding to each encryption model parameter;
a first determining unit, configured to determine that an encryption model parameter is a false encryption model parameter if its corresponding first zero knowledge verification result and second zero knowledge verification result are inconsistent;
a second determining unit, configured to determine that an encryption model parameter is not a false encryption model parameter if its corresponding first zero knowledge verification result and second zero knowledge verification result are consistent.
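Put together, the comparison and determining units reduce to a partition over the submitted parameters. The helper name and tuple layout below are assumptions for illustration, not from the patent:

```python
def filter_trusted(submissions):
    """Split submissions into trusted and false encryption model parameters.

    Each submission is (device_id, ciphertext, first_result, second_result);
    a ciphertext is trusted exactly when its two zero knowledge results match.
    """
    trusted, false_params = [], []
    for device_id, ciphertext, first, second in submissions:
        bucket = trusted if first == second else false_params
        bucket.append((device_id, ciphertext))
    return trusted, false_params
```

Only the `trusted` list is handed on to the aggregation sub-module.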
The specific implementation of the federal learning modeling apparatus of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
In order to achieve the above object, this embodiment further provides a federal learning modeling apparatus, where the federal learning modeling apparatus is applied to a second device, and the federal learning modeling apparatus includes:
the encryption module is used for acquiring a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to acquire an encrypted model parameter;
the generation module is used for generating verification model parameters and second verification random parameters based on the first verification random parameters, the model training parameters and preset verification challenge parameters;
the sending module is used for sending the encryption model parameter, the verification model parameter and the second verification random parameter to first equipment so as to enable the first equipment to carry out zero knowledge verification and obtain a zero knowledge verification result;
and the model updating module is used for receiving the aggregation parameters fed back by the first equipment based on the zero knowledge verification result and the encryption model parameters, and updating the local training model corresponding to the model training parameters based on the aggregation parameters until the local training model reaches a preset training ending condition.
Optionally, the encryption module includes:
the obtaining submodule is used for carrying out iterative training on a local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold value, and obtaining the current model parameters of the local training model;
and the generation submodule is used for acquiring the previous model parameters of the local training model and generating the auxiliary model parameters based on the previous model parameters.
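On the second device, the obtaining and generation submodules amount to iterating local training up to the preset threshold while keeping the preceding parameter around as the auxiliary parameter. Below is a toy scalar version; the gradient function, learning rate, and the mapping of "previous parameter" to "auxiliary parameter" are illustrative assumptions:

```python
def train_round(param, grad, iter_threshold=5, lr=0.1):
    """Run gradient steps until the preset iteration threshold and return
    (current_model_parameter, previous_model_parameter)."""
    prev = param
    for _ in range(iter_threshold):
        prev, param = param, param - lr * grad(param)
    return param, prev
```

For example, with grad(w) = 2w (minimizing w^2) and a starting parameter of 1.0, each step shrinks the parameter by a factor of 0.8.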
The specific implementation of the federal learning modeling apparatus of the application is basically the same as that of each embodiment of the federal learning modeling method, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (8)

1. A federated learning modeling method, wherein the federated learning modeling method is applied to a first device, and the federated learning modeling method comprises:
receiving encryption model parameters sent by each second device and verification parameters corresponding to the encryption model parameters;
performing zero knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine, by verifying whether each of the encryption model parameters is a false model parameter, false encryption model parameters among the encryption model parameters and obtain a zero knowledge verification result; and
coordinating each of the second devices to perform federated learning modeling based on the zero knowledge verification result and each of the encryption model parameters;
wherein the step of coordinating each of the second devices to perform federated learning modeling based on the zero knowledge verification result and each of the encryption model parameters comprises:
eliminating, based on the zero knowledge verification result, the false encryption model parameters from the encryption model parameters to obtain trusted model parameters; and
aggregating the trusted model parameters to obtain an aggregation parameter, and feeding the aggregation parameter back to each of the second devices, so that each of the second devices updates its respective local training model until the local training model reaches a preset training end condition.
2. The federated learning modeling method of claim 1, wherein the verification parameters comprise verification model parameters and verification random parameters, and the step of performing zero knowledge verification on each of the encryption model parameters based on each of the verification parameters, so as to determine false encryption model parameters among the encryption model parameters and obtain a zero knowledge verification result, comprises:
verifying the validity of each of the encryption model parameters based on preset verification challenge parameters to obtain first zero knowledge verification results;
encrypting each of the verification model parameters based on a preset coordinator public key and each of the verification random parameters to obtain second zero knowledge verification results; and
verifying, based on each of the first zero knowledge verification results and each of the second zero knowledge verification results, whether each of the encryption model parameters is a false encryption model parameter, to obtain the zero knowledge verification result.
3. The federated learning modeling method of claim 2, wherein the preset verification challenge parameters comprise a first verification challenge parameter and a second verification challenge parameter, and the step of verifying the validity of each of the encryption model parameters based on the preset verification challenge parameters to obtain the first zero knowledge verification results comprises:
performing an exponentiation operation on the first verification challenge parameter and each of the encryption model parameters respectively, to obtain a first exponentiation result corresponding to each of the encryption model parameters;
performing an exponentiation operation on the second verification challenge parameter and each of the encryption model parameters respectively, to obtain a second exponentiation result corresponding to each of the encryption model parameters; and
generating each of the first zero knowledge verification results based on each of the first exponentiation results and each of the second exponentiation results.
4. The federated learning modeling method of claim 2, wherein the step of verifying, based on each of the first zero knowledge verification results and each of the second zero knowledge verification results, whether each of the encryption model parameters is a false encryption model parameter comprises:
comparing the first zero knowledge verification result and the second zero knowledge verification result corresponding to each of the encryption model parameters;
if the first zero knowledge verification result and the second zero knowledge verification result corresponding to an encryption model parameter are inconsistent, determining that the encryption model parameter is a false encryption model parameter; and
if the first zero knowledge verification result and the second zero knowledge verification result corresponding to an encryption model parameter are consistent, determining that the encryption model parameter is not a false encryption model parameter.
5. A federated learning modeling method, wherein the federated learning modeling method is applied to a second device, and the federated learning modeling method comprises:
obtaining a model training parameter and a first verification random parameter, and encrypting the model training parameter based on the first verification random parameter and a preset public key to obtain an encryption model parameter;
generating a verification model parameter and a second verification random parameter based on the first verification random parameter, the model training parameter and a preset verification challenge parameter;
sending the encryption model parameter, the verification model parameter and the second verification random parameter to a first device, so that the first device performs zero knowledge verification to obtain a zero knowledge verification result; and
receiving an aggregation parameter fed back by the first device based on the zero knowledge verification result and the encryption model parameter, and updating a local training model corresponding to the model training parameter based on the aggregation parameter until the local training model reaches a preset training end condition.
6. The federated learning modeling method of claim 5, wherein the model training parameters comprise a current model parameter and an auxiliary model parameter, and the step of obtaining the model training parameters comprises:
iteratively training the local training model corresponding to the model training parameters until the local training model reaches a preset iteration threshold, and obtaining the current model parameter of the local training model; and
obtaining a previous model parameter of the local training model, and generating the auxiliary model parameter based on the previous model parameter.
7. A federated learning modeling device, comprising a memory, a processor, and a program stored on the memory for implementing the federated learning modeling method, wherein:
the memory is configured to store the program for implementing the federated learning modeling method; and
the processor is configured to execute the program for implementing the federated learning modeling method, so as to implement the steps of the federated learning modeling method of any one of claims 1 to 4 or 5 to 6.
8. A readable storage medium, wherein a program for implementing the federated learning modeling method is stored on the readable storage medium, and the program is executed by a processor to implement the steps of the federated learning modeling method of any one of claims 1 to 4 or 5 to 6.
CN202010445868.8A | 2020-05-22 | 2020-05-22 | Federated learning modeling method, device and readable storage medium | Active | CN111598254B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202010445868.8A (CN111598254B) | 2020-05-22 | 2020-05-22 | Federated learning modeling method, device and readable storage medium
PCT/CN2020/135032 (WO2021232754A1) | 2020-05-22 | 2020-12-09 | Federated learning modeling method and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010445868.8A (CN111598254B) | 2020-05-22 | 2020-05-22 | Federated learning modeling method, device and readable storage medium

Publications (2)

Publication Number | Publication Date
CN111598254A (en) | 2020-08-28
CN111598254B (en) | 2021-10-08

Family

ID=72189770

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010445868.8A (Active, CN111598254B) | 2020-05-22 | 2020-05-22 | Federated learning modeling method, device and readable storage medium

Country Status (2)

Country | Link
CN (1) | CN111598254B (en)
WO (1) | WO2021232754A1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111598254B (en)* | 2020-05-22 | 2021-10-08 | 深圳前海微众银行股份有限公司 | Federated learning modeling method, device and readable storage medium
CN112132277A (en)* | 2020-09-21 | 2020-12-25 | 平安科技(深圳)有限公司 | Federal learning model training method and device, terminal equipment and storage medium
CN112329010B (en)* | 2020-10-16 | 2025-01-28 | 深圳前海微众银行股份有限公司 | Adaptive data processing method, device, equipment and storage medium based on federated learning
CN112381000B (en)* | 2020-11-16 | 2024-08-27 | 深圳前海微众银行股份有限公司 | Face recognition method, device, equipment and storage medium based on federal learning
CN112434818B (en)* | 2020-11-19 | 2023-09-26 | 脸萌有限公司 | Model construction method, device, medium and electronic equipment
CN112446025B (en)* | 2020-11-23 | 2024-07-26 | 平安科技(深圳)有限公司 | Federal learning defense method, federal learning defense device, electronic equipment and storage medium
CN112434620B (en)* | 2020-11-26 | 2024-03-01 | 新奥新智科技有限公司 | Scene text recognition method, device, equipment and computer readable medium
CN112434619B (en)* | 2020-11-26 | 2024-03-26 | 新奥新智科技有限公司 | Case information extraction method, apparatus, device and computer readable medium
CN112632636B (en)* | 2020-12-23 | 2024-06-04 | 深圳前海微众银行股份有限公司 | Ciphertext data comparison result proving and verifying method and device
CN114764707B (en)* | 2021-01-04 | 2025-04-04 | 中国移动通信有限公司研究院 | Federated learning model training method and system
CN112860800A (en)* | 2021-02-22 | 2021-05-28 | 深圳市星网储区块链有限公司 | Trusted network application method and device based on block chain and federal learning
CN113111124B (en)* | 2021-03-24 | 2021-11-26 | 广州大学 | Block chain-based federal learning data auditing system and method
CN115150050B (en)* | 2021-03-30 | 2024-11-22 | 中国电信股份有限公司 | Security audit device, method, system and medium for federated learning
CN112949760B (en)* | 2021-03-30 | 2024-05-10 | 平安科技(深圳)有限公司 | Model precision control method, device and storage medium based on federal learning
CN113420886B (en)* | 2021-06-21 | 2024-05-10 | 平安科技(深圳)有限公司 | Training method, device, equipment and storage medium for longitudinal federal learning model
CN113435121B (en)* | 2021-06-30 | 2023-08-22 | 平安科技(深圳)有限公司 | Model training verification method, device, equipment and medium based on federal learning
CN113487043B (en)* | 2021-07-22 | 2025-02-07 | 深圳前海微众银行股份有限公司 | Federated learning modeling optimization method, device, medium and computer program product
CN113849805A (en)* | 2021-09-23 | 2021-12-28 | 国网山东省电力公司济宁供电公司 | Mobile user credibility authentication method and device, electronic equipment and storage medium
CN114139731B (en)* | 2021-12-03 | 2025-01-14 | 深圳前海微众银行股份有限公司 | Vertical federated learning modeling optimization method, equipment, medium and program product
CN114239857B (en)* | 2021-12-29 | 2022-11-22 | 湖南工商大学 | Data right determining method, device, equipment and medium based on federal learning
CN114399378A (en)* | 2022-01-10 | 2022-04-26 | 信雅达科技股份有限公司 | Construction method of bank intelligent outbound dialogue system based on horizontal federated learning
CN114800545B (en)* | 2022-01-18 | 2023-10-27 | 泉州华中科技大学智能制造研究院 | A robot control method based on federated learning
CN114466358B (en)* | 2022-01-30 | 2023-10-31 | 全球能源互联网研究院有限公司 | User identity continuous authentication method and device based on zero trust
CN114650128B (en)* | 2022-03-31 | 2024-10-11 | 启明星辰信息技术集团股份有限公司 | Aggregation verification method for federal learning
CN114897177B (en)* | 2022-04-06 | 2024-07-23 | 中国电信股份有限公司 | Data modeling method and device, electronic equipment and storage medium
CN114760023A (en)* | 2022-04-19 | 2022-07-15 | 光大科技有限公司 | Model training method and device based on federal learning and storage medium
CN115174046B (en)* | 2022-06-10 | 2024-04-30 | 湖北工业大学 | Federal learning bidirectional verifiable privacy protection method and system in vector space
CN115309800A (en)* | 2022-07-27 | 2022-11-08 | 光大科技有限公司 | Method and device for statistical processing of joint tasks
CN115277197B (en)* | 2022-07-27 | 2024-01-16 | 深圳前海微众银行股份有限公司 | Model ownership verification methods, electronic devices, media and program products
CN115292738B (en)* | 2022-10-08 | 2023-01-17 | 豪符密码检测技术(成都)有限责任公司 | Method for detecting security and correctness of federated learning model and data
CN116432746A (en)* | 2023-04-28 | 2023-07-14 | 山东浪潮科学研究院有限公司 | Federal modeling method, device, equipment and medium based on prompt learning
CN117575291B (en)* | 2024-01-15 | 2024-05-10 | 湖南科技大学 | Data collaborative management method for federated learning based on edge parameter entropy
CN117972802B (en)* | 2024-03-29 | 2024-06-18 | 苏州元脑智能科技有限公司 | Field programmable gate array chip, aggregation method, device, equipment and medium
CN118333192B (en)* | 2024-06-12 | 2024-10-01 | 杭州金智塔科技有限公司 | Federal modeling method for data element circulation

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10490066B2 (en)* | 2016-12-29 | 2019-11-26 | X Development Llc | Dynamic traffic control
CN109165515A (en)* | 2018-08-10 | 2019-01-08 | 深圳前海微众银行股份有限公司 | Model parameter acquisition methods, system and readable storage medium storing program for executing based on federation's study
US11606358B2 (en)* | 2018-09-18 | 2023-03-14 | Cyral Inc. | Tokenization and encryption of sensitive data
CN109635462A (en)* | 2018-12-17 | 2019-04-16 | 深圳前海微众银行股份有限公司 | Model parameter training method, device, equipment and medium based on federation's study
CN110263936B (en)* | 2019-06-14 | 2023-04-07 | 深圳前海微众银行股份有限公司 | Horizontal federal learning method, device, equipment and computer storage medium
CN110378487B (en)* | 2019-07-18 | 2021-02-26 | 深圳前海微众银行股份有限公司 | Model parameter verification method, device, equipment and medium in horizontal federated learning
CN110490335A (en)* | 2019-08-07 | 2019-11-22 | 深圳前海微众银行股份有限公司 | A kind of method and device calculating participant's contribution rate
KR20190103090A (en)* | 2019-08-15 | 2019-09-04 | 엘지전자 주식회사 | Method and apparatus for learning a model to generate poi data using federated learning
CN110443375B (en)* | 2019-08-16 | 2021-06-11 | 深圳前海微众银行股份有限公司 | Method and device for federated learning
CN110503207A (en)* | 2019-08-28 | 2019-11-26 | 深圳前海微众银行股份有限公司 | Federal learning credit management method, device, equipment and readable storage medium
CN110572253B (en)* | 2019-09-16 | 2023-03-24 | 济南大学 | Method and system for enhancing privacy of federated learning training data
CN110908893A (en)* | 2019-10-08 | 2020-03-24 | 深圳逻辑汇科技有限公司 | Sandbox Mechanism of Federated Learning
CN110874484A (en)* | 2019-10-16 | 2020-03-10 | 众安信息技术服务有限公司 | Data processing method and system based on neural network and federal learning
CN110797124B (en)* | 2019-10-30 | 2024-04-12 | 腾讯科技(深圳)有限公司 | Model multiterminal collaborative training method, medical risk prediction method and device
CN110955907B (en)* | 2019-12-13 | 2022-03-25 | 支付宝(杭州)信息技术有限公司 | Model training method based on federal learning
CN110991655B (en)* | 2019-12-17 | 2021-04-02 | 支付宝(杭州)信息技术有限公司 | Method and device for processing model data by combining multiple parties
CN110912713B (en)* | 2019-12-20 | 2023-06-23 | 支付宝(杭州)信息技术有限公司 | Method and device for processing model data by multi-party combination
CN111178524B (en)* | 2019-12-24 | 2024-06-14 | 中国平安人寿保险股份有限公司 | Data processing method, device, equipment and medium based on federal learning
CN111598254B (en)* | 2020-05-22 | 2021-10-08 | 深圳前海微众银行股份有限公司 | Federated learning modeling method, device and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Analyzing Federated Learning through an Adversarial Lens; Arjun Nitin Bhagoji et al.; arXiv; 2019-11-25; pp. 1-19 *
Design and Implementation of a Blockchain-Based Secure Voting System; Yan Chunhui et al.; Communication Technology; 2018-08-31; vol. 51, no. 8; pp. 1979-1989 *

Also Published As

Publication number | Publication date
CN111598254A (en) | 2020-08-28
WO2021232754A1 (en) | 2021-11-25

Similar Documents

Publication | Title
CN111598254B (en) | Federated learning modeling method, device and readable storage medium
US12238199B2 (en) | Secure multi-party computation method and apparatus, device, and storage medium
CN110087237B (en) | Privacy protection method and device based on data disturbance and related components
US11349648B2 (en) | Pre-calculation device, method, computer-readable recording medium, vector multiplication device, and method
CN109886417A (en) | Model parameter training method, device, equipment and medium based on federated learning
TW202006615A (en) | Model-based prediction method and device
US9860058B2 (en) | Secret computation system, arithmetic unit, secret computation method and program
CN112749392A (en) | Method and system for detecting abnormal nodes in federated learning
WO2021150238A1 (en) | Remote attestation
CN107342859A (en) | Anonymous authentication method and application thereof
CN111614679B (en) | Federated learning qualification recovery method, device and readable storage medium
US20100281264A1 (en) | Information processing apparatus, key update method, and program
CN114362958A (en) | Intelligent home data security storage auditing method and system based on block chain
JP2024057554A (en) | Method for providing oracle service of block chain network by using zero-knowledge proof and aggregator terminal using the same
WO2023055582A1 (en) | Round optimal oblivious transfers from isogenies
US10880278B1 (en) | Broadcasting in supersingular isogeny-based cryptosystems
CN1855815B (en) | Systems and methods for generation and validation of isogeny-based signatures
US20250038976A1 (en) | Lattice-based proxy signature method, apparatus and device, lattice-based proxy signature verification method, apparatus and device, and storage medium
Hu et al. | Privacy-preserving combinatorial auction without an auctioneer
Xie et al. | Accountable and secure threshold EdDSA signature and its applications
US11157612B2 (en) | Secret tampering detection system, secret tampering detection apparatus, secret tampering detection method, and program
CN112769766B (en) | Safe aggregation method and system for data of power edge internet of things based on federal learning
Prajapat et al. | A practical convertible quantum signature scheme with public verifiability into universal quantum designated verifier signature using self-certified public keys
CN117574412A (en) | Multiparty privacy exchange method and device and electronic equipment
US20210367779A1 (en) | Device and Method for Certifying Reliability of Public Key, and Program Therefor

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
