Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad invention. It should be further noted that, for convenience of description, only some structures, not all structures, relating to the embodiments of the present invention are shown in the drawings.
The term "federal learning" as used herein is an emerging artificial intelligence technology, and the design goal is to develop efficient machine learning among multiple parties or multiple computing nodes on the premise of ensuring data security, protecting data privacy and guaranteeing legal compliance.
The term "coordinator" as used herein is used to combine the model parameters or gradient parameters of each participant and send the results to each participant; in this embodiment, the coordinating party may be a server or an electronic device such as a computer.
The term "participant" is used herein to perform iterative computation on the model issued by the coordinator until the model converges, where the data of each participant is not shared; in this embodiment, the participating party may be a server, or may also be a computer, a tablet computer, a smart phone, or the like.
The term "Unique Identifier (uuid)" used herein may be an ASCII string in hexadecimal; illustratively, the characters may be hexadecimal ASCII strings generated after encrypting 0 to (k-1), respectively, where k is the number of participants in the federal learning system.
The term "local encryption training model" is used herein to encrypt the local training model after the overlapping of the mask for each of the participants.
For ease of understanding, the main inventive concepts of the embodiments of the present invention are briefly described.
In the prior art, the coordinator of a federated learning system encrypts a set of model parameters with a homomorphic encryption algorithm and sends the encrypted set to the participants. Based on the homomorphic encryption principle, each participant performs model training in the encrypted state using the encrypted model and its local training samples, and then sends the resulting encrypted local training model back to the coordinator for the model aggregation calculation of the next round of iterative training.
However, in the prior-art method, training the model in ciphertext at each participant is inefficient, and there is also a risk of numerical overflow.
In view of these problems, the inventor considered whether each participant could train the model without operating in ciphertext while still guaranteeing data privacy, so as to improve the model training efficiency of federated learning.
Based on this idea, the inventor creatively proposes that the coordinator of the federated learning system generate unique identification codes respectively corresponding to the participants in the system and issue each unique identification code to the matched participant, where the unique identification codes are used to instruct the participants to generate matched masks; receive the local encryption training model uploaded by each participant, where each local encryption training model is obtained by a participant encrypting its local training model after superimposing its mask; perform a fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model; and decrypt the complete encryption training model a single time to obtain the target training model. In this way, multiple parties train the model (without a ciphertext mode) while data security is ensured, and the model training efficiency of the federated learning system is greatly improved.
Example one
Fig. 1 is a flowchart of a federated learning based model training method in the first embodiment of the present invention. This embodiment is applicable to the case where a model is trained by the coordinator of a federated learning system, and the method may be executed by a federated learning based model training apparatus, which may be implemented in software and/or hardware and integrated in a computer device. Specifically, referring to fig. 1, the method includes the following steps:
Step 110, generating unique identification codes respectively corresponding to the participants in the federated learning system, and issuing each unique identification code to the matched participant, where the unique identification codes are used to instruct the participants to generate matched masks.
The federated learning system may include a coordinator and a plurality of participants (for example, 2, 4, or 10, which is not limited in this embodiment). In this embodiment, the coordinator may be a server or an electronic device such as a computer, and a participant may be a server, a computer, a tablet computer, a smart phone, or the like, which is likewise not limited in this embodiment.
In an optional implementation of this embodiment, the coordinator may generate a unique identification code corresponding to each participant in the federated learning system, where the unique identification code may be a hexadecimal ASCII string; illustratively, the strings may be the hexadecimal ASCII strings generated by separately encrypting the integers 0 to (k-1), where k is the number of participants in the federated learning system.
Further, each generated unique identification code may be issued to the matched participant; for example, unique identification code A may be issued to participant A matched therewith, and unique identification code B may be issued to the matched participant B, until all the unique identification codes have been issued. Each participant that receives a unique identification code may then generate a matching mask based on the received code.
It should be noted that the unique identification code in this embodiment may be used to instruct each participant to generate a matching mask, for example as a symbol identifier of the mask generated by each participant; meanwhile, conflict detection can be performed while the participants generate their masks.
Step 120, receiving the local encryption training model uploaded by each participant, where each local encryption training model is obtained by a participant encrypting its local training model after the matched mask has been superimposed on it.
In an optional implementation of this embodiment, the coordinator may receive the local encryption training model uploaded by each participant, where the local encryption training model is the training model obtained by a participant of the federated learning system through local training, superimposed with the participant's mask, and then encrypted with an encryption key.
For example, if participant A in the federated learning system obtains model a through training, participant A may first superimpose its matched mask on model a, encrypt the masked model a with a key stored at participant A to obtain local encryption training model a, and then upload local encryption training model a to the coordinator of the federated learning system.
Step 130, performing a fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model.
The preset mask cancellation algorithm may be an algorithm under which all the masks sum to 0, or another cancellation algorithm, which is not limited in this embodiment.
In an optional implementation of this embodiment, after receiving the local encryption training models uploaded by the participants, the coordinator may perform a fusion calculation on them according to the preset mask cancellation algorithm to obtain the fused complete encryption training model.
It should be noted that, in this embodiment, after receiving the local encryption training models uploaded by the participants, the coordinator does not decrypt the individual models; instead, it fuses them directly, and the masks cancel through the preset mask cancellation algorithm during fusion, so that the fused encryption training model contains no mask. A minimal numeric sketch of this cancellation is given below.
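The following is a minimal sketch of additive mask cancellation, written in Python under illustrative assumptions: the "models" are plain integers, the masks are hypothetical values constructed to sum to 0, and encryption is omitted here (Paillier-based sketches appear in the later examples).

```python
# Minimal sketch of additive mask cancellation with toy integer "models"
# and hypothetical masks constructed to sum to 0; encryption omitted.
import random

def masks_summing_to_zero(k):
    """Generate k hypothetical masks whose sum is exactly 0."""
    masks = [random.randint(-1000, 1000) for _ in range(k - 1)]
    masks.append(-sum(masks))  # the last mask forces the total to 0
    return masks

local_models = [12, 7, 23]  # toy local training results
masks = masks_summing_to_zero(len(local_models))
masked_uploads = [w + r for w, r in zip(local_models, masks)]

# The coordinator only ever adds the masked uploads; the masks cancel.
assert sum(masked_uploads) == sum(local_models)
print(sum(masked_uploads))  # 42, identical to the unmasked sum
```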
Step 140, decrypting the complete encryption training model a single time to obtain the target training model.
In an optional implementation of this embodiment, after performing the fusion calculation on the local encryption training models and obtaining the fused complete encryption training model, the coordinator may decrypt the complete encryption training model a single time, thereby obtaining the target training model.
Optionally, after obtaining the fused complete encryption training model, the coordinator may decrypt it a single time with a private key stored locally at the coordinator to obtain the target training model.
According to the scheme of this embodiment of the invention, the coordinator of the federated learning system generates unique identification codes respectively corresponding to the participants and issues them to the matched participants, where the unique identification codes are used to instruct the participants to generate matched masks; receives the local encryption training models uploaded by the participants, where each local encryption training model is obtained by a participant encrypting its local training model after superimposing its mask; performs a fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model; and decrypts the complete encryption training model a single time to obtain the target training model. This solves the problem of the low efficiency of each participant training the model in ciphertext, allows multiple parties to train the model (without a ciphertext mode) while data security is ensured, and greatly improves the model training efficiency of the federated learning system.
Example two
Fig. 2 is a flowchart of a federated learning based model training method in the second embodiment of the present invention, which further refines the above technical solutions; the technical solution of this embodiment may be combined with the alternatives in one or more of the above embodiments. As shown in fig. 2, the federated learning based model training method may include the following steps:
Step 210, obtaining a transmission key matched with each participant.
In an optional implementation of this embodiment, before generating the unique identification codes corresponding to the participants, the coordinator may first obtain the transmission key matched with each participant. It should be noted that, in this embodiment, each participant may encrypt the local model to be uploaded with its matched transmission key, and may also use it to decrypt the received target training model.
In an optional implementation of this embodiment, the coordinator obtaining the transmission key matched with each participant may include: generating a public-private key pair, sending the public key to each participant, and storing the private key locally; and receiving the encrypted transmission key uploaded by each participant, decrypting each encrypted transmission key with the private key to obtain the transmission key of each participant, and storing the transmission keys.
In a specific implementation, the coordinator may generate a public-private key pair (for example, a homomorphic encryption public-private key pair), send the public key to each participant, and store the private key locally at the coordinator. Each participant may encrypt its generated transmission key with the received public key and upload the encrypted transmission key to the coordinator. After receiving the encrypted transmission keys uploaded by the participants, the coordinator may decrypt each of them with the locally stored private key, thereby obtaining the transmission key of each participant, and store the transmission keys locally. A sketch of this exchange is given below.
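The following sketch illustrates one such transmission-key exchange, assuming the python-paillier (phe) and cryptography packages; encoding the symmetric key as an integer for Paillier is an illustrative choice, not something mandated by the embodiment.

```python
# Sketch of the transmission-key exchange under illustrative assumptions.
from phe import paillier                 # pip install phe
from cryptography.fernet import Fernet   # pip install cryptography

# Coordinator: generate the homomorphic public-private key pair.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Participant: generate a symmetric transmission key and encrypt it under
# the coordinator's public key (encoded as an integer for Paillier).
transmission_key = Fernet.generate_key()  # 44 url-safe base64 bytes
encrypted_key = public_key.encrypt(int.from_bytes(transmission_key, "big"))

# Coordinator: decrypt with the locally stored private key and store it.
recovered = private_key.decrypt(encrypted_key).to_bytes(44, "big")
assert recovered == transmission_key
```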
Step 220, generating unique identification codes respectively corresponding to the participants in the federated learning system, and issuing each unique identification code to the matched participant, where the unique identification codes are used to instruct the participants to generate matched masks.
Step 230, receiving the local encryption training model uploaded by each participant, where each local encryption training model is obtained by a participant encrypting its local training model after the matched mask has been superimposed on it.
Step 240, performing a fusion calculation on the local encryption training models according to the preset mask cancellation algorithm to obtain a fused complete encryption training model.
Under the preset mask cancellation algorithm, the masks sum to 0.
In an optional implementation of this embodiment, after receiving the local encryption training models uploaded by the participants, the coordinator may directly add them together to obtain the complete encryption training model. It can be understood that, in this embodiment, the preset mask cancellation algorithm makes the masks of the local encryption training models sum to 0, so directly adding the local encryption training models yields a complete encryption training model that contains no mask. A sketch of this homomorphic addition is given below.
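The following sketch illustrates the coordinator-side homomorphic addition, assuming python-paillier (phe), toy integer model values, and hypothetical masks chosen to sum to 0 (standing in for the DH-based masks described in Example three).

```python
# Sketch of ciphertext fusion: masks cancel without any intermediate decryption.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

local_models = [12, 7, 23]  # toy local training results W_i
masks = [5, -9, 4]          # hypothetical masks with sum 0

# Each participant encrypts its masked model under the shared public key.
ciphertexts = [public_key.encrypt(w + r) for w, r in zip(local_models, masks)]

# The coordinator adds the ciphertexts directly -- no per-participant decryption.
fused = sum(ciphertexts[1:], ciphertexts[0])

# A single decryption yields the mask-free fused result.
assert private_key.decrypt(fused) == sum(local_models)
```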
Step 250, decrypting the complete encryption training model a single time to obtain the target training model.
Step 260, encrypting the target training model with the transmission key of each participant, and issuing the encrypted target training model to each participant respectively, so that each participant updates its local model.
In an optional implementation of this embodiment, after decrypting the complete encryption training model a single time to obtain the target training model, the coordinator may further encrypt the target training model with the locally stored transmission key of each participant and send the encrypted target training model to the corresponding participant, so that the participants update their local training models.
In the scheme of this embodiment, the coordinator generates a public-private key pair, sends the public key to each participant, and stores the private key locally; receives the encrypted transmission key uploaded by each participant, and decrypts each encrypted transmission key with the private key to obtain and store the transmission key of each participant; and, after decrypting the complete encryption training model a single time to obtain the target training model, encrypts the target training model with the transmission key of each participant and issues the encrypted target training model to the corresponding participant to update its local model. This improves model training efficiency while keeping the locally stored training model of each participant up to date.
Example three
Fig. 3 is a flowchart of a federated learning based model training method according to the third embodiment of the present invention. This embodiment is applicable to the case where a model is trained by a participant of a federated learning system, and the method may be executed by a federated learning based model training apparatus, which may be implemented in software and/or hardware and integrated in a computer device. Specifically, referring to fig. 3, the method includes the following steps:
Step 310, receiving the unique identification code issued by the coordinator, and generating a mask matched with the unique identification code.
In an optional implementation of this embodiment, the participant may receive the unique identification code issued by the coordinator and, according to the received unique identification code, generate a mask matched with it.
In this embodiment, a DH key exchange algorithm and a secret sharing algorithm may be used to generate the mask matched with each participant; for example, the mask of participant i may be expressed as:

$$R_i = \sum_{j=1,\, j \neq i}^{k} a_{ij} \cdot \left( (g^{r_j})^{r_i} \bmod p \right)$$

where p and g are the public parameters generated by the coordinator through the DH algorithm, $r_i$ is the private key generated by participant i, k is the number of participants in the federated learning system, and $a_{ij} = 1$ when $uuid_j > uuid_i$ while $a_{ij} = -1$ when $uuid_j < uuid_i$, with i from 1 to k, j from 1 to k, and $i \neq j$. In this embodiment, the masks of all the participants cancel out when the coordinator performs the model fusion calculation, so the calculation result of the fusion model is not affected. A sketch of this construction follows.
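The following sketch illustrates the mask construction under illustrative assumptions: a small demo prime and generator stand in for proper DH parameters, and the uuids are the MD5 strings of step 513 described later; all names here are illustrative.

```python
# Sketch of the DH-based pairwise mask construction (toy parameters only).
import hashlib
import random

k = 3
p, g = 2_147_483_647, 5  # toy DH parameters (demonstration only)
uuids = [hashlib.md5(str(i).encode()).hexdigest() for i in range(k)]
priv = [random.randrange(2, p - 1) for _ in range(k)]  # r_i of each participant
pub = [pow(g, r, p) for r in priv]                     # g^{r_i} mod p

def mask(i):
    """R_i = sum over j != i of a_ij * ((g^{r_j})^{r_i} mod p)."""
    total = 0
    for j in range(k):
        if j == i:
            continue
        s_ij = pow(pub[j], priv[i], p)           # shared secret g^{r_i r_j} mod p
        a_ij = 1 if uuids[j] > uuids[i] else -1  # sign from the uuid ordering
        total += a_ij * s_ij
    return total

masks = [mask(i) for i in range(k)]
assert sum(masks) == 0  # each pair contributes +s_ij and -s_ij, so masks cancel
```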
Step 320, performing iterative training according to the initial training model and local data to obtain a model training result, superimposing the mask on the model training result, and encrypting it to obtain the local encryption training model.
In an optional implementation of this embodiment, the participant may perform iterative training according to the initial training model and the data stored locally at the participant to obtain a model training result; further, the mask matched with the participant may be superimposed on the model training result, which is then encrypted to obtain the local encryption training model.
For example, the local encryption training model of participant i may be obtained by encrypting the masked result

$$n_i \cdot W_i + R_i$$

where $n_i$ is the number of samples stored locally by the participant, $R_i$ is the mask matched with the participant, and $W_i$ is the model training result.
Step 330, uploading the local encryption training model to the coordinator, and receiving the target training model issued by the coordinator.
In an optional implementation of this embodiment, after generating the local encryption training model, the participant may upload it to the coordinator, then receive the target training model issued by the coordinator, and update its local model to the most recently received target training model.
In the scheme of this embodiment, the participant receives the unique identification code issued by the coordinator and generates a mask matched with it; performs iterative training according to the initial training model and local data to obtain a model training result, superimposes the mask on the model training result, and encrypts it to obtain the local encryption training model; and uploads the local encryption training model to the coordinator and receives the target training model issued by the coordinator. No participant needs to train the model in ciphertext, the target training model can be determined quickly, and the model training efficiency of the federated learning system is greatly improved.
Example four
Fig. 4 is a flowchart of a federated learning based model training method in the fourth embodiment of the present invention, which further refines the above technical solutions; the technical solution of this embodiment may be combined with the alternatives in one or more of the above embodiments. As shown in fig. 4, the federated learning based model training method may include the following steps:
Step 410, receiving the public key issued by the coordinator, generating a transmission key, encrypting the transmission key with the public key, and uploading the encrypted transmission key to the coordinator.
In an optional implementation of this embodiment, each participant may receive the public key issued by the coordinator and then generate a transmission key; for example, the participant may generate a random string (for example, 1234, 12ab, or abc@123, which is not limited in this embodiment) and use it as the transmission key.
Further, the transmission key may be encrypted with the received public key, and the encrypted transmission key uploaded to the coordinator.
Step 420, receiving the unique identification code issued by the coordinator, and generating a mask matched with the unique identification code.
Step 430, performing iterative training according to the initial training model and local data to obtain a model training result, superimposing the mask on the model training result, and encrypting it to obtain the local encryption training model.
Step 440, uploading the local encryption training model to the coordinator, and receiving the target training model issued by the coordinator.
Step 450, decrypting the target training model with the transmission key, updating the local model, and continuing the iterative training operation according to the target training model and the local data.
In an optional implementation of this embodiment, each participant may decrypt the target training model with its matched transmission key, update its local model with the decrypted model, and continue the iterative training operation according to the target training model and the local data until the training model converges.
In the scheme of this embodiment, the participant receives the public key issued by the coordinator and generates a transmission key; encrypts the transmission key with the public key and uploads the encrypted transmission key to the coordinator; and, after receiving the target training model fed back by the coordinator, decrypts it with the transmission key, updates the local model, and continues the iterative training operation according to the target training model and the local data. No participant needs to train the model in ciphertext, the target training model can be determined quickly, and the model training efficiency of the federated learning system is greatly improved.
To help those skilled in the art better understand the federated learning based model training method of this embodiment, a specific example is described below. Fig. 5 is a timing chart of a federated learning based model training method in the fourth embodiment of the present invention; it should be noted that, for convenience of description, fig. 5 shows only one participant of the federated learning system, which does not limit the embodiments of the present invention. Specifically, referring to fig. 5, the method includes the following steps:
and step 510, generating a public and private key pair by utilizing a Paillier homomorphic encryption algorithm.
And 511, sending the Paillier public key to each participant, and storing the Paillier private key in the local.
And step 520, receiving and storing the Paillier public key sent by the coordinator, and generating a transmission key of the participant.
Wherein, the transmission key can be expressed as: ciKey, where i is the participant number.
And 521, encrypting the transmission key by using the Paillier public key and then sending the encrypted transmission key to the coordinator.
And step 512, decrypting by using the Paillier private key to obtain the transmission keys of all the participants, and storing.
And 513, generating a unique uuid for each participant, and respectively sending the unique uuid to the corresponding participants.
Wherein, the MD5 algorithm is utilized, for example: generating 16-system ASCII character strings after encrypting 0 to (k-1); where k is the number of participants.
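A minimal sketch of this uuid generation, assuming Python's standard hashlib; hashing the decimal string of each integer is one plausible reading of "encrypting 0 to (k-1)".

```python
# Sketch of uuid generation via MD5, as in step 513; each hash is a
# 32-character hexadecimal ASCII string, one per participant.
import hashlib

k = 3  # number of participants (illustrative)
uuids = [hashlib.md5(str(i).encode()).hexdigest() for i in range(k)]
for i, u in enumerate(uuids):
    print(f"participant {i}: {u}")
# participant 0: cfcd208495d565ef66e7dff9f98764da, and so on
```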
Step 522, the participant generates its matched mask using the DH key exchange algorithm and the secret sharing algorithm.
Step 523, the participant completes one round of iterative training using the initial model and local sample data to obtain a local training model result.
The local training model result is denoted $W_i$.
Step 524, the participant adds the mask to the locally trained model and combines it with the number of training samples to generate the final model result.
For example, assume that the model parameters of a participant are $W_i$, the mask is $R_i$, and the number of samples for this iteration is $n_i$; then the model result that the participant finally sends to the coordinator is

$$n_i \cdot W_i + R_i$$
and step 525, encrypting the final model result by using the Paillier public key and then sending the encrypted model result to the coordinator.
And 514, after receiving the encrypted models and the sample numbers sent by the participants, performing model fusion calculation to obtain a fused model ciphertext.
The embodiment of the present invention uses the FedAvg federated averaging algorithm to implement the model fusion calculation, with the calculation formula:

$$\bar{W} = \frac{W_s}{\sum_{i=1}^{k} n_i}, \qquad W_s = \sum_{i=1}^{k} \left( n_i \cdot W_i + R_i \right)$$

where $\bar{W}$ is the final result of the model fusion and $W_s$ is the sum of all the participant models. According to the masking principle, the sum of the masks of all participants is 0, i.e. $\sum_{i=1}^{k} R_i = 0$, so that $W_s = \sum_{i=1}^{k} n_i \cdot W_i$.
Step 515, the coordinator decrypts the fused model with the locally stored private key to obtain the decrypted fusion model.
The decrypted fusion model is $\bar{W} = \frac{\sum_{i=1}^{k} n_i \cdot W_i}{\sum_{i=1}^{k} n_i}$.
In this embodiment of the invention, the coordinator performs the model fusion calculation first and then decrypts, so the ciphertext needs to be decrypted only once, which reduces the number of decryption operations.
Step 516, the coordinator encrypts the decrypted model with the transmission key of each participant and sends the encrypted model to the corresponding participant.
Step 526, after receiving the encrypted fusion model, the participant decrypts it with the locally generated transmission key and updates the local model.
The next iteration then continues until the training model converges. An end-to-end sketch of one such round is given below.
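The following end-to-end sketch ties one round of the fig. 5 flow together in Python, under the same illustrative assumptions as the earlier sketches (python-paillier, toy DH parameters, toy integer models); the function and variable names are illustrative, not taken from the embodiment.

```python
# End-to-end sketch of one round: mask, encrypt, fuse, decrypt once, average.
import hashlib
import random
from phe import paillier

k = 3
p, g = 2_147_483_647, 5  # toy DH parameters (demonstration only)

# Steps 510-513: Paillier key pair, uuids, and DH public values.
pub_key, priv_key = paillier.generate_paillier_keypair()
uuids = [hashlib.md5(str(i).encode()).hexdigest() for i in range(k)]
dh_priv = [random.randrange(2, p - 1) for _ in range(k)]
dh_pub = [pow(g, r, p) for r in dh_priv]

def mask(i):
    # Step 522: R_i = sum over j != i of a_ij * ((g^{r_j})^{r_i} mod p)
    return sum((1 if uuids[j] > uuids[i] else -1) * pow(dh_pub[j], dh_priv[i], p)
               for j in range(k) if j != i)

# Steps 523-525: each participant trains, masks, and encrypts n_i*W_i + R_i.
W = [12, 7, 23]     # toy local training results
n = [100, 50, 150]  # local sample counts
uploads = [pub_key.encrypt(n[i] * W[i] + mask(i)) for i in range(k)]

# Step 514: FedAvg fusion in ciphertext -- the masks cancel in the sum.
fused_cipher = sum(uploads[1:], uploads[0])

# Step 515: a single decryption, then division by the total sample count.
W_bar = priv_key.decrypt(fused_cipher) / sum(n)
assert W_bar == sum(n[i] * W[i] for i in range(k)) / sum(n)
print(W_bar)  # the weighted FedAvg result, here 16.666...
```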
For a better understanding of the embodiments of the present invention, fig. 6 is a flowchart of mask generation in the fourth embodiment of the present invention; referring to fig. 6, the mask generation process mainly includes the following steps:
Step 610, the coordinator generates the uuid of each participant using the MD5 algorithm.
Step 611, the coordinator issues each uuid to the corresponding participant.
Step 612, the coordinator generates the public parameters p and g using the DH algorithm.
Step 613, the coordinator issues p and g to the participants.
Step 620, each participant generates a private key r.
Step 621, each participant uploads $g^{r} \bmod p$ to the coordinator.
Step 614, the coordinator issues the pairs $\{uuid,\ g^{r} \bmod p\}$ of the participants to each participant.
Step 622, each participant generates its mask term $R_i$ according to the formula given above.
In the embodiments of the present invention, a mask is added on top of the local model data of each participant. The masked local model result effectively prevents the model information from being obtained from the network during transmission, and at the same time the coordinator cannot analyze the local model of any participant, so the private data of the participants is protected. In addition, the masks cancel out when the coordinator performs the model fusion calculation, so the final result of the fusion calculation is not affected.
The embodiments of the present invention use the Paillier homomorphic encryption algorithm, so that the coordinator can perform the fusion calculation on the participants' encrypted models based on the homomorphic principle, ensuring the security of model data processing. In addition, the coordinator fuses the encrypted local models of the participants first and only then decrypts the fused model, which guarantees that the coordinator decrypts the ciphertext only once, reduces the number of decryption operations, and improves the efficiency of the federated learning system.
The embodiments of the present invention transmit the fusion model with a more efficient symmetric encryption algorithm, further improving the performance of the federated learning system while keeping the fusion model secure on the network. In addition, the participants perform iterative training on the decrypted fusion model, which is more efficient than training on an encrypted fusion model and avoids the risk of overflow. A sketch of this symmetric transmission step is given below.
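The following sketch illustrates the symmetric transmission of steps 516 and 526, assuming Fernet (AES-based) from the cryptography package as the symmetric scheme; the embodiment does not mandate a particular symmetric algorithm, so this is one possible instantiation.

```python
# Sketch of the symmetric transmission of the fused model (steps 516 and 526).
import json
from cryptography.fernet import Fernet

transmission_key = Fernet.generate_key()  # Key_i agreed in steps 520-512
cipher = Fernet(transmission_key)

# Coordinator side: serialize the decrypted fusion model and encrypt it.
fused_model = {"weights": [16.67, -3.2, 0.41]}  # toy fused parameters
token = cipher.encrypt(json.dumps(fused_model).encode())

# Participant side: decrypt with the same transmission key and update locally.
local_model = json.loads(cipher.decrypt(token).decode())
assert local_model == fused_model
```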
Example five
Fig. 7 is a schematic structural diagram of a federated learning based model training apparatus in the fifth embodiment of the present invention; the apparatus (a federated learning based model training apparatus 700) can execute the federated learning based model training method of the foregoing embodiments. Referring to fig. 7, the apparatus includes: a unique identification code generation module 710, a local encryption training model receiving module 720, a fusion calculation module 730, and a target training model generation module 740.
The unique identification code generation module 710 is configured to generate unique identification codes respectively corresponding to the participants in the federated learning system and issue each unique identification code to the matched participant, where the unique identification codes are used to instruct the participants to generate matched masks;
the local encryption training model receiving module 720 is configured to receive the local encryption training model uploaded by each participant, where each local encryption training model is obtained by a participant encrypting its local training model after the matched mask has been superimposed on it;
the fusion calculation module 730 is configured to perform a fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model;
and the target training model generation module 740 is configured to decrypt the complete encryption training model a single time to obtain the target training model.
In the scheme of this embodiment, the unique identification code generation module generates the unique identification codes corresponding to the participants in the federated learning system and issues them to the matched participants, where the unique identification codes are used to instruct the participants to generate matched masks; the local encryption training model receiving module receives the local encryption training models uploaded by the participants; the fusion calculation module performs a fusion calculation on the local encryption training models according to the preset mask cancellation algorithm to obtain a fused complete encryption training model; and the target training model generation module decrypts the complete encryption training model a single time to obtain the target training model. This solves the problem of the low efficiency of each participant training the model in ciphertext, allows multiple parties to train the model (without a ciphertext mode) while data security is ensured, and greatly improves the model training efficiency of the federated learning system.
Optionally, the federated learning based model training apparatus 700 further includes a transmission key obtaining module, configured to obtain the transmission key matched with each participant; in this embodiment, the transmission key obtaining module is specifically configured to generate a public-private key pair, send the public key to each participant, and store the private key locally; and to receive the encrypted transmission key uploaded by each participant, decrypt each encrypted transmission key with the private key to obtain the transmission key of each participant, and store the transmission keys.
Optionally, the federated learning based model training apparatus 700 further includes a local model updating module, configured to encrypt the target training model with the transmission key of each participant and issue the encrypted target training model to each participant respectively, so that each participant updates its local model.
Optionally, the fusion calculation module 730 is specifically configured to directly add the local encryption training models together to obtain the complete encryption training model,
where the preset mask cancellation algorithm makes the masks sum to 0 in the addition.
Optionally, the target training model generation module 740 is specifically configured to decrypt the fused complete encryption training model with the locally stored private key to obtain the target training model.
The federated learning based model training apparatus provided by this embodiment of the present invention can execute the federated learning based model training method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method.
Example six
Fig. 8 is a schematic structural diagram of a federated learning based model training apparatus according to the sixth embodiment of the present invention; the apparatus (a federated learning based model training apparatus 800) can execute the federated learning based model training method of the foregoing embodiments. Referring to fig. 8, the apparatus includes: a mask generation module 810, a local encryption training model generation module 820, and a local encryption training model upload module 830.
The mask generation module 810 is configured to receive the unique identification code issued by the coordinator and generate a mask matched with the unique identification code;
the local encryption training model generation module 820 is configured to perform iterative training according to the initial training model and local data to obtain a model training result, superimpose the mask on the model training result, and encrypt it to obtain the local encryption training model;
and the local encryption training model upload module 830 is configured to upload the local encryption training model to the coordinator and receive the target training model issued by the coordinator.
In the scheme of this embodiment, the mask generation module receives the unique identification code issued by the coordinator and generates a mask matched with it; the local encryption training model generation module performs iterative training according to the initial training model and local data to obtain a model training result, superimposes the mask on the model training result, and encrypts it to obtain the local encryption training model; and the local encryption training model upload module uploads the local encryption training model to the coordinator and receives the target training model issued by the coordinator. No participant needs to train the model in ciphertext, the target training model can be determined quickly, and the model training efficiency of the federated learning system is greatly improved.
Optionally, the federated learning based model training apparatus 800 further includes a transmission key generation module, configured to receive the public key issued by the coordinator, generate a transmission key, encrypt the transmission key with the public key, and upload the encrypted transmission key to the coordinator.
Optionally, the federated learning based model training apparatus 800 further includes a local model updating module, configured to decrypt the target training model with the transmission key, update the local model, and continue the iterative training operation according to the target training model and the local data.
The federated learning based model training apparatus provided by this embodiment of the present invention can execute the federated learning based model training method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method.
Example seven
Fig. 9 is a schematic structural diagram of a computer device according to the seventh embodiment of the present invention. As shown in fig. 9, the computer device includes a processor 90, a memory 91, an input device 92, and an output device 93; the number of processors 90 in the computer device may be one or more, and one processor 90 is taken as an example in fig. 9. The processor 90, the memory 91, the input device 92, and the output device 93 of the computer device may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 9.
The memory 91, as a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the federated learning based model training method in the embodiments of the present invention (for example, the unique identification code generation module 710, the local encryption training model receiving module 720, the fusion calculation module 730, and the target training model generation module 740 of the apparatus shown in fig. 7, or the mask generation module 810, the local encryption training model generation module 820, and the local encryption training model upload module 830 of the apparatus shown in fig. 8). By running the software programs, instructions, and modules stored in the memory 91, the processor 90 executes the various functional applications and data processing of the computer device, that is, implements the federated learning based model training method described above.
The memory 91 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 91 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the memory 91 may further include memory located remotely from the processor 90, which may be connected to the computer device over a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 92 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the computer device. The output device 93 may include a display device such as a display screen.
Example eight
The eighth embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a federated learning based model training method applied to the coordinator of a federated learning system, the method including:
generating unique identification codes respectively corresponding to the participants in the federated learning system, and issuing each unique identification code to the matched participant, where the unique identification codes are used to instruct the participants to generate matched masks;
receiving the local encryption training model uploaded by each participant, where each local encryption training model is obtained by a participant encrypting its local training model after the matched mask has been superimposed on it;
performing a fusion calculation on the local encryption training models according to a preset mask cancellation algorithm to obtain a fused complete encryption training model;
and decrypting the complete encryption training model a single time to obtain the target training model.
Of course, in the storage medium containing computer-executable instructions provided by this embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform related operations of the federated learning based model training method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention may be implemented by software plus the necessary general-purpose hardware, and certainly may also be implemented by hardware, although the former is the preferred implementation in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the embodiments of the federated learning based model training apparatus, the included units and modules are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.