CN114841016B - A multi-model federated learning method, system and storage medium - Google Patents

A multi-model federated learning method, system and storage medium

Info

Publication number
CN114841016B
Authority
CN
China
Prior art keywords
model
training
trained
models
accuracy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210581613.3A
Other languages
Chinese (zh)
Other versions
CN114841016A (en)
Inventor
李纯喜
赵永祥
李从
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN202210581613.3A
Publication of CN114841016A
Application granted
Publication of CN114841016B
Active
Anticipated expiration

Abstract

Translated from Chinese


The present application provides a multi-model federated learning method, system and storage medium. The method comprises: a server obtains a set of models to be trained; the server adopts a multi-model optimization allocation method to generate an optimization allocation matrix, so that the models to be trained are allocated to different clients according to the optimization allocation matrix; each client downloads its corresponding models to be trained according to the instructions of the optimization allocation matrix generated by the server, completes the current round of model training, and uploads the trained model parameters to the server; the server receives the model parameters uploaded by the clients within a preset time and aggregates the model parameters; the server determines the accuracy of each model to be trained and the total number of training rounds based on the aggregated model parameters, ends training for the models to be trained that meet the accuracy requirement or whose number of training rounds exceeds the round-number threshold, and the other models to be trained enter the next round of training. This solution yields a significant improvement in training efficiency when training multiple models.

Description

Multi-model federated learning method, system and storage medium
Technical Field
The invention belongs to the technical field of computers, and in particular relates to a multi-model federated learning method, a multi-model federated learning system and a storage medium.
Background
In recent years, federated learning has attracted great attention as a distributed machine learning paradigm. Under the management of a central server (abbreviated as server), it allows distributed client devices (abbreviated as clients) to join in AI (Artificial Intelligence) model training without sharing the private data of each client device, so that user privacy is protected, data silos are broken, and the training efficiency of AI models is improved.
In the basic federated learning system framework, a server communicates over a network with the different clients participating in federated learning. The model to be trained is distributed to each client in an iterative manner; in each iteration, every client completes model training and uploads the trained model parameters to the server, the server aggregates the received parameters, and, if necessary, starts the next iteration of training for that model until the model reaches a certain inference accuracy or the number of iterations reaches a certain threshold.
On the basis of this framework, a prior scheme sets a fixed upper-limit time T for each round of training and designs a client selection method. In each round, the existing scheme first randomly selects a certain proportion of clients from all available clients, and then selects as many of those clients as possible such that the actual federated training time of each round is less than or equal to the upper-limit time T. This scheme is designed for single-model training and cannot make full use of client resources.
Also based on this framework, existing federated learning systems focus on single-model training; that is, the system is only responsible for training one model at a time. In such a system, the server manages the federated training of the single model, and the training proceeds iteratively: in each iteration a certain number of clients are selected by the server to participate in the training of this single model, each selected client trains the model once, and the server must wait for the slowest client to finish before the iteration can complete. Existing single-model federated learning systems therefore waste a large amount of powerful client resources, whose capabilities are underutilized because only one model can be trained per round.
Disclosure of Invention
It is an object of embodiments of the present specification to provide a multi-model federated learning method, system, and storage medium.
In order to solve the technical problems, the embodiment of the application is realized by the following steps:
In a first aspect, the present application provides a multi-model federated learning method, the method comprising:
the method comprises the steps that a server acquires a model set to be trained, wherein the model set to be trained comprises a plurality of models to be trained;
The server generates an optimal distribution matrix by adopting a multi-model optimal distribution method so as to distribute the model to be trained to different clients according to the optimal distribution matrix, so that each client downloads the corresponding model to be trained according to the indication of the optimal distribution matrix generated by the server, completes the model training of the round, and uploads the trained model parameters to the server;
the server receives the model parameters uploaded by the client in a preset time and aggregates the model parameters;
and the server determines the precision of each model to be trained and the total number of rounds of model training according to the aggregated model parameters; training ends for the models to be trained that meet the precision requirement or whose number of training rounds exceeds the round-number threshold, and the other models to be trained enter the next round of training.
In one embodiment, a multi-model optimal allocation method is adopted to generate an optimal allocation matrix, which comprises the following steps:
randomly distributing the model to be trained to each client to obtain an initial distribution matrix;
calculating a corresponding initial objective function value according to the initial allocation matrix and the model precision function table;
Constructing indexes of all possible model allocation attempts to be trained based on the initial objective function values to obtain an index set;
And determining an optimal allocation matrix according to the index set.
In one embodiment, the structure of the model precision function table comprises model types, initial model precision, the number of clients used for model training and end model precision, and the creation of the model precision function table comprises the following steps:
Obtaining a mathematical model;
calculating, based on the mathematical model, the end model precision obtained from one round of training for models of different types with different given initial model precisions and different numbers of clients;
And all the corresponding relations of the model types, the initial model precision, the number of used clients and the ending model precision form an initial model precision function table.
In one embodiment, the updating of the model precision function table includes:
after each time of training a round of models, the model training type during the training of the round, the initial training model precision at the beginning of the round, the training number of the clients used by the round and the ending training model precision at the ending of the round are obtained;
And updating the corresponding model precision function table according to the training type, the initial training model precision, the training quantity of the client and the final training model precision.
In one embodiment, updating the corresponding model precision function table according to the training category, the initial training model precision, the training number of the client, and the final training model precision includes:
and searching the data record closest to the training model precision of the starting training and the training quantity of the client from the records of the corresponding training categories in the model precision function table, and updating the model precision of the ending model in the data record to the model precision of the ending training.
In one embodiment, calculating the corresponding initial objective function value according to the initial allocation matrix and the model precision function table includes:
acquiring the initial training precision of any model at the beginning of the current round of training;
under the condition of initial matrix allocation, determining expected inference precision according to a model precision function table and initial training precision;
And determining an initial objective function value according to the initial training precision and the expected inference precision.
In one embodiment, determining an optimal allocation matrix from the set of indices includes:
Judging whether the index set is empty or not;
if the index set is empty, outputting the current allocation matrix as an optimized allocation matrix;
If the index set is not empty, each possible allocation attempt recorded in the index set is tried to allocate the client to each model according to the allocation attempt under the current allocation matrix, and meanwhile, one or more allocated models on the client are removed so that the client can complete model training within a preset time, the allocation matrix of the trial model is recorded, and the trial objective function value corresponding to the allocation matrix of the trial model is calculated;
Selecting an allocation attempt corresponding to the largest try objective function value as an optimal allocation attempt according to all possible allocation attempts, corresponding try model allocation matrixes and try objective function values recorded in the index set, wherein the try objective function value corresponding to the optimal allocation attempt is the largest try objective function value;
if the maximum trial objective function value is smaller than or equal to the current objective function value, outputting the current allocation matrix as an optimized allocation matrix;
If the maximum trial objective function value is greater than the current objective function value, updating the current allocation matrix to be the optimal allocation matrix corresponding to the optimal allocation trial, updating the variable corresponding to the current objective function value to be the variable corresponding to the maximum trial objective function value, deleting the optimal allocation trial from the index set, and returning to judge whether the index set is empty to continue execution.
In one embodiment, the set of models to be trained includes the newly injected model to be trained and/or the model remaining to be trained after the previous round of training.
In a second aspect, the present application provides a multi-model federated learning system, the system comprising:
a server, configured to acquire a model set to be trained, the model set to be trained comprising a plurality of models to be trained, to generate an optimization allocation matrix by adopting a multi-model optimization allocation method, and to allocate the models to be trained to different clients according to the optimization allocation matrix;
a plurality of clients, configured to download their respective corresponding models to be trained according to the indication of the optimization allocation matrix, complete the current round of model training, and upload the trained model parameters to the server;
and the server is further configured to receive the model parameters uploaded by the clients within the preset time, aggregate the model parameters, determine the precision of each model to be trained and the total number of training rounds according to the aggregated model parameters, end training for the models to be trained that meet the precision requirement or whose number of training rounds exceeds the round-number threshold, and let the other models to be trained enter the next round of training.
In a third aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-model federated learning method of the first aspect.
As can be seen from the technical solutions provided by the embodiments of this specification, the allocation of training tasks from multiple models to multiple clients is optimized based on differences in client resources, the overall training efficiency of the multiple models is expected to be maximized, client resources can be fully utilized, and training efficiency is significantly improved when training multiple models.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some of the embodiments described in the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a multi-model federated learning system according to the present application;
FIG. 2 is a schematic flow chart of the multi-model federated learning method provided by the present application;
FIG. 3 is a schematic flow chart of a multi-model federated learning method according to the present application;
FIG. 4 is a schematic flow chart of the multi-model optimization allocation method provided by the application.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be apparent to those skilled in the art that various modifications and variations can be made in the specific embodiments of the application described herein without departing from the scope or spirit of the application. Other embodiments will be apparent to those skilled in the art from consideration of the specification of the present application. The specification and examples of the present application are exemplary only.
As used herein, the terms "comprising," "including," "having," "containing," and the like are intended to be inclusive and mean an inclusion, but not limited to.
The "parts" in the present application are all parts by mass unless otherwise specified.
In the related art, existing federated learning systems focus on single-model training; that is, the system is only responsible for training one model at a time. In such a system, the server manages the federated training of the single model and the training proceeds iteratively: in each iteration a certain number of clients are selected by the server to participate in training the model, each selected client trains the model once, and the server must wait for the slowest client before the iteration can complete. Thus, existing single-model federated learning systems waste a large amount of powerful client resources, whose capabilities are underutilized because only one model can be trained per round.
On this basis, the application provides a multi-model federated learning method that trains a plurality of models in parallel on an efficient multi-model federated learning system. By taking differences in client resources into account, the system optimizes the allocation of training tasks from the multiple models to the multiple clients, maximizes the overall training efficiency of the multiple models, makes full use of client resources, and achieves a significant performance improvement during multi-model training.
The invention is described in further detail below with reference to the drawings and examples.
The method provided by the embodiments of the present application is applied to a multi-model federated learning system, that is, a system that implements the multi-model federated learning method. FIG. 1 illustrates a schematic diagram of a multi-model federated learning system provided by an embodiment of the present application. As shown in FIG. 1, the system comprises a server 11 and several clients 12 that can participate in model training. Each client 12 owns its respective user data (i.e., private data) as a participant in federated learning. The server 11 communicates with the clients 12 via the Internet (including backbones and various types of access networks) to implement federated learning.
The client 12 may be a PC client, a mobile client, a smart vehicle client, or the like.
The server 11 manages the training of a plurality of AI models and is responsible for managing the parallel federated learning of multiple models in a cyclic, iterative manner until each model reaches its preset inference accuracy or the total number of training iterations of the model reaches a preset maximum threshold. In each iteration, the server executes a multi-model optimization allocation method, allocates the multiple models to a plurality of randomly selected clients for parallel training, and aggregates the model parameters after client training so as to improve model inference accuracy and realize high-performance multi-model federated learning.
In order to realize the optimal allocation of multiple models to multiple clients, the server communicates with each client periodically to obtain dynamic, differentiated information about the client, including the bandwidth available for downloading model data, the models that its private data can support training, the computing capability for model training, and so on, so as to dynamically estimate the total time (denoted δij) required by a client i to train a model j once, including the time required to download and upload the model parameters and the time required to train the model once on the private data; the server can then perform the optimal multi-model-to-multi-client allocation calculation based on this information.
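By way of illustration only, the following sketch shows one way such an estimate of δij could be formed from a client's reported bandwidth and compute capability. The field names and the additive cost model are assumptions made for exposition, since the application only states that the download/upload time and the local training time are both included.

```python
# Hypothetical sketch of estimating delta_ij, the total time for client i to
# train model j once in one round. Field names and the additive cost model are
# illustrative assumptions; the application only states that download/upload
# time and local training time are both included.
def estimate_delta(client, model):
    comm_time = 2 * model["param_bytes"] / client["bandwidth_Bps"]   # download + upload
    train_time = (model["flops_per_sample"] * client["num_samples"]
                  * model["local_epochs"] / client["flops_per_sec"])  # local training
    return comm_time + train_time

client = {"bandwidth_Bps": 5e6, "num_samples": 2000, "flops_per_sec": 2e10}
model = {"param_bytes": 40e6, "flops_per_sample": 5e7, "local_epochs": 1}
print(estimate_delta(client, model))  # estimated delta_ij in seconds (about 21 s here)
```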
Referring to FIG. 2, a flow chart of a multi-model federated learning method suitable for use in embodiments of the present application is shown.
As shown in FIG. 2, the multi-model federated learning method may include:
S210, the server acquires a model set to be trained, wherein the model set to be trained comprises a plurality of models to be trained. The model set to be trained can comprise newly injected models to be trained and/or models that remain to be trained after the previous round of training.
S220, the server adopts a multi-model optimization allocation method to generate an optimization allocation matrix, so as to allocate the models to be trained to different clients according to the optimization allocation matrix; each client then downloads its corresponding models to be trained according to the indication of the optimization allocation matrix generated by the server, completes the current round of model training, and uploads the trained model parameters to the server;
S230, the server receives the model parameters uploaded by the clients within a preset time and aggregates the model parameters;
S240, the server determines the precision of each model to be trained and the total number of rounds of model training according to the aggregated model parameters; training ends for the models to be trained that meet the precision requirement or whose number of training rounds exceeds the round-number threshold, and the other models to be trained enter the next round of training.
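A minimal server-side sketch of one such round (S210 to S240) is given below. The four injected helpers stand in for the allocation, training/collection, aggregation, and evaluation procedures described elsewhere in this document; their names and signatures are illustrative assumptions.

```python
# Hypothetical per-round server loop for S210-S240. The injected callables are
# placeholders for the procedures described elsewhere in this document.
def run_round(models, clients, T, acc_targets, max_rounds,
              optimize_allocation, train_and_collect, aggregate, evaluate):
    pi = optimize_allocation(models, clients, T)                 # S220: allocation matrix
    uploads = train_and_collect(pi, timeout=T)                   # S220/S230: clients train, upload
    finished, next_round = [], []
    for m in models:
        m["params"] = aggregate(uploads[m["id"]])                # S230: aggregate uploaded parameters
        m["accuracy"] = evaluate(m)                              # S240: current model precision
        m["rounds"] += 1
        if m["accuracy"] >= acc_targets[m["id"]] or m["rounds"] >= max_rounds:
            finished.append(m)                                   # precision or round limit reached
        else:
            next_round.append(m)                                 # train again in the next round
    return finished, next_round
```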
It can be understood that model training in the multi-model federated learning method provided by the embodiments of the present application is performed round by round. Each round of training, shown in FIG. 3, mainly includes the steps of preparing the models to be trained and the clients, preparing the model precision function table, computing the multi-model optimization allocation, training the models in parallel, and collecting and aggregating model parameters. The specific process is as follows:
First, the server prepares the models to be trained in this round in advance and denotes the set of all models to be trained as M. The server also randomly selects a certain number of clients, denotes the client set as N, and starts the training round. M comprises the models left over from the previous round that still need further training as well as newly injected models to be trained.
Then, the server prepares the model precision function table for the round of multi-model allocation. The preparation process involves two operations, one is to initialize the model precision function table (i.e., creation of the model precision function table) according to the mathematical model, and the other is to update the model precision function table (i.e., update of the model precision function table) according to the actual training result of the previous round. The specific steps of creating and updating the model precision function table are described in the following embodiments.
Then, the server adopts the multi-model optimization allocation method to generate an optimization allocation matrix π, so as to allocate the |M| models to be trained in this round to the clients in the set N. According to the optimization allocation matrix π, each client should be able to complete its assigned model training tasks within a preset time T (which may be set according to actual requirements). In this allocation calculation, the server computes the multi-model-to-multi-client allocation according to the estimated time δij required for any client i ∈ N to complete one round of training of any model j ∈ M and the model precision function table, and generates an |N|-row by |M|-column optimization allocation matrix π = { πij | i ∈ N, j ∈ M }, where element πij = 1/0 indicates whether or not model j is allocated to client i.
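As a small illustration of the per-client time constraint implied by the matrix π, the sketch below checks that an allocation is feasible, i.e., that every client can finish its assigned models within T; the variable names are illustrative.

```python
# Illustrative feasibility check: under allocation matrix pi (|N| x |M|, entries 0/1),
# every client i must finish its assigned models within the preset time T,
# i.e. sum_j pi[i][j] * delta[i][j] <= T.
def allocation_feasible(pi, delta, T):
    return all(
        sum(p * d for p, d in zip(pi_row, delta_row)) <= T
        for pi_row, delta_row in zip(pi, delta)
    )

pi = [[1, 0, 1], [0, 1, 0]]                      # 2 clients, 3 models
delta = [[30.0, 50.0, 40.0], [60.0, 45.0, 80.0]]
print(allocation_feasible(pi, delta, T=90.0))    # True: 70 <= 90 and 45 <= 90
```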
Then, each client downloads the models to be trained for which it is responsible according to the indication of the optimization allocation matrix π generated by the server, performs the training, and uploads the trained model parameters to the server.
Then, within the time T allowed for this training round (i.e., the preset time), the server receives the model parameters uploaded by the clients and aggregates them.
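The application does not prescribe a particular aggregation rule; purely as an example, a FedAvg-style weighted average, which is a common choice in federated learning, could look as follows.

```python
import numpy as np

# Illustrative FedAvg-style aggregation; this is an assumption, since the text
# only states that the server "aggregates the model parameters" within time T.
def aggregate(client_updates):
    """client_updates: list of (num_samples, parameter_vector) pairs received before T."""
    total = sum(n for n, _ in client_updates)
    return sum(n * np.asarray(p) for n, p in client_updates) / total

updates = [(100, [0.2, 0.4]), (300, [0.1, 0.5])]
print(aggregate(updates))   # -> [0.125 0.475]
```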
Finally, the server determines the precision of each model to be trained and the total number of training rounds according to the aggregated model parameters. Models that have reached the required precision, or whose number of training rounds exceeds a certain number (namely the round-number threshold, which can be set according to actual requirements), finish the training process; the other models enter the next round of training.
The structure of the model precision function table includes model type (or model class), initial model precision, the number of clients used for model training, and end model precision.
In one embodiment, the creation of the model precision function table may include:
Obtaining a mathematical model;
calculating, based on the mathematical model, the ending model accuracy obtained from one round of training for models of different types with different given initial model accuracies and different numbers of clients;
and all the corresponding relations of model types, initial model precision, the number of used clients and end model precision form an initial model precision function table.
In one embodiment, the updating of the model precision function table includes:
After each time of training a round of models, obtaining model training types during the training of the round, initial training model precision at the beginning of the round, training quantity of clients used by the round and ending training model precision at the ending of the round;
and updating the corresponding model precision function table according to the training category, the initial training model precision, the training quantity of the client and the final training model precision.
Specifically, the multi-model allocation method of the present application assumes that, in one federated learning round, the inference accuracy aj^end of a model j after training is, in general, a function of the inference accuracy aj^start of model j at the beginning of the round and the number of clients nj used in this round of training. This function is called the model precision function and can be expressed as aj^end = fj(aj^start, nj) (formula (1)).
For convenience during the multi-model allocation calculation, the model precision function is stored in the memory of the server in the form of a function table (it will be understood that the model precision function may also be stored in other storage media or on other servers, and only needs to be obtained when the server uses it), so that the expected model precision can be obtained by looking up the table with the given parameters. In the following, the lookup procedure is still expressed in the form of equation (1). The structure, creation, and update of the model precision function table are described below.
The model precision function table structurally comprises the basic fields: model type, current model precision (corresponding to aj^start), the number of clients that will be used for model training (corresponding to nj), and the model precision after training (corresponding to aj^end).
When using the model precision function table, the program performs a fuzzy lookup of the table based on the given model type, the current precision aj^start of the model, and the number nj of clients participating in training: among the records of the corresponding model type it finds the record whose stored values are closest to aj^start and nj, and reads the trained model precision of that record (corresponding to aj^end) as the expected precision. As above, the present application still uses the symbol fj to represent this fuzzy lookup of the model precision function table.
The creation and updating of the model precision function table are based on a mathematical model and on actual model training results, respectively. The table is created based on a mathematical model (formula (2)) derived from actual observations, which expresses the inference accuracy attainable by training the most primitive model j with nj clients as a parametric function of nj, where j denotes the model, αj and βj are model parameters that can be measured in advance from experimental data, and nj is the number of clients used for model training. Given the mathematical model (2), one can calculate the model accuracy that a model j of a given type, with a given initial model accuracy aj^start and a given number of clients nj, is expected to reach after one round of training, namely the model precision function of formula (1); this derived relation is referred to as formula (3).
Furthermore, the accuracy reached by model j after one round of training can be calculated from formula (3) for different given initial model accuracies and different numbers of clients, and these data can be stored in an in-memory table to form the initial model precision function table. The values of the current model precision field in the table may correspond to a number of quantization intervals between 0 and 1, for example 0.1, 0.2, ..., 0.9, and the values of the client number field may take values from 1 to |N| at a certain integer interval, for example 1, 2, ....
The model precision function table is continuously updated with the actual results of multi-model training, so as to reflect the actual relationships among the parameters more accurately and improve the accuracy of the multi-model allocation method. Each time a round of models is actually trained, the server records the type of each model j trained in the round, the model precision aj^start at the beginning of the round, the number of clients nj used in the round, and the model precision aj^end at the end of the round, and updates the corresponding record values in the model precision function table with these data. Specifically, the record closest to aj^start and nj is found among the records in the table corresponding to the type of model j, and its trained model precision value is updated to aj^end, thereby completing the record update of the model precision function table.
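A minimal in-memory sketch of such a table, with the fuzzy (nearest-record) lookup and the update rule just described, might look as follows; the dictionary layout, the distance metric, and the example predictor standing in for formula (2) are illustrative assumptions, not the concrete model of the application.

```python
# Illustrative in-memory model precision function table f_j(a_start, n).
# Keys are (model_type, quantized start accuracy, client count); the stored value
# is the expected accuracy after one round. Grid and distance metric are assumptions.
class PrecisionTable:
    def __init__(self):
        self.records = {}   # (model_type, a_start, n) -> a_end

    def create(self, model_type, predictor, acc_grid, n_values):
        # predictor(a_start, n) plays the role of the mathematical model that seeds
        # the table (formulae (2)/(3) in the text); any concrete form is an assumption.
        for a in acc_grid:
            for n in n_values:
                self.records[(model_type, a, n)] = predictor(a, n)

    def _nearest_key(self, model_type, a_start, n):
        # assumes at least one record exists for this model type
        keys = [k for k in self.records if k[0] == model_type]
        return min(keys, key=lambda k: abs(k[1] - a_start) + abs(k[2] - n))

    def lookup(self, model_type, a_start, n):       # fuzzy lookup f_j(a_start, n)
        return self.records[self._nearest_key(model_type, a_start, n)]

    def update(self, model_type, a_start, n, a_end):  # overwrite the closest record
        self.records[self._nearest_key(model_type, a_start, n)] = a_end

table = PrecisionTable()
table.create("cnn", lambda a, n: min(1.0, a + 0.05 * n ** 0.5),   # made-up predictor
             [i / 10 for i in range(1, 10)], range(1, 21))
print(table.lookup("cnn", 0.43, 7))    # fuzzy lookup at the nearest grid point
table.update("cnn", 0.43, 7, 0.61)     # overwrite with an actually observed result
```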
In one embodiment, a multi-model optimal allocation method is adopted to generate an optimal allocation matrix, which comprises the following steps:
randomly distributing the model to be trained to each client to obtain an initial distribution matrix;
calculating a corresponding initial objective function value according to the initial allocation matrix and the model precision function table;
Constructing indexes of all possible model allocation attempts to be trained based on the initial objective function values to obtain an index set;
And determining an optimal allocation matrix according to the index set.
Wherein, according to the initial allocation matrix and the model precision function table, calculating the corresponding initial objective function value may include:
acquiring the initial training precision of any model at the beginning of the current round of training;
under the condition of initial matrix allocation, determining expected inference precision according to a model precision function table and initial training precision;
And determining an initial objective function value according to the initial training precision and the expected inference precision.
Wherein determining an optimal allocation matrix from the index set comprises:
Judging whether the index set is empty or not;
if the index set is empty, outputting the current allocation matrix as an optimized allocation matrix;
If the index set is not empty, each possible allocation attempt recorded in the index set is tried to allocate the client to each model according to the allocation attempt under the current allocation matrix, and meanwhile, one or more allocated models on the client are removed so that the client can complete model training within a preset time, the allocation matrix of the trial model is recorded, and the trial objective function value corresponding to the allocation matrix of the trial model is calculated;
Selecting an allocation attempt corresponding to the largest try objective function value as an optimal allocation attempt according to all possible allocation attempts, corresponding try model allocation matrixes and try objective function values recorded in the index set, wherein the try objective function value corresponding to the optimal allocation attempt is the largest try objective function value;
if the maximum trial objective function value is smaller than or equal to the current objective function value, outputting the current allocation matrix as an optimized allocation matrix;
If the maximum trial objective function value is greater than the current objective function value, updating the current allocation matrix to be the optimal allocation matrix corresponding to the optimal allocation trial, updating the variable corresponding to the current objective function value to be the variable corresponding to the maximum trial objective function value, deleting the optimal allocation trial from the index set, and returning to judge whether the index set is empty to continue execution.
Specifically, the training efficiency of a model in one round (the model to be trained is hereinafter referred to simply as the model) is defined as the improvement in its inference accuracy, namely aj^end − aj^start. Furthermore, the application provides a multi-model optimization allocation method to solve for an optimization allocation matrix π from multiple models to multiple clients so as to maximize the overall weighted training efficiency of all models in each round of training, namely formula (4), thereby optimizing the overall training efficiency of the multi-model federated learning system:
F(π) = Σj∈M wj·(aj^end(π) − aj^start)   (4)
where wj is the importance factor of model j, which can be set manually; in particular, wj ≡ 1 for all models indicates that all models have the same priority. aj^start is the accuracy of model j at the beginning of the round (i.e., the initial training accuracy), and aj^end(π) is the expected inference accuracy given the allocation matrix π, which can be obtained by querying the model precision function table, i.e., aj^end(π) = fj(aj^start, Nj), where Nj = Σi∈N πij, and M and N are respectively the set of models and the set of clients corresponding to this round of training.
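As a compact illustration, the weighted objective of formula (4) can be evaluated from an allocation matrix and the model precision function table roughly as follows; the `table.lookup` helper is the fuzzy lookup sketched earlier and all names are illustrative.

```python
# Illustrative evaluation of formula (4): F(pi) = sum over j in M of
# w_j * (a_end_j(pi) - a_start_j), with a_end_j(pi) = f_j(a_start_j, N_j)
# and N_j = sum over i of pi[i][j].
def objective(pi, models, weights, a_start, table):
    value = 0.0
    for j, m in enumerate(models):
        n_j = sum(row[j] for row in pi)                   # clients assigned to model j this round
        if n_j == 0:
            continue                                      # an unassigned model gains nothing
        a_end = table.lookup(m["type"], a_start[j], n_j)  # expected accuracy after the round
        value += weights[j] * (a_end - a_start[j])        # weighted accuracy improvement
    return value
```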
In general, the multi-model optimization allocation method provided by the application finds an optimized allocation matrix from multiple models to multiple clients by executing the following calculation process. First, the models to be trained are randomly allocated to the clients to form an initial allocation matrix, and the corresponding objective function value F(π) is calculated; this step must ensure that each client can complete its own model training within the specified time T. The optimal model assignment is then sought iteratively in a greedy manner: each iteration, based on the current π, selects from all possible model assignment attempts the one that maximizes the objective function value for the current round of training, until no attempt can further increase the objective function value. Here, a model assignment attempt refers to assigning one model to one client while (if necessary) culling one or more models already assigned to that client so that the client can complete its model training within time T; culling is itself an iterative process that removes, one at a time, the assigned model whose removal minimizes the loss of the objective function, until the remaining models can be trained within T.
FIG. 4 shows the specific flow of the multi-model optimization allocation method. The input parameters required by the method include the upper-limit time T of each round (i.e., the preset duration), the set N of clients participating in this round of training, the set M of models to be trained, the total time δij required by any client i ∈ N to train any model j ∈ M once, the weight parameter wj of each model, and an initial allocation matrix π with all elements set to 0. The method generates the final optimized allocation matrix by performing the following steps.
Step 1: initialize the model allocation and record the result in the initial allocation matrix π. The models are randomly assigned to the individual clients such that each client has no spare time to train more models, i.e., for any client i ∈ N, Σj∈M πij·δij ≤ T is satisfied while no unassigned model j (with πij = 0) could be added without exceeding T.
Step 2: according to formula (4), calculate the value of F(π) for the initial allocation matrix π and store it in a variable u; construct the indices (i, j) of all possible model allocation attempts and store them in a set Γ for the subsequent iterative operations, where each element (i, j) of Γ denotes a model j that can be allocated to client i but is not currently allocated to it.
Step 3: judge whether the index set Γ is empty. If it is empty, output the allocation matrix π as the final allocation result, i.e., the optimized allocation matrix, and the method ends; if it is not empty, execute the next step and enter the loop iteration.
Step 4: for each possible allocation attempt (i, j) recorded in the index set Γ, under the current allocation matrix π, try to assign model j to client i while (if necessary) culling one or more models already assigned to client i so that client i can complete the training of all models it undertakes within time T; the culling is again an iterative process that removes from the client, one at a time, the assigned model whose removal minimizes the loss of the objective function value, until the remaining models can be trained within T. Record the model allocation matrix π(i,j) resulting from these operations, calculate the objective function value F(π(i,j)), and store it in the variable u(i,j).
Step 5: from all possible allocation attempts (i, j) recorded in the index set Γ and the corresponding allocation matrices π(i,j) and objective function values u(i,j) obtained in step 4, select the allocation attempt with the largest objective function value as the best allocation attempt (i*, j*), i.e., (i*, j*) = argmax(i,j)∈Γ u(i,j).
Step 6: judge whether the objective function value u(i*,j*) corresponding to the selected best allocation attempt (i*, j*) is larger than the objective function value u corresponding to the current allocation matrix π. If u(i*,j*) ≤ u, terminate the loop, output the current allocation matrix π as the final result, and end the method; otherwise, execute the next step.
Step 7: update π to the allocation matrix π(i*,j*) corresponding to the best allocation attempt (i*, j*), update the objective function value variable u to u(i*,j*), and delete the selected (i*, j*) from Γ. Then return to step 3 and continue execution.
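Putting steps 1 to 7 together, a compact sketch of this greedy allocation loop might look as follows. It assumes the objective() evaluation sketched above with the model data already bound (for example via functools.partial), and treats the initialization order and tie-breaking as free choices; it is an illustration of the procedure rather than the reference implementation of the application.

```python
import random

# Illustrative sketch of the greedy multi-model allocation in steps 1-7 above.
# N and M are the numbers of clients and models, delta[i][j] is the estimated
# time for client i to train model j once, and objective(pi) is assumed to be
# the F(pi) evaluation sketched earlier with model data pre-bound.
def optimize_allocation(N, M, delta, T, objective):
    pi = [[0] * M for _ in range(N)]

    def client_time(matrix, i):
        return sum(matrix[i][j] * delta[i][j] for j in range(M))

    # Step 1: random initial allocation, filling each client up to the budget T
    for i in random.sample(range(N), N):
        for j in random.sample(range(M), M):
            if client_time(pi, i) + delta[i][j] <= T:
                pi[i][j] = 1

    u = objective(pi)                                   # Step 2: initial F(pi)
    gamma = {(i, j) for i in range(N) for j in range(M) if pi[i][j] == 0}

    while gamma:                                        # Step 3: iterate until Gamma is empty
        best, best_u, best_pi = None, u, None
        for (i, j) in gamma:                            # Step 4: evaluate every attempt
            trial = [row[:] for row in pi]
            trial[i][j] = 1
            # cull already-assigned models on client i, one at a time, keeping the
            # objective as high as possible, until the client fits within T
            while client_time(trial, i) > T:
                assigned = [k for k in range(M) if trial[i][k] == 1 and k != j]
                if not assigned:
                    break
                def value_after_removal(k):
                    probe = [row[:] for row in trial]
                    probe[i][k] = 0
                    return objective(probe)
                trial[i][max(assigned, key=value_after_removal)] = 0
            if client_time(trial, i) > T:
                continue                                # attempt infeasible even after culling
            u_trial = objective(trial)
            if u_trial > best_u:                        # Step 5: keep the best improving attempt
                best, best_u, best_pi = (i, j), u_trial, trial
        if best is None:                                # Step 6: no attempt improves F(pi); stop
            break
        pi, u = best_pi, best_u                         # Step 7: accept the best attempt
        gamma.discard(best)
    return pi
```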
The multi-model federated learning method provided by the embodiments of the present application can train a plurality of models in parallel, optimize the allocation of training tasks from the multiple models to the multiple clients based on differences in client resources, maximize the overall training efficiency of the multiple models, make full use of client resources, and markedly improve training efficiency during multi-model training.
With continued reference to FIG. 1, a schematic diagram of a multi-model federated learning system according to one embodiment of the present application is shown.
As shown in FIG. 1, the multi-model federated learning system may include:
The server 11 is used for acquiring a model set to be trained, wherein the model set to be trained comprises a plurality of models to be trained, generating an optimal allocation matrix by adopting a multi-model optimal allocation method, and allocating the models to be trained to different clients according to the optimal allocation matrix;
The clients 12 are used for receiving the optimal allocation matrix sent by the server, downloading respective corresponding models to be trained according to the indication of the optimal allocation matrix, completing the model training of the round, and uploading the trained model parameters to the server, wherein each client is required to complete respective model training tasks within preset time according to the optimal allocation matrix;
And the server 11 is further configured to receive the model parameters uploaded by the client and aggregate the model parameters within a preset time, determine the accuracy of each model to be trained and the total number of rounds of model training according to the aggregated model parameters, end training for the model to be trained that meets the accuracy requirement or has the number of training rounds exceeding the threshold of the number of rounds, and enter the next round of training for the other models to be trained.
Optionally, the server 11 is further configured to:
randomly distributing the model to be trained to each client to obtain an initial distribution matrix;
calculating a corresponding initial objective function value according to the initial allocation matrix and the model precision function table;
Constructing indexes of all possible model allocation attempts to be trained based on the initial objective function values to obtain an index set;
And determining an optimal allocation matrix according to the index set.
Optionally, the model precision function table comprises a model type, an initial model precision, the number of clients used for model training and an end model precision, and the server 11 is further configured to create the model precision function table, including:
Obtaining a mathematical model;
calculating, based on the mathematical model, the end model precision obtained from one round of training for models of different types with different given initial model precisions and different numbers of clients;
And all the corresponding relations of the model types, the initial model precision, the number of used clients and the ending model precision form an initial model precision function table.
Optionally, the server 11 is further configured to update a model precision function table, including:
after each time of training a round of models, the model training type during the training of the round, the initial training model precision at the beginning of the round, the training number of the clients used by the round and the ending training model precision at the ending of the round are obtained;
And updating the corresponding model precision function table according to the training type, the initial training model precision, the training quantity of the client and the final training model precision.
Optionally, the server 11 is further configured to:
and searching the data record closest to the training model precision of the starting training and the training quantity of the client from the records of the corresponding training categories in the model precision function table, and updating the model precision of the ending model in the data record to the model precision of the ending training.
Optionally, the server 11 is further configured to:
acquiring the initial training precision of any model at the beginning of the current round of training;
under the condition of initial matrix allocation, determining expected inference precision according to a model precision function table and initial training precision;
And determining an initial objective function value according to the initial training precision and the expected inference precision.
Optionally, the server 11 is further configured to:
Judging whether the index set is empty or not;
if the index set is empty, outputting the current allocation matrix as an optimized allocation matrix;
If the index set is not empty, each possible allocation attempt recorded in the index set is tried to allocate the client to each model according to the allocation attempt under the current allocation matrix, and meanwhile, one or more allocated models on the client are removed so that the client can complete model training within a preset time, the allocation matrix of the trial model is recorded, and the trial objective function value corresponding to the allocation matrix of the trial model is calculated;
Selecting an allocation attempt corresponding to the largest try objective function value as an optimal allocation attempt according to all possible allocation attempts, corresponding try model allocation matrixes and try objective function values recorded in the index set, wherein the try objective function value corresponding to the optimal allocation attempt is the largest try objective function value;
if the maximum trial objective function value is smaller than or equal to the current objective function value, outputting the current allocation matrix as an optimized allocation matrix;
If the maximum trial objective function value is greater than the current objective function value, updating the current allocation matrix to be the optimal allocation matrix corresponding to the optimal allocation trial, updating the variable corresponding to the current objective function value to be the variable corresponding to the maximum trial objective function value, deleting the optimal allocation trial from the index set, and returning to judge whether the index set is empty to continue execution.
Optionally, the model set to be trained includes a newly injected model to be trained and/or a model to be trained remaining after the previous training round.
The multi-model federated learning system provided in this embodiment can implement the foregoing method embodiments; the implementation principles and technical effects are similar and are not described again here.
In another aspect, the present application also provides a storage medium, which may be a storage medium included in the apparatus of the foregoing embodiments, or may exist alone without being assembled into a device. The storage medium stores one or more programs, which are used by one or more processors to perform the multi-model federated learning method described in the present application.
Storage media, including both permanent and non-permanent, removable and non-removable media, may be implemented in any method or technology for storage of information. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to mutually, and each embodiment mainly describes its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiments.

Claims (9)

Translated from Chinese
1.一种多模型联邦学习方法,其特征在于,所述方法包括:1. A multi-model federated learning method, characterized in that the method comprises:服务器获取待训练的模型集合;所述待训练的模型集合中包括若干待训练模型;The server obtains a set of models to be trained; the set of models to be trained includes a plurality of models to be trained;所述服务器采用多模型优化分配方法,生成优化分配矩阵,以将所述待训练模型按照所述优化分配矩阵分配给不同客户端;以使各个所述客户端按照所述服务器生成的所述优化分配矩阵的指示下载各自对应的待训练模型,完成本轮模型训练,并将训练后的模型参数上传至所述服务器;其中,按照所述优化分配矩阵,每个所述客户端需在预设时间内完成各自的模型训练任务;The server adopts a multi-model optimization allocation method to generate an optimization allocation matrix to allocate the models to be trained to different clients according to the optimization allocation matrix; so that each of the clients downloads the corresponding models to be trained according to the instructions of the optimization allocation matrix generated by the server, completes the current round of model training, and uploads the trained model parameters to the server; wherein, according to the optimization allocation matrix, each of the clients needs to complete its own model training task within a preset time;所述服务器在所述预设时间内接收所述客户端上传的所述模型参数,并聚合模型参数;The server receives the model parameters uploaded by the client within the preset time and aggregates the model parameters;所述服务器根据聚合的所述模型参数,确定各个所述待训练模型的精度和模型训练的总轮数,对达到精度要求或训练轮数超过轮数阈值的待训练模型,结束训练,其他的待训练模型进入下一轮训练;The server determines the accuracy of each of the models to be trained and the total number of rounds of model training according to the aggregated model parameters, and terminates the training of the models to be trained that meet the accuracy requirements or the number of training rounds exceeds the round number threshold, and the other models to be trained enter the next round of training;其中,所述采用多模型优化分配方法,生成优化分配矩阵,包括:The method of using a multi-model optimization allocation method to generate an optimization allocation matrix includes:将所述待训练模型随机分配给各个客户端,得到初始分配矩阵;Randomly assign the model to be trained to each client to obtain an initial assignment matrix;根据所述初始分配矩阵及模型精度函数表,计算对应的初始目标函数值;Calculate the corresponding initial objective function value according to the initial allocation matrix and the model accuracy function table;基于所述初始目标函数值,构造所有可能的待训练模型分配尝试的索引,得到索引集合;Based on the initial objective function value, construct indexes of all possible allocation attempts of the model to be trained to obtain an index set;根据所述索引集合,确定所述优化分配矩阵。The optimized allocation matrix is determined according to the index set.2.根据权利要求1所述的方法,其特征在于,所述模型精度函数表的结构包括:模型类型、初始模型精度、模型训练所使用的客户端数量及结束模型精度;所述模型精度函数表的创建包括:2. The method according to claim 1 is characterized in that the structure of the model accuracy function table includes: model type, initial model accuracy, number of clients used in model training and end model accuracy; the creation of the model accuracy function table includes:获取数学模型;Obtaining mathematical models;基于所述数学模型,计算不同类型的模型在不同给定初始模型精度和所使用不同数量的客户端时,经过一轮训练得到的结束模型精度;Based on the mathematical model, calculating the final model accuracy of different types of models after one round of training when different initial model accuracies and different numbers of clients are used;所有所述模型类型、初始模型精度、所使用的客户端数量、结束模型精度的对应关系,形成初始的所述模型精度函数表。The correspondence between all the model types, initial model accuracy, number of clients used, and final model accuracy forms the initial model accuracy function table.3.根据权利要求2所述的方法,其特征在于,所述模型精度函数表的更新包括:3. 
The method according to claim 2, characterized in that the updating of the model accuracy function table comprises:每次训练完一轮模型后,获取本轮训练时的模型训练类型、本轮开始时的初始训练模型精度、本轮所用客户端的训练数量、本轮结束时的结束训练模型精度;After each round of model training, obtain the model training type during this round of training, the initial training model accuracy at the beginning of this round, the number of clients used in this round of training, and the final training model accuracy at the end of this round;根据所述训练类型、所述初始训练模型精度、所述客户端的训练数量、所述结束训练模型精度,更新对应的所述模型精度函数表。According to the training type, the initial training model accuracy, the training quantity of the client, and the end training model accuracy, the corresponding model accuracy function table is updated.4.根据权利要求3所述的方法,其特征在于,所述根据所述训练类别、所述初始训练模型精度、所述客户端的训练数量、所述结束训练模型精度,更新对应的所述模型精度函数表,包括:4. The method according to claim 3, characterized in that the updating of the corresponding model accuracy function table according to the training category, the initial training model accuracy, the training quantity of the client, and the end training model accuracy comprises:从所述模型精度函数表中对应所述训练类别的记录中,查找与所述初始训练模型精度、所述客户端的训练数量最接近的数据记录,将所述数据记录中所述结束模型精度更新为所述结束训练模型精度。From the records corresponding to the training category in the model accuracy function table, find the data record closest to the initial training model accuracy and the training quantity of the client, and update the end model accuracy in the data record to the end training model accuracy.5.根据权利要求4所述的方法,其特征在于,所述根据所述初始分配矩阵及模型精度函数表,计算对应的初始目标函数值,包括:5. The method according to claim 4, characterized in that the calculation of the corresponding initial objective function value according to the initial allocation matrix and the model accuracy function table comprises:获取任一模型本轮训练开始时的初始训练精度;Get the initial training accuracy of any model at the beginning of this round of training;在所述初始分配矩阵的条件下,根据所述模型精度函数表及所述初始训练精度,确定预期推断精度;Under the condition of the initial allocation matrix, determining the expected inference accuracy according to the model accuracy function table and the initial training accuracy;根据所述初始训练精度及所述预期推断精度,确定所述初始目标函数值。The initial objective function value is determined according to the initial training accuracy and the expected inference accuracy.6.根据权利要求1所述的方法,其特征在于,所述根据所述索引集合,确定所述优化分配矩阵,包括:6. 
The method according to claim 1, characterized in that the step of determining the optimized allocation matrix according to the index set comprises:判断所述索引集合是否为空;Determine whether the index set is empty;若所述索引集合为空,则输出当前分配矩阵为优化分配矩阵;If the index set is empty, outputting the current allocation matrix as an optimized allocation matrix;若所述索引集合不为空,则对所述索引集合中记录的每种可能的分配尝试,在当前的分配矩阵下,尝试按照所述分配尝试为各个模型分配客户端,同时,剔除所述客户端上一个或多个已分配的模型,以使所述客户端能在所述预设时间内完成模型训练,记录尝试模型分配矩阵,并计算所述尝试模型分配矩阵对应的尝试目标函数值;If the index set is not empty, for each possible allocation attempt recorded in the index set, under the current allocation matrix, try to allocate clients to each model according to the allocation attempt, and at the same time, remove one or more allocated models on the client so that the client can complete model training within the preset time, record the attempted model allocation matrix, and calculate the attempted objective function value corresponding to the attempted model allocation matrix;根据所述索引集合中记录的所有可能的分配尝试及对应的所述尝试模型分配矩阵、尝试目标函数值,选择最大的尝试目标函数值对应的分配尝试作为最佳分配尝试,所述最佳分配尝试对应的尝试目标函数值为最大尝试目标函数值;According to all possible allocation attempts recorded in the index set and the corresponding allocation matrix of the attempt model and the attempt objective function value, select the allocation attempt corresponding to the maximum attempt objective function value as the best allocation attempt, and the attempt objective function value corresponding to the best allocation attempt is the maximum attempt objective function value;若所述最大尝试目标函数值小于或等于当前目标函数值,则输出所述当前分配矩阵为所述优化分配矩阵;If the maximum attempted objective function value is less than or equal to the current objective function value, outputting the current allocation matrix as the optimized allocation matrix;若所述最大尝试目标函数值大于所述当前目标函数值,将当前分配矩阵更新为所述最佳分配尝试对应的最佳分配矩阵,更新所述当前目标函数值对应的变量为所述最大尝试目标函数值对应的变量,并从所述索引集合中删除所述最佳分配尝试,返回所述判断所述索引集合是否为空继续执行。If the maximum attempt objective function value is greater than the current objective function value, update the current allocation matrix to the optimal allocation matrix corresponding to the optimal allocation attempt, update the variables corresponding to the current objective function value to the variables corresponding to the maximum attempt objective function value, delete the optimal allocation attempt from the index set, and return to the step of determining whether the index set is empty to continue execution.7.根据权利要求1-6任一项所述的方法,其特征在于,所述待训练的模型集合包括新注入的待训练模型和/或上一轮训练后剩余还需训练的模型。7. The method according to any one of claims 1-6 is characterized in that the set of models to be trained includes newly injected models to be trained and/or models that remain to be trained after the previous round of training.8.一种多模型联邦学习系统,其特征在于,所述系统包括:8. 
8. A multi-model federated learning system, characterized in that the system comprises:
a server, configured to obtain a set of models to be trained, the set of models to be trained comprising a plurality of models to be trained, and to generate an optimized allocation matrix using a multi-model optimized allocation method and allocate the models to be trained to different clients according to the optimized allocation matrix;
wherein generating the optimized allocation matrix using the multi-model optimized allocation method comprises: randomly allocating the models to be trained to the clients to obtain an initial allocation matrix; calculating the corresponding initial objective function value according to the initial allocation matrix and a model accuracy function table; constructing, based on the initial objective function value, indexes of all possible allocation attempts of the models to be trained to obtain an index set; and determining the optimized allocation matrix according to the index set;
a plurality of clients, configured to download their respective models to be trained as indicated by the optimized allocation matrix, complete the current round of model training, and upload the trained model parameters to the server, wherein, according to the optimized allocation matrix, each client must complete its model training tasks within a preset time;
the server being further configured to receive, within the preset time, the model parameters uploaded by the clients and aggregate the model parameters, and to determine, according to the aggregated model parameters, the accuracy of each model to be trained and the total number of training rounds, ending training for the models to be trained that meet the accuracy requirement or whose number of training rounds exceeds a round-number threshold, the other models to be trained entering the next round of training.
9. A readable storage medium having a computer program stored thereon, characterized in that, when the program is executed by a processor, the multi-model federated learning method according to any one of claims 1-7 is implemented.
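Claim 8 has the server aggregate only the parameters that arrive within the preset time, but the aggregation rule itself is not spelled out in this excerpt. A common choice in federated learning is a data-size-weighted average of the client updates (FedAvg-style), which the sketch below illustrates; the function name and the (parameters, num_samples) input format are assumptions, not part of the claims.

```python
def aggregate_parameters(updates):
    """Weighted average of client parameter vectors (FedAvg-style sketch).

    `updates` is a list of (parameters, num_samples) pairs, where `parameters`
    is a flat list of floats uploaded by one client before the round deadline.
    The weighting rule is an assumption; the excerpt only states that the
    parameters received within the preset time are aggregated.
    """
    if not updates:
        return None  # no client finished in time; keep the previous global model
    total_samples = sum(n for _, n in updates)
    dim = len(updates[0][0])
    aggregated = [0.0] * dim
    for params, n in updates:
        weight = n / total_samples
        for i in range(dim):
            aggregated[i] += weight * params[i]
    return aggregated

# Example: two clients trained the same model this round
# aggregate_parameters([([0.2, 0.4], 100), ([0.6, 0.8], 300)])  -> [0.5, 0.7]
```

After aggregation, the server compares each model's accuracy against its target and its round count against the round-number threshold to decide which models re-enter the next allocation round, as the claim states.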
CN202210581613.3A  2022-05-26  2022-05-26  A multi-model federated learning method, system and storage medium  Active  CN114841016B (en)

Priority Applications (1)

Application Number  Priority Date  Filing Date  Title
CN202210581613.3A  CN114841016B (en)  2022-05-26  2022-05-26  A multi-model federated learning method, system and storage medium

Applications Claiming Priority (1)

Application Number  Priority Date  Filing Date  Title
CN202210581613.3A  CN114841016B (en)  2022-05-26  2022-05-26  A multi-model federated learning method, system and storage medium

Publications (2)

Publication Number  Publication Date
CN114841016A (en)  2022-08-02
CN114841016B (en)  2024-12-20

Family

ID=82572717

Family Applications (1)

Application Number  Title  Priority Date  Filing Date
CN202210581613.3A  Active  CN114841016B (en)  2022-05-26  2022-05-26  A multi-model federated learning method, system and storage medium

Country Status (1)

Country  Link
CN (1)  CN114841016B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number  Priority date  Publication date  Assignee  Title
CN115758151A (en) *  2022-11-22  2023-03-07  Shanghai Jiao Tong University  Joint diagnosis model building method, photovoltaic module fault diagnosis method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number  Priority date  Publication date  Assignee  Title
CN106095942B (en) *  2016-06-12  2018-07-27  Tencent Technology (Shenzhen) Co., Ltd.  Strong variable extracting method and device
US11609760B2 (en) *  2018-02-13  2023-03-21  Shanghai Cambricon Information Technology Co., Ltd  Computing device and method
CN108846095A (en) *  2018-06-15  2018-11-20  Lenovo (Beijing) Co., Ltd.  A kind of data processing method and device
CN112085124B (en) *  2020-09-27  2022-08-09  Xi'an Jiaotong University  Complex network node classification method based on graph attention network
CN112700031B (en) *  2020-12-12  2023-03-31  Tongji University  XGboost prediction model training method for protecting multi-party data privacy
CN112464269B (en) *  2020-12-14  2024-09-24  Deqing Alpha Innovation Institute  Data selection method in federal learning scene
CN113642707B (en) *  2021-08-12  2023-08-18  Shenzhen Ping An Zhihui Enterprise Information Management Co., Ltd.  Model training method, device, equipment and storage medium based on federal learning
CN114362992A (en) *  2021-11-23  2022-04-15  Beijing Information Science and Technology University  Hidden Markov attack chain prediction method and device based on SNORT log

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"多模型联邦学习的资源优化分配";李从;《中国优秀硕士学位论文全文数据库 信息科技辑》;20230715(第2023年07期);第I140-24页*

Also Published As

Publication number  Publication date
CN114841016A (en)  2022-08-02

Similar Documents

Publication  Publication Date  Title
KR102107115B1 (en)  Distributed computing resources sharing system and computing apparatus thereof based on block chain system supporting smart contract
CN113485826B (en)  An edge server load balancing method and system
CN113141317A (en)  Streaming media server load balancing method, system, computer equipment and terminal
Keshk et al.  Cloud task scheduling for load balancing based on intelligent strategy
CN117707795B (en)  Edge-end collaborative reasoning method and system based on graph model partitioning
CN111027709B (en)  Information recommendation method and device, server and storage medium
CN110533437B (en)  Advertisement delivery budget allocation method and device
CN110633053B (en)  Storage capacity balancing method, object storage method and device
CN114841016B (en)  A multi-model federated learning method, system and storage medium
CN117670005A (en)  Supercomputing Internet multi-objective workflow optimization method and system based on ant colony algorithm
CN111581442A (en)  Method and device for realizing graph embedding, computer storage medium and terminal
CN117521782A (en)  Sparse and robust federated learning methods, federated learning systems and servers
CN114647493B (en)  A cloud task scheduling method and device based on immune annealing algorithm
CN119204083B (en)  A cooperative model incentive method and system based on Stackelberg game
CN119212105B (en)  A task perception and intelligent scheduling method for converged networks
US20230125509A1 (en)  Bayesian adaptable data gathering for edge node performance prediction
CN118567858A (en)  Service quality optimization method and system in metaverse environment based on deep reinforcement learning
CN109784687B (en)  Smart cloud manufacturing task scheduling method, readable storage medium and terminal
CN109767094B (en)  Smart cloud manufacturing task scheduling device
CN117557870A (en)  Classification model training method and system based on federal learning client selection
CN116862025A (en)  Model training method, system, client and server node, electronic device and storage medium
Nourmohammadi et al.  BlockFed: A novel federated learning framework based on hierarchical aggregation
CN116450658A (en)  Block storage method and device based on node cluster gain maximization
CN114064266A (en)  Cloud computing resource scheduling method based on multi-swarm and multi-objective ant colony algorithm
Singh et al.  Maximizing Utility and Quality in Smart Healthcare with Incentive Driven Hierarchical Federated Learning

Legal Events

Date  Code  Title  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant
