Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of it. It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary architecture 100 to which the disclosed method for supporting heterogeneous federated learning, or apparatus for supporting heterogeneous federated learning, may be applied.
As shown in fig. 1, system architecture 100 may include a federated learning initiator 101, a federated learning participant 102, a network 103, and a server 104. Network 103 is used to provide a medium for communication links between the federated learning initiator 101, the federated learning participant 102, and the server 104. Network 103 may include various connection types, such as wired links, wireless communication links, or fiber optic cables, to name a few.
The initiator 101 and the participants 102 of federated learning interact with each other, and with the server 104, over the network 103 to receive or send messages and the like. Various communication client applications, such as communication applications for supporting federated learning, may be installed on the initiator 101 and the participants 102.
The initiator 101 and the participants 102 of federated learning may be hardware or software. When they are hardware, they can be various electronic devices having display screens and supporting federated learning, including but not limited to smart phones, tablets, laptop computers, desktop computers, cloud servers, and the like. When they are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. This is not particularly limited herein.
Server 104 may be a server that provides various services, such as a backend server that provides support for machine learning model training applications on both the initiator 101 and the participants 102 of federated learning. The backend server can analyze and process a received federated learning request, send the generated federated learning subtasks to the federated learning initiator 101 and the federated learning participant 102, and generate a training result of the federated learning task according to data fed back by the federated learning initiator 101 and the federated learning participant 102.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. This is not particularly limited herein.
It should be noted that the method for supporting heterogeneous federated learning provided by the embodiments of the present disclosure is generally performed by the server 104, and accordingly, the apparatus for supporting heterogeneous federated learning is generally disposed in the server 104.
It should be understood that the numbers of federated learning initiators, federated learning participants, networks, and servers in fig. 1 are merely illustrative. There may be any number of federated learning initiators, federated learning participants, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for supporting heterogeneous federated learning in accordance with the present disclosure is shown. The method for supporting heterogeneous federated learning includes the following steps:
Step 201, receiving a federated learning request sent by an initiator.
In this embodiment, an executing entity (such as the server 104 shown in fig. 1) of the method for supporting heterogeneous federated learning may receive the federated learning request sent by the initiator through a wired or wireless connection. The federated learning request can be used to indicate the initiation of a federated learning task. The initiator is typically the party that initiates federated learning and is usually a party holding labeled data. The federated learning request may include, for example, general information on the training data (e.g., names of the data tables storing the training data), used subsequently to generate the data configurations in the federated learning subtasks.
In some optional implementations of this embodiment, the federated learning request may further include generic interface definition information. The generic interface definition information may be used to indicate at least one of: communication message attributes, communication identifiers, and the data structures to be transmitted.
In these implementations, the generic interface definition information may include definitions for the various federated learning procedures that involve data to be exchanged.
As an example, the generic interface definition information may be used to indicate message attributes in gRPC (Google Remote Procedure Call). It may also be used to indicate that the communication variables at each step of federated learning serve as communication identifiers; for example, the concatenation "/data type/current step/task id/variable name" is used as the identifier by which a recipient identifies a variable. It may further be used to indicate the data format to be transmitted; for example, [task id, [homomorphically encrypted value, exponent for the floating-point-to-integer mapping], ...] is used as the format for data transfer between the initiator and a participant.
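As a concrete illustration, the following Python sketch assembles a communication identifier and a transfer payload of the kinds just described; the function names and example values are illustrative assumptions, not the disclosure's actual interface.

```python
def make_comm_identifier(data_type: str, current_step: str,
                         task_id: str, variable_name: str) -> str:
    # Concatenation "/data type/current step/task id/variable name", by which
    # the recipient identifies the variable a message carries.
    return "/".join(["", data_type, current_step, task_id, variable_name])


def make_transfer_payload(task_id: str, ciphertext: bytes, exponent: int) -> list:
    # [task id, [homomorphically encrypted value, exponent used to map
    # floating-point numbers to integers]], as in the example format above.
    return [task_id, [ciphertext, exponent]]


# Example: identify the encrypted gradient of variable "w" at step "round_3".
identifier = make_comm_identifier("gradient", "round_3", "t42", "w")
payload = make_transfer_payload("t42", b"\x8a\x02", exponent=-16)
```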
With this optional implementation, the initiator can customize the standardized communication interface and communication flow through the generic interface definition information included in the federated learning request. This enriches the set of supported communication interfaces and helps reduce the resource overhead caused by mismatched interfaces between federated learning participants.
In some optional implementations of this embodiment, the federated learning request may further include task allocation granularity information. The task allocation granularity information may be used to indicate the granularity of task allocation, for example, dividing subtasks by training rounds, by variable update counts, and the like.
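Putting the pieces above together, a federated learning request might carry fields like the following; this is a minimal sketch, and every field name here is an illustrative assumption rather than the disclosure's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class GenericInterfaceDefinition:
    # Hypothetical carriers for the three kinds of information named above.
    message_attributes: dict = field(default_factory=dict)   # e.g. gRPC message attributes
    identifier_template: str = "/{data_type}/{step}/{task_id}/{variable}"
    transfer_format: str = "[task_id, [ciphertext, exponent]]"


@dataclass
class FederatedLearningRequest:
    initiator_id: str
    training_data_table: str      # name of the data table storing the training data
    interface_definition: Optional[GenericInterfaceDefinition] = None
    allocation_granularity: str = "round"   # or e.g. "variable_update"
```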
In some optional implementations of this embodiment, the executing entity may receive the federated learning request sent by the initiator in response to determining that the initiator is authenticated, where the initiator generally belongs to a registered user.
With this optional implementation, the scheme can support registration, authentication, and similar functions for all parties in federated learning, thereby ensuring system security while supporting federated learning among multiple participants.
Step 202, generating at least two federated learning subtasks according to the federated learning request.
In this embodiment, the executing entity may generate at least two federated learning subtasks in various ways according to the federated learning request received in step 201. As an example, the executing entity may split the federated learning task indicated by the federated learning request into at least two federated learning subtasks according to preset rules. A preset rule can be, for example, splitting the task according to the training rounds indicated by the federated learning request and generating a number of federated learning subtasks consistent with the number of participants.
In some optional implementations of this embodiment, based on the task allocation granularity information included in the federated learning request, the executing entity may further generate the at least two federated learning subtasks according to the following steps:
First, the task indicated by the federated learning request is parsed and configured, and a pipeline model is generated.
In these implementations, the executing entity may perform configuration analysis on the task indicated by the federated learning request received in step 201 to generate a pipeline model in various ways. As an example, the executing entity may use various existing federated learning modeling management tools to perform configuration analysis on the task to generate the pipeline model.
Second, federated learning subtasks consistent with the task allocation granularity information are generated according to the generated pipeline model.
In these implementations, according to the pipeline model generated in the first step, the executing entity may split or merge the flows indicated in the pipeline model, so as to generate federated learning subtasks consistent with the task allocation granularity information.
With this optional implementation, generating federated learning subtasks consistent with the task allocation granularity information changes the scheduling granularity of the training task. This supports controlling, monitoring, or collecting statistics on the internal training processes of the federated learning algorithm at a finer granularity than existing round-based scheduling.
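The splitting step might look like the following sketch, assuming a pipeline model represented as an ordered list of step names; the granularity labels and the splitting rule are illustrative assumptions, since the disclosure does not prescribe a concrete algorithm.

```python
from typing import List


def split_pipeline(pipeline_steps: List[str], granularity: str) -> List[List[str]]:
    # Split or merge the flows of the pipeline model into subtasks that
    # match the requested task allocation granularity.
    if granularity == "round":
        return [pipeline_steps]                     # one subtask per training round
    if granularity == "variable_update":
        return [[step] for step in pipeline_steps]  # one subtask per update step
    raise ValueError(f"unknown granularity: {granularity}")


subtasks = split_pipeline(
    ["align_data", "compute_gradients", "update_model"], "variable_update")
# -> [['align_data'], ['compute_gradients'], ['update_model']]
```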
Step 203, sending the at least two federated learning subtasks to the initiator and at least one participant, respectively, according to the state of the at least one participant corresponding to the federated learning request.
In this embodiment, the executing entity may send the at least two federated learning subtasks generated in step 202 to the initiator and the at least one participant, respectively, in various ways according to the state of the at least one participant corresponding to the federated learning request. As an example, the executing entity may send, among the at least two federated learning subtasks, only those whose corresponding participant's state meets a preset condition to the corresponding initiator and participant. As yet another example, the executing entity may send the at least two federated learning subtasks to the initiator and to each participant that is not in a down state.
In some optional implementations of this embodiment, in response to determining that the states of the at least one participant corresponding to the federated learning request are all trainable, the executing entity may send the at least two federated learning subtasks to the initiator and the at least one participant, respectively.
With this optional implementation, whether each participant is in a trainable state is verified before the federated learning subtasks are issued, so that the issued subtasks can be executed promptly after delivery, improving the success rate of training.
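A dispatch loop embodying this check might look as follows; `get_state` and `send` stand in for a status query and a transport call (e.g., over gRPC) and are assumptions of this sketch.

```python
from typing import Callable, Dict, List


def send(party: str, subtask: Dict) -> None:
    # Stand-in for the real transport; prints instead of issuing an RPC.
    print(f"issuing subtask {subtask['name']!r} to {party}")


def dispatch_subtasks(subtasks: List[Dict], parties: List[str],
                      get_state: Callable[[str], str]) -> bool:
    # Issue subtasks only when every party is trainable, so that issued
    # subtasks can start promptly after delivery.
    if all(get_state(p) == "trainable" for p in parties):
        for subtask, party in zip(subtasks, parties):
            send(party, subtask)
        return True
    return False  # at least one party is not ready: retry or report
```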
Step 204, in response to receiving the training feedback data sent by the initiator and the at least one participant, generating a training result of the federated learning task indicated by the federated learning request based on the training feedback data.
In this embodiment, in response to receiving the training feedback data sent by the initiator and the at least one participant, the executing entity may generate the training result of the federated learning task indicated by the federated learning request in various ways based on the training feedback data. The training feedback data is generally generated based on intermediate result data transmitted between the initiator and the at least one participant through an adaptation interface.
It should be noted that, in the federated learning process, each participant trains using local data and then generates intermediate result data. The intermediate result data may include various data that does not expose the original data and can reflect the training situation. The intermediate result data can be transmitted between the participants through the adaptation interface so as to update the parameters of each participant's local model. Each participant may generate training feedback data based on the training situation of its local data and the received intermediate result data of the other participants. The training feedback data may be used to indicate a training state, training metrics, and the like. Each participant may also send the training feedback data to the executing entity.
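One round on a participant's side might then look like the sketch below; `LocalModel` and its methods are stand-ins invented for illustration, not the disclosure's API.

```python
class LocalModel:
    # Minimal stand-in for a participant's local model.
    def __init__(self) -> None:
        self.loss = 1.0

    def fit(self, data) -> None:
        self.loss *= 0.9                    # pretend one local training pass

    def apply_update(self, peer_intermediate: dict) -> None:
        self.loss *= peer_intermediate.get("scale", 1.0)


def training_round(model: LocalModel, local_data, peer_intermediate: dict) -> dict:
    model.fit(local_data)                   # raw data never leaves this party
    model.apply_update(peer_intermediate)   # fold in peers' intermediate results
    # Training feedback for the server: training state and metrics only.
    return {"state": "running", "metrics": {"loss": model.loss}}
```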
Optionally, the intermediate result data may also include data obtained by private set intersection (PSI) during the data alignment process, metrics used for evaluating the model, and the like.
In this embodiment, in response to receiving the training feedback data sent by the initiator and the at least one participant, as an example, the executing entity may determine whether the federated learning task is completed according to the training state, training metrics, and the like indicated by the training feedback data, so as to generate the training result of the federated learning task indicated by the federated learning request.
In some optional implementations of this embodiment, when the federated learning request includes the generic interface definition information, the adaptation interface may be determined based on that information. The initiator thereby customizes the standardized communication interface and communication flow between the federated learning participants through the generic interface definition information included in the federated learning request, which enriches the supported communication interfaces and helps reduce the resource overhead caused by mismatched interfaces between federated learning participants.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of a method for supporting heterogeneous federated learning, in accordance with an embodiment of the present disclosure. In the application scenario of fig. 3, a user may use device 301 to send a federated learning request 303 to server 302 to initiate federated learning. Server 302 may generate at least two federated learning subtasks 304 based on federated learning request 303. Server 302 may send the at least two federated learning subtasks 304 to the initiator (e.g., device 301) and the at least one participant, respectively, according to the status of the at least one participant (e.g., devices 306, 307) corresponding to the federated learning request. The initiator and the at least one participant may then perform federated learning training using their respective local data to generate intermediate result data. The initiator and the at least one participant can then transmit the generated intermediate result data through the adaptation interface. The initiator and the at least one participant may also send training feedback data generated based on the intermediate result data to server 302. Server 302 may generate training results (e.g., perform the 2nd iteration, training completed, etc.) for the federated learning task indicated by the federated learning request based on the received training feedback data.
At present, in the prior art, the same federated learning framework is usually required for federated learning cooperation among all participants; in this scenario, if the required framework differs from a participant's local architecture, it brings additional deployment cost and low resource utilization. In the method provided by the embodiments of the present disclosure, the received federated learning request is converted into federated learning subtasks distributed to each participant, and the communication data and communication flow are normalized through the adaptation interface. This provides a technical basis for data communication in heterogeneous federated learning, where participants adopt different federated learning architectures: each participant can achieve fast cross-platform cooperation by converting between its local communication data format and the standard data format, realizing data interconnection between heterogeneous federated learning participants.
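The boundary conversion can be pictured with the following sketch, which assumes (purely for illustration) that the standardized wire format is JSON; each framework then needs only the two conversion functions at its edge.

```python
import json


def to_standard(local_message: dict) -> bytes:
    # Convert a framework-specific message into the standardized wire format.
    return json.dumps(local_message, sort_keys=True).encode("utf-8")


def from_standard(wire_bytes: bytes) -> dict:
    # Convert a standardized message back into the local framework's format.
    return json.loads(wire_bytes.decode("utf-8"))


# A participant on framework A and one on framework B interoperate by
# converting only at the boundary, without sharing a common framework.
wire = to_standard({"task_id": "t42", "values": [0.1, 0.2]})
restored = from_standard(wire)
assert restored["task_id"] == "t42"
```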
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for supporting heterogeneous federated learning is illustrated. The flow 400 of the method for supporting heterogeneous federated learning includes the steps of:
Step 401, receiving a federated learning request sent by an initiator.
In some optional implementations of this embodiment, the executing entity may receive the federated learning request sent by the initiator in response to determining that the initiator is authenticated, where the initiator generally belongs to a registered user.
Step 402, generating at least two federated learning subtasks according to the federated learning request.
Step 403, sending the at least two federated learning subtasks to the initiator and at least one participant, respectively, according to the state of the at least one participant corresponding to the federated learning request.
Step 404, in response to receiving the training feedback data sent by the initiator and the at least one participant, updating a target state table according to the training feedback data; and generating new federated learning subtasks according to the target state table until the federated learning task is completed.
In this embodiment, in response to receiving the training feedback data sent by the initiator and the at least one participant, the executing entity may update the target state table according to the training feedback data. The training feedback data may be generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptation interface. The target state table may be used to record training process data associated with the federated learning task. The training process data may include, but is not limited to, at least one of the following: training rounds, training steps, model evaluation metrics, and the like. Then, according to the target state table, the executing entity can continue to schedule the federated learning task and generate new federated learning subtasks until the federated learning task is completed.
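A target state table along these lines might be sketched as follows; the row layout and the completion rule are illustrative assumptions, not the disclosure's concrete design.

```python
class TargetStateTable:
    # Records training process data for a federated learning task.
    def __init__(self) -> None:
        self.rows = []                      # one row per feedback event

    def update(self, party: str, feedback: dict) -> None:
        self.rows.append({"party": party,
                          "round": feedback.get("round"),
                          "step": feedback.get("step"),
                          "metrics": feedback.get("metrics")})

    def task_finished(self, target_rounds: int, num_parties: int) -> bool:
        # Scheduling decision: every party has reported the final round.
        reporters = {r["party"] for r in self.rows if r["round"] == target_rounds}
        return len(reporters) >= num_parties
```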
In some optional implementations of this embodiment, the executing entity may further continue to perform the following steps:
In the first step, in response to determining that the federated learning task failed to train, a training starting point is determined according to the target state table.
In these implementations, the executing entity may first determine whether the federated learning task failed to train through various methods. As an example, the executing entity may determine that the federated learning task failed in response to determining that the received training feedback data indicates a training failure. As another example, the executing entity may determine that the federated learning task failed in response to detecting a network failure or the downtime of a participant of the federated learning task.
In response to determining that the federated learning task failed, the executing entity may determine a training starting point from the target state table in various ways. As an example, the executing entity may determine the training starting point according to a preset rule (e.g., the latest time point before the abnormality indicated by the target state table).
In the second step, the federated learning task is re-executed from the training starting point.
In these implementations, the executing entity may re-execute the federated learning task based on the training starting point determined in the first step.
With this optional implementation, training can be resumed from the training starting point determined according to the target state table, reducing the time cost and resource consumption of retraining from scratch.
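Continuing the `TargetStateTable` sketch above, one hedged way to pick the training starting point is the latest round for which every known party reported feedback; this rule is an assumption of the sketch, not the disclosure's prescribed policy.

```python
from collections import defaultdict


def find_training_start(table: "TargetStateTable") -> int:
    # Latest round for which every known party reported feedback; resuming
    # from here avoids retraining the federated learning task from scratch.
    parties = {r["party"] for r in table.rows}
    seen = defaultdict(set)
    for r in table.rows:
        seen[r["round"]].add(r["party"])
    complete = [rnd for rnd, who in seen.items() if who == parties]
    return max(complete, default=0)
```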
Step 401, step 402, and step 403 are respectively consistent with step 201, step 202, step 203, and their optional implementations in the foregoing embodiments, and the above description of step 201, step 202, step 203, and their optional implementations also applies to step 401, step 402, and step 403, which is not repeated here.
As can be seen from fig. 4, the flow 400 of the method for supporting heterogeneous federated learning in this embodiment highlights the step of updating the target state table according to the training feedback data and the step of generating new federated learning subtasks according to the target state table. The scheme described in this embodiment can therefore record various data from the federated learning training process and coordinate the scheduling of multiple participants accordingly, enabling step-based cooperative task execution, abnormality monitoring, troubleshooting of failure factors, and the like.
With further reference to fig. 5, as an implementation of the method shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for supporting heterogeneous federated learning, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2 or fig. 4, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for supporting heterogeneous federated learning provided by the present embodiment includes a receiving unit 501, a first generating unit 502, a distribution unit 503, and a second generating unit 504. The receiving unit 501 is configured to receive a federated learning request sent by an initiator; the first generating unit 502 is configured to generate at least two federated learning subtasks according to the federated learning request; the distribution unit 503 is configured to send the at least two federated learning subtasks to the initiator and at least one participant, respectively, according to a state of the at least one participant corresponding to the federated learning request; and the second generating unit 504 is configured to generate, in response to receiving training feedback data sent by the initiator and the at least one participant, a training result of the federated learning task indicated by the federated learning request based on the training feedback data, where the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptation interface.
In this embodiment, for the specific processing of the receiving unit 501, the first generating unit 502, the distribution unit 503, and the second generating unit 504 of the apparatus 500 for supporting heterogeneous federated learning, and the technical effects thereof, reference may be made to the related descriptions of step 201, step 202, step 203, and step 204 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the federated learning request may further include generic interface definition information. The generic interface definition information may be used to indicate at least one of: communication message attributes, communication identifiers, and the data structures to be transmitted. The adaptation interface may be determined based on the generic interface definition information.
In some optional implementations of this embodiment, the federated learning request may further include task allocation granularity information. The first generating unit 502 may be further configured to: parse and configure the task indicated by the federated learning request to generate a pipeline model; and generate federated learning subtasks consistent with the task allocation granularity information according to the generated pipeline model.
In some optional implementations of this embodiment, the distribution unit 503 may be further configured to: in response to determining that the states of the at least one participant corresponding to the federated learning request are all trainable, send the at least two federated learning subtasks to the initiator and the at least one participant, respectively.
In some optional implementations of this embodiment, the second generating unit 504 may be further configured to: update a target state table according to the training feedback data, where the target state table is used to record training process data related to the federated learning task; and generate new federated learning subtasks according to the target state table until the federated learning task is completed.
In some optional implementations of this embodiment, the apparatus 500 for supporting heterogeneous federated learning may be further configured to: in response to determining that the federated learning task failed to train, determine a training starting point according to the target state table; and re-execute the federated learning task from the training starting point.
In some optional implementations of this embodiment, the receiving unit 501 may be further configured to: receive the federated learning request sent by the initiator in response to determining that the initiator is authenticated, where the initiator belongs to a registered user.
In the apparatus provided by the above embodiment of the present disclosure, the federated learning request received by the receiving unit 501 is converted by the first generating unit 502 into federated learning subtasks that the distribution unit 503 distributes to each participant, and the communication data and communication flow are normalized through the adaptation interface. This provides a technical basis for data communication in heterogeneous federated learning where participants adopt different federated learning architectures: each participant can achieve fast cross-platform cooperation by converting between its local communication data format and the standard data format, realizing data interconnection between heterogeneous federated learning participants.
With further reference to FIG. 6, a timing sequence 600 of interactions between the various devices in one embodiment of a system for heterogeneous federated learning is illustrated. The system for heterogeneous federated learning can include: an initiator (e.g., device 101 shown in fig. 1) configured to send a federated learning request to a server, train based on the received federated learning subtask and local data to generate first intermediate result data, and send the first intermediate result data to a participant corresponding to the initiator through an adaptation interface; a participant (e.g., device 102 shown in fig. 1) configured to train based on the received federated learning subtask and local data to generate second intermediate result data, and send the second intermediate result data to the initiator through the adaptation interface; and a server configured to perform an implementation of the method for supporting heterogeneous federated learning as described in the foregoing embodiments.
As shown in fig. 6, in step 601, the initiator sends a federated learning request to the server.
In this embodiment, the initiator of federated learning may send a federated learning request to a server (e.g., the executing entity of the aforementioned method for supporting heterogeneous federated learning). The federated learning request may be consistent with the corresponding description in step 201 of the foregoing embodiment, which is not repeated here.
In step 602, the server receives the federated learning request sent by the initiator.
In step 603, the server generates at least two federated learning subtasks according to the federated learning request.
In step 604, the server sends the at least two federated learning subtasks to the initiator and at least one participant, respectively, according to the state of the at least one participant corresponding to the federated learning request.
In step 605, the initiator trains based on the received federated learning subtask and local data, and generates first intermediate result data.
In this embodiment, the initiator may train using local data and then generate intermediate result data that can be transmitted between the participants. The intermediate result data may include various data that does not expose the original data and can reflect the training situation.
In step 606, the participant trains based on the received federated learning subtask and local data, and generates second intermediate result data.
In this embodiment, the participants may train with local data and then generate intermediate result data that can be used for transmission between the participants. The intermediate result data may include various data that does not expose the original data and can reflect the training situation.
Steps 605 and 606 may be executed with step 605 before step 606, with step 606 before step 605, or substantially in parallel; the order is not limited herein.
In step 607, the initiator sends the first intermediate result data to the participant corresponding to the initiator through the adaptation interface.
In this embodiment, the initiator may send the first intermediate result data generated in step 605 to the participant corresponding to the initiator through the adaptation interface. The adaptation interface may include various data interfaces capable of supporting data transmission between each participant and the initiator.
In step 608, the participant sends the second intermediate result data to the initiator through the adaptation interface.
In this embodiment, the participant may send the second intermediate result data generated in step 606 to the initiator through the adaptation interface. The adaptation interface may include various data interfaces capable of supporting data transmission between each participant and the initiator.
In step 609, in response to receiving the training feedback data sent by the initiator and the at least one participant, the server generates a training result of the federated learning task indicated by the federated learning request based on the training feedback data.
Step 602, step 603, step 604, and step 609 are respectively consistent with step 201, step 202, step 203, step 204, and their optional implementations in the foregoing embodiment, and the description above of step 201, step 202, step 203, step 204, and their optional implementations also applies to step 602, step 603, step 604, and step 609, which is not repeated here.
In some optional implementations of this embodiment, the server may further support functions including registration and authentication of the initiator and participants. Optionally, the server may further support federated learning training scheduling, training metric statistics, training state monitoring, and the like. Optionally, the server may also take the role of coordinator (arbiter) in some algorithms (e.g., the logistic regression algorithm).
In the system for heterogeneous federated learning provided by the above embodiment of the present application, the server converts the federated learning request sent by the initiator into federated learning subtasks distributed to each participant; the initiator and the participants train based on their local data and the received federated learning subtasks; and the communication data and communication flow between the participants and the initiator are standardized through the adaptation interface. This provides a technical basis for data communication between heterogeneous federated learning parties that adopt different federated learning architectures: each participant can achieve fast cross-platform cooperation by converting between its local communication data format and the standard data format, realizing data interconnection between heterogeneous federated learning participants. Moreover, the server generates the training result of the federated learning task indicated by the federated learning request according to the training feedback data obtained by the initiator and the participants from the exchanged intermediate results, providing unified scheduling for heterogeneous federated learning; the technical boundaries of the roles in the system are clear, which facilitates error localization when subsequent training abnormalities occur. Furthermore, by means of cloud service technology, a scalable and highly available heterogeneous federated learning system can be provided, improving the compatibility of the federated learning platform.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device 700 (e.g., server 104 of fig. 1) suitable for use in implementing embodiments of the present application. The server shown in fig. 7 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 7, electronic device 700 may include a processing device (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 702 or a program loaded from storage 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, etc.; an output device 707 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication device 709, or installed from the storage 708, or installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present application.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the server, cause the server to: receive a federated learning request sent by an initiator; generate at least two federated learning subtasks according to the federated learning request; send the at least two federated learning subtasks to the initiator and at least one participant, respectively, according to the state of the at least one participant corresponding to the federated learning request; and in response to receiving training feedback data sent by the initiator and the at least one participant, generate a training result of the federated learning task indicated by the federated learning request based on the training feedback data, where the training feedback data is generated based on intermediate result data transmitted between the initiator and the at least one participant through the adaptation interface.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as "C", Python, or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including a receiving unit, a first generating unit, a distribution unit, and a second generating unit. The names of these units do not, in some cases, limit the units themselves; for example, the receiving unit may also be described as "a unit that receives a federated learning request sent by an initiator".
The foregoing description is only a preferred embodiment of the present disclosure and an illustration of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.