CN113486332A - Computing node, privacy computing system and loading method of algorithm engine - Google Patents

Computing node, privacy computing system and loading method of algorithm engine

Info

Publication number
CN113486332A
CN113486332A
Authority
CN
China
Prior art keywords
algorithm engine
computing
specified
loaded
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110831987.1A
Other languages
Chinese (zh)
Other versions
CN113486332B (en)
Inventor
王一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huakong Tsingjiao Information Technology Beijing Co Ltd
Original Assignee
Huakong Tsingjiao Information Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huakong Tsingjiao Information Technology Beijing Co Ltd
Priority to CN202110831987.1A
Publication of CN113486332A
Application granted
Publication of CN113486332B
Legal status: Active
Anticipated expiration

Abstract

The application discloses a computing node, a privacy computing system and a loading method of an algorithm engine. The computing node comprises: a data input module for receiving ciphertext data sent by a data source node; a control module for receiving task configuration information of a specified task sent by a control node; a computing module for calling a specified algorithm engine from the loaded algorithm engines to compute the specified task based on the ciphertext data, obtaining a computing result; and a data output module for sending the computing result to a result receiver. The control module is further used for dynamically loading a new algorithm engine dynamic library file when one is received, and reporting capability information indicating that the new algorithm engine is loaded to the control node. By adopting the scheme, both the cost and the implementation difficulty of using a privacy algorithm provided by a third party are reduced.

Description

Computing node, privacy computing system and loading method of algorithm engine
Technical Field
The application relates to the technical field of privacy computation, in particular to a computing node, a privacy computation system and a loading method of an algorithm engine.
Background
For privacy computing technology, in practical applications there are many different privacy computing products, which may be privacy computing platforms or privacy computing systems, and each product may have its own technical route or an algorithm library for a certain scenario.
Although many privacy computing products are available to users who need privacy computing services, the algorithm engines of these products cannot be adapted to one another. In a certain scenario, a user may need algorithm A, provided by privacy computing product A, to solve one kind of problem, and algorithm B, provided by privacy computing product B, to solve another. However, because of interface standardization or key-technology protection reasons, the privacy algorithms provided by product A and product B cannot run together on the same platform; to use the algorithms of both products for different problems, the user may need to install both privacy computing products at the same time and integrate both into their own system.
As can be seen from the above related art, when a user wants to use a privacy algorithm provided by a third party, the cost is high and the implementation difficulty is large.
Disclosure of Invention
The embodiment of the application provides a computing node, a privacy computing system and a loading method of an algorithm engine, which are used for solving the problems that in the prior art, when a user wants to use a privacy algorithm provided by a third party, the cost is high, and the implementation difficulty is high.
An embodiment of the present application provides a computing node, including:
the data input module is used for receiving ciphertext data sent by a data source node, wherein the ciphertext data is required by calculation aiming at a specified task;
the control module is used for receiving task configuration information of the specified task sent by the control node, wherein the task configuration information represents a specified algorithm engine used for calculating the specified task;
the computing module is used for calling the specified algorithm engine to compute the specified task from the loaded algorithm engines based on the ciphertext data to obtain a computing result;
the data output module is used for sending the calculation result to a result receiver;
the control module is further configured to dynamically load a new algorithm engine dynamic library file when the new algorithm engine dynamic library file is received, and report capability information indicating that the new algorithm engine is loaded to the control node, where the new algorithm engine is an algorithm engine indicated by the new algorithm engine dynamic library file, and the new algorithm engine and each loaded algorithm engine both conform to the same calculation interface specification.
Further, the computing node further includes: a plurality of Application Programming Interfaces (APIs);
the computing module is specifically configured to call a specified API that conforms to the specified task from the multiple APIs based on the ciphertext data;
and the specified API is used for calling the specified algorithm engine to calculate the specified task from the loaded algorithm engines based on the ciphertext data after being called by the calculation module, so as to obtain a calculation result.
Further, the computing node further includes: an interface conversion module;
the designated API is specifically configured to call the interface conversion module based on the ciphertext data, where the APIs all conform to a first computing interface specification supported by a first language, and the ciphertext data conforms to a data format required by the first computing interface specification;
the interface conversion module is used for converting the data format of the ciphertext data into a data format meeting the requirement of a second computing interface specification supported by a second language after being called by the API to obtain converted ciphertext data, and calling the specified algorithm engine from the loaded algorithm engines to compute the specified task based on the converted ciphertext data to obtain a computing result, wherein the new algorithm engine and the loaded algorithm engines both meet the second computing interface specification.
Further, the control module is further configured to, in a service initialization process after the computing node is started, obtain preset dynamic library files of each algorithm engine, and load the preset dynamic library files of each algorithm engine.
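The interaction among the four claimed modules and the common compute interface specification can be illustrated with a minimal, self-contained Python sketch. All class and method names here (`ComputeNode`, `Engine`, `SumEngine`, the capability-report shape) are illustrative assumptions, not terminology from the patent; the "ciphertext" is simulated with plain integers.

```python
from abc import ABC, abstractmethod

class Engine(ABC):
    """The common compute interface specification every engine conforms to."""
    @abstractmethod
    def compute(self, ciphertext_data):
        ...

class SumEngine(Engine):
    # Stand-in for a real privacy-computing engine; sums "ciphertext" ints.
    def compute(self, ciphertext_data):
        return sum(ciphertext_data)

class ComputeNode:
    def __init__(self):
        self.engines = {}        # loaded algorithm engines, keyed by name
        self.ciphertext = None
        self.task_config = None

    # Data input module: receive ciphertext data from a data source node.
    def receive_data(self, ciphertext_data):
        self.ciphertext = ciphertext_data

    # Control module: receive task configuration naming the specified engine.
    def receive_task_config(self, config):
        self.task_config = config

    # Control module: dynamically load a new engine, report capability info.
    def load_engine(self, name, engine: Engine):
        self.engines[name] = engine
        return {"node_capability": sorted(self.engines)}  # sent to control node

    # Compute module: call the specified engine on the ciphertext data.
    def compute(self):
        engine = self.engines[self.task_config["engine"]]
        return engine.compute(self.ciphertext)

node = ComputeNode()
node.receive_data([3, 4, 5])
node.receive_task_config({"engine": "sum"})
report = node.load_engine("sum", SumEngine())  # hot-loaded, usable at once
result = node.compute()
print(report, result)  # → {'node_capability': ['sum']} 12
```

Because every engine implements the same `compute` interface, a newly loaded engine needs no extra glue code before the compute module can call it, which is the point of the shared interface specification.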
The embodiment of the application also provides a privacy computing system which comprises any one of the computing nodes and the control node.
The embodiment of the present application further provides a method for loading an algorithm engine for privacy computation, which is applied to a computing node, where each algorithm engine has been loaded in the computing node, and the method includes:
receiving ciphertext data sent by a data source node, wherein the ciphertext data is required for calculation aiming at a specified task;
receiving task configuration information of the specified task sent by a control node, wherein the task configuration information represents a specified algorithm engine for calculating the specified task;
in the process of calling the specified algorithm engine to calculate the specified task from the loaded algorithm engines based on the ciphertext data, when a new algorithm engine dynamic library file is received, dynamically loading the new algorithm engine dynamic library file;
and reporting the capability information indicating that a new algorithm engine is loaded to the control node, wherein the new algorithm engine is represented by the dynamic library file of the new algorithm engine, and the new algorithm engine and each loaded algorithm engine both conform to the same computing interface specification.
Further, the method further comprises:
acquiring preset dynamic library files of each algorithm engine in the service initialization process after the computing node is started;
and loading the preset dynamic library file of each algorithm engine.
An embodiment of the present application further provides a loading apparatus for an algorithm engine for privacy computation, which is applied to a computing node, where each algorithm engine has been loaded in the computing node, and the apparatus includes:
the data receiving module is used for receiving ciphertext data sent by a data source node, wherein the ciphertext data is required by calculation aiming at a specified task;
the task receiving module is used for receiving task configuration information of the specified task sent by the control node, wherein the task configuration information represents a specified algorithm engine used for calculating the specified task;
the file loading module is used for dynamically loading a new algorithm engine dynamic library file when the new algorithm engine dynamic library file is received in the process of calling the specified algorithm engine to calculate the specified task from the loaded algorithm engines based on the ciphertext data;
and the information reporting module is used for reporting capability information indicating that the new algorithm engine is loaded to the control node, wherein the new algorithm engine is represented by the dynamic library file of the new algorithm engine, and the new algorithm engine and each loaded algorithm engine both conform to the same computing interface specification.
Further, the file loading module is further configured to obtain preset dynamic library files of each algorithm engine in a service initialization process after the computing node is started;
and loading the preset dynamic library file of each algorithm engine.
Embodiments of the present application also provide a computing node, comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being caused by the machine-executable instructions to implement any one of the above loading methods of an algorithm engine for privacy computation.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any one of the above-mentioned loading methods for an algorithm engine for privacy computation.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a computer, cause the computer to perform any of the above-described methods for loading an algorithm engine for privacy computation.
The beneficial effect of this application includes:
In the scheme provided by the embodiment of the application, the computing node comprises a data input module, a control module, a computing module and a data output module. The data input module is used for receiving ciphertext data sent by the data source node, the ciphertext data being data required for computing a specified task; the control module is used for receiving task configuration information of the specified task sent by the control node, the task configuration information representing a specified algorithm engine for calculating the specified task; the computing module is used for calling the specified algorithm engine from the loaded algorithm engines to compute the specified task based on the ciphertext data, obtaining a computing result; the data output module is used for sending the calculation result to the result receiver; and the control module is also used for dynamically loading a new algorithm engine dynamic library file when one is received, and reporting capability information indicating that the new algorithm engine is loaded to the control node, where the new algorithm engine is the algorithm engine indicated by the new algorithm engine dynamic library file, and the new algorithm engine and each loaded algorithm engine conform to the same computing interface specification. It can be seen that a plurality of algorithm engines can be loaded on a computing node; during a computing task the algorithm engines are called by the computing module to perform computation, and meanwhile, when a new algorithm engine dynamic library file is received, the control module can dynamically load it, so that the new algorithm engine is loaded. Since the new algorithm engine and each loaded algorithm engine conform to the same computing interface specification, the new algorithm engine can be used as soon as it is loaded.
Therefore, by adopting the scheme, the available new algorithm engine is dynamically loaded in the process of calling the loaded algorithm engine to perform task calculation, and a privacy calculation product with the new algorithm engine does not need to be additionally installed, so that the cost of using the privacy algorithm provided by a third party by a user is reduced, and the implementation difficulty is also reduced.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application and not to limit the application. In the drawings:
fig. 1-1 is a schematic structural diagram of a compute node according to an embodiment of the present application;
fig. 1-2 are schematic structural diagrams of a computing node according to another embodiment of the present application;
fig. 1-3 are schematic structural diagrams of a computing node according to another embodiment of the present application;
FIG. 2-1 is a schematic structural diagram of a privacy computing system provided by an embodiment of the present application;
FIG. 2-2 is a schematic block diagram of a privacy computing system according to another embodiment of the present application;
FIG. 3 is a flowchart of a loading method of an algorithm engine for privacy computation according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a loading apparatus of an algorithm engine for privacy computation according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computing node according to another embodiment of the present application.
Detailed Description
In order to provide an implementation scheme that reduces the cost and implementation difficulty of using a privacy algorithm provided by a third party, the embodiments of the present application provide a computing node, a privacy computing system and a loading method of an algorithm engine. Preferred embodiments of the present application are described below in conjunction with the accompanying drawings of the specification; it should be understood that the preferred embodiments described herein are only used for explaining the present application and are not used for limiting it. The embodiments and the features of the embodiments in the present application may be combined with each other without conflict.
An embodiment of the present application provides a computing node, as shown in fig. 1-1, including:
the data input module 11 is configured to receive ciphertext data sent by a data source node, where the ciphertext data is data required for performing calculation for a specified task;
the control module 12 is configured to receive task configuration information of a specified task sent by the control node, where the task configuration information indicates a specified algorithm engine for calculating the specified task;
the calculation module 13 is configured to invoke a specified algorithm engine from each loaded algorithm engine to calculate the specified task based on the ciphertext data, so as to obtain a calculation result;
a data output module 14, configured to send the calculation result to a result receiving party;
the control module 12 is further configured to, when receiving a new algorithm engine dynamic library file, dynamically load the new algorithm engine dynamic library file, and report capability information indicating that the new algorithm engine is loaded to the control node, where the new algorithm engine is an algorithm engine indicated by the new algorithm engine dynamic library file, and the new algorithm engine and each loaded algorithm engine all conform to the same calculation interface specification.
By adopting the computing node shown in fig. 1-1 provided in the embodiment of the present application, a plurality of algorithm engines can be loaded, and when a computing task is performed, the computing module calls the algorithm engines to perform computing; meanwhile, when a new algorithm engine dynamic library file is received, the control module can dynamically load it, so that a new algorithm engine is loaded, and since the new algorithm engine and each loaded algorithm engine conform to the same computing interface specification, the new algorithm engine can be used after being loaded. Therefore, by adopting the scheme, an available new algorithm engine is dynamically loaded while the loaded algorithm engines are being called for task calculation, and a privacy computing product containing the new algorithm engine does not need to be additionally installed, so that both the cost and the implementation difficulty of using a third-party privacy algorithm are reduced.
In an embodiment of the application, the above-mentioned computing node, as shown in fig. 1-2, may further include: a plurality of APIs (Application Programming Interfaces) 15;
correspondingly, the calculation module 13 may be specifically configured to call a specified API 15 that meets a specified task from the plurality of APIs 15 based on the ciphertext data;
and the specified API 15 is configured to, after being called by the computation module 13, call a specified algorithm engine from the loaded algorithm engines to compute the specified task based on the ciphertext data, and obtain a computation result.
The APIs 15 may include interfaces dedicated to privacy computation as well as operation procedures common to both plaintext and ciphertext, such as basic operations (addition, subtraction, multiplication, division, and the like) for various data types; based on a specified task, the specified API 15 that meets that task is called from among the APIs 15.
In practical applications, to facilitate the development of algorithm code by users, the APIs 15 may be implemented based on a first language, such as the Python language, while the algorithm engines are often implemented using a second language with faster execution, such as the C++ language. Therefore, in an embodiment of the present application, as shown in fig. 1-3, the computing node may further include: an interface conversion module 16;
correspondingly, the specified API 15 may be specifically configured to call the interface conversion module 16 based on the ciphertext data, where the plurality of APIs 15 all conform to a first computing interface specification supported by the first language, and the ciphertext data conforms to the data format required by the first computing interface specification;
and the interface conversion module 16 is configured to, after being called by the API 15, convert the data format of the ciphertext data into a data format meeting the requirement of a second computing interface specification supported by the second language, obtain converted ciphertext data, and call the specified algorithm engine from each loaded algorithm engine to compute the specified task based on the converted ciphertext data, to obtain a computation result, where the new algorithm engine and each loaded algorithm engine both meet the second computing interface specification.
As shown in figs. 1-3, the computing node executes algorithm code through a three-layer processing logic composed of the APIs 15, the interface conversion module 16, and the algorithm engines. The APIs 15 are implemented based on the Python language and conform to the Python-language interface specification; each ciphertext-based algorithm engine is implemented in C++, that is, the interfaces provided by each algorithm engine are all C++ interfaces. A conversion layer, namely the interface conversion module 16, is added between the algorithm engines and the APIs 15; it provides conversion between the APIs 15 and the interfaces provided by each algorithm engine, i.e., data-format conversion, and also provides an algorithm engine selection function.
At the API level, the ciphertext data operated on by each API 15 conforms to a uniform language interface specification, independent of the particular algorithm engine. Internally, each algorithm engine contains ciphertext types determined by its own algorithm; the ciphertext types differ across engines, and the memory allocation rule of each ciphertext type is determined by its engine, but the externally exposed names of the data types defined in the engines are the same.
The algorithm engines are loaded into the computing node as dynamic libraries, and which algorithm engine a task uses to complete its computation is determined by the task configuration. At run time, the interface conversion module 16 finds the specified algorithm engine and calls the interface it provides, thereby realizing the computation of the specified task.
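The conversion layer's two jobs — format conversion between the two interface specifications and engine selection from the task configuration — can be sketched as follows. This is a hedged illustration only: the first-spec data is modeled as a Python list, the second-spec (engine-side) format as a byte buffer, and the engine bodies are toy stand-ins, not the patent's actual engines.

```python
# Toy registry of loaded engines; each one exposes the same second-spec
# entry point f(buffer: bytes) -> bytes. The engine names mirror the
# examples in the text (ss4, homo), but the bodies are placeholders.
ENGINES = {
    "ss4": lambda buf: bytes(b ^ 0xFF for b in buf),  # stand-in computation
    "homo": lambda buf: buf[::-1],                    # stand-in computation
}

def to_engine_format(values):
    """Convert first-spec (API-side) data into the second-spec byte buffer."""
    return bytes(values)

def from_engine_format(buf):
    """Convert an engine-side result back into the first-spec format."""
    return list(buf)

def call_engine(task_config, values):
    """Conversion layer: select the specified engine, convert, call, convert back."""
    engine = ENGINES[task_config["engine"]]  # engine selection from task config
    converted = to_engine_format(values)     # data-format conversion
    return from_engine_format(engine(converted))

out = call_engine({"engine": "ss4"}, [0, 1, 255])
print(out)  # → [255, 254, 0]
```

Because the API side only ever sees the first-spec format, swapping or adding an engine changes nothing above the conversion layer, which is why the text can standardize the bottom interface once for all engines.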
In the embodiment of the present application, each algorithm engine may be various implementable algorithm engines, and for example, may include: ss4, 2pc, homo, sgx, etc.
In the embodiment of the present application, each algorithm engine may be loaded into the computing node in a dynamic loading manner, or in a preset manner: the control module 12 may also be used to obtain the preset dynamic library files of each algorithm engine and load them during the service initialization process after the computing node is started.
Therefore, the computing node provided by the embodiment of the application can load preset algorithm engine dynamic library files during startup to load the corresponding algorithm engines, and can also dynamically load new algorithm engine dynamic library files while calling algorithm engines for computation, to load the corresponding new algorithm engines.
Moreover, a standard computing interface specification is provided for each algorithm engine; by standardizing the underlying computing interface, a third-party algorithm provider only needs to pay attention to that computing interface, so both the cost and the implementation difficulty of using a third-party privacy algorithm are reduced.
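On a typical Linux deployment, loading an engine dynamic library at startup or at run time maps onto `dlopen`; the Python sketch below uses `ctypes.CDLL` (which wraps `dlopen`) and simply skips libraries that are absent, so a node can start with only its preset engines and pick up new `.so` files later. The `/opt/engines/…` paths are hypothetical examples, not paths from the patent.

```python
import ctypes
import os

def load_engine_libraries(paths):
    """Try to dlopen each engine dynamic library; return {name: handle}.

    Missing or unloadable files are skipped rather than raising, so the
    node's reported capability simply reflects what actually loaded.
    """
    loaded = {}
    for path in paths:
        if not os.path.exists(path):
            continue  # engine not installed (yet)
        name = os.path.splitext(os.path.basename(path))[0]
        try:
            loaded[name] = ctypes.CDLL(path)  # dlopen under the hood
        except OSError:
            continue  # broken library: leave capability unchanged
    return loaded

# Hypothetical preset engine libraries (paths are illustrative):
engines = load_engine_libraries(["/opt/engines/ss4.so", "/opt/engines/homo.so"])
print(sorted(engines))  # whichever subset exists on this machine
```

The same function serves both loading moments described above: called once with the preset paths during service initialization, and called again with a single new path whenever a new engine dynamic library file arrives.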
Based on the computing node provided in the embodiment of the present application, the embodiment of the present application further provides a privacy computing system, as shown in fig. 2-1, including the computing node 21 and the control node 22 of any embodiment, where the functions and data-plane interaction flows between the computing node 21 and the control node 22 have been described in detail above and are not repeated.
As shown in fig. 2-2, the privacy computing system includes a computing node 21 and a control node 22, where the control node 22 is further connected to a client device 23, and the computing node 21 is connected to a data source node 24 and a result receiver 25, respectively, where:
the client device 23, as the device used by a user, can submit a task request to the control node 22 when privacy computation is needed, to request calculation of a specified task;
after receiving the task request, the control node 22 determines the computing resources and data nodes participating in the current computation according to the information of the data source node 24 and the result receiver 25 specified in the task request and the type of the computing nodes, that is, determines the computing node 21, the data source node 24 and the result receiver 25 participating in the current computation; the interface of the control node 22 may display the types of algorithm engines supported by all currently accessed computing nodes 21, and after one of the types is selected, the control node 22 may select a computing node 21 with that type of algorithm to participate in the task calculation;
the control node 22 issues the task information to each determined participating node in the form of a configuration string, and obtains the calculation state of each node;
after receiving the task configuration, the data source node 24 sends the data specified in the task configuration to the computing node 21, and may send it as ciphertext;
after receiving the task configuration, the computing node 21 starts to execute the algorithm code specified in the task configuration; if required data is missing while executing the algorithm code, the computing node waits for the data to arrive before computing, where the detailed process of the task computation, i.e., the data-plane flow, has been described in detail in the description of the computing node;
after the computing node 21 completes the specified task computation in the task configuration, it notifies the control node 22, and the control node 22 forwards the task-computation-completion message to the result receiver 25;
after receiving the task-computation-completion message, the result receiver 25 acquires the specified calculation result from the computing node 21;
after the computing node 21 completes all calculations and all calculation results have been taken away, it notifies the control node 22, the result receiver 25 and the data source node 24 with a task end flag;
after the result receiver 25 and the data source node 24 obtain the task end flag, the computing node 21 and the control node 22 end the task.
As can be seen from the above control-plane processing flow, the control plane is related neither to the specific type of a computing node nor to the specific computation contents. In a privacy computing system, any type of computing node and computing task can be driven by the control flow described above.
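Because the control plane is engine-agnostic, the whole task lifecycle above can be modeled as a small state machine over control-plane events; the state and event names below are illustrative assumptions chosen to mirror the steps in the text.

```python
# Task lifecycle on the control node, independent of engine type.
TRANSITIONS = {
    ("created", "task_request"): "configured",       # client submits the task
    ("configured", "data_sent"): "computing",        # data source ships ciphertext
    ("computing", "compute_done"): "results_ready",  # compute node finishes
    ("results_ready", "results_fetched"): "ended",   # receiver collects results
}

def run_task(events, state="created"):
    """Drive a task through the control-plane state machine."""
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

final = run_task(["task_request", "data_sent", "compute_done", "results_fetched"])
print(final)  # → ended
```

Any computing node and any computation plug into this same transition table, which is exactly the engine-independence the paragraph above claims for the control plane.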
Based on the foregoing computing node provided in the embodiment of the present application, an embodiment of the present application further provides a method for loading an algorithm engine for privacy computation, which is applied to a computing node, where each algorithm engine has been loaded in the computing node, as shown in fig. 3, the method includes:
and 31, receiving ciphertext data sent by the data source node, wherein the ciphertext data is required for calculation aiming at the specified task.
And step 32, receiving task configuration information of the specified task sent by the control node, wherein the task configuration information represents a specified algorithm engine for calculating the specified task.
And step 33, in the process of calling the specified algorithm engine from the loaded algorithm engines to calculate the specified task based on the ciphertext data, dynamically loading a new algorithm engine dynamic library file when receiving the new algorithm engine dynamic library file.
And step 34, reporting the capability information indicating that the new algorithm engine is loaded to the control node, wherein the new algorithm engine is the algorithm engine indicated by the dynamic library file of the new algorithm engine, and the new algorithm engine and each loaded algorithm engine all accord with the same calculation interface specification.
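The capability report of step 34 can be as simple as a message listing the engines now loaded on the node, so the control node can route future tasks accordingly; the JSON field names here are illustrative assumptions, not a format specified by the patent.

```python
import json

def capability_report(node_id, loaded_engines):
    """Build the capability message a compute node reports to the control node."""
    return json.dumps({
        "node_id": node_id,
        "engines": sorted(loaded_engines),  # e.g. ["2pc", "homo", "sgx", "ss4"]
    })

# After dynamically loading a new engine, report the updated engine set:
msg = capability_report("node-1", {"ss4", "2pc", "homo"})
print(msg)
```

On the control-node side, this message is what lets the interface display the algorithm engine types supported by each accessed computing node.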
By adopting the loading method of the algorithm engine shown in fig. 3, when the algorithm engine is called to perform task calculation and a new dynamic library file of the algorithm engine is received, the new dynamic library file of the algorithm engine can be dynamically loaded, so that the new algorithm engine is loaded, and the new algorithm engine and each loaded algorithm engine conform to the same calculation interface specification, so that the new algorithm engine can be used after being loaded. Therefore, by adopting the scheme, the available new algorithm engine is dynamically loaded in the process of calling the loaded algorithm engine to perform task calculation, and a privacy calculation product with the new algorithm engine does not need to be additionally installed, so that the cost of using the privacy algorithm provided by a third party by a user is reduced, and the implementation difficulty is also reduced.
Further, the method may further include:
acquiring preset dynamic library files of each algorithm engine in the service initialization process after the computing node is started;
and loading preset dynamic library files of each algorithm engine.
Therefore, by adopting the loading method of the algorithm engine provided by the embodiment of the application, the preset dynamic library files of each algorithm engine can be loaded in the starting process to realize the loading of the corresponding algorithm engine, and the new dynamic library files of the algorithm engine can also be dynamically loaded in the process that the calculation node calls the algorithm engine to calculate to realize the loading of the corresponding new algorithm engine.
Based on the same inventive concept, and corresponding to the loading method of the algorithm engine for privacy computation provided in the foregoing embodiment, another embodiment of the present application further provides a loading apparatus of an algorithm engine for privacy computation, applied to a computing node in which the algorithm engines have been loaded. A schematic structural diagram of the apparatus is shown in fig. 4, and the apparatus specifically includes:
the data receiving module 41, configured to receive ciphertext data sent by a data source node, where the ciphertext data is the data required to perform calculation for a specified task;
the task receiving module 42, configured to receive task configuration information of the specified task sent by the control node, where the task configuration information indicates the specified algorithm engine for calculating the specified task;
the file loading module 43, configured to, in the process of invoking, from the loaded algorithm engines, the specified algorithm engine to calculate the specified task based on the ciphertext data, dynamically load a new algorithm engine dynamic library file when the new algorithm engine dynamic library file is received;
and the information reporting module 44, configured to report, to the control node, capability information indicating that the new algorithm engine is loaded, where the new algorithm engine is the algorithm engine indicated by the new algorithm engine dynamic library file, and the new algorithm engine and each loaded algorithm engine conform to the same computing interface specification.
Further, the file loading module 43 is further configured to acquire the preset dynamic library file of each algorithm engine during service initialization after the computing node is started,
and to load the preset dynamic library file of each algorithm engine.
The functions of the above modules may correspond to the corresponding processing steps in the flow shown in fig. 3, and are not described herein again.
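The four modules of fig. 4 can be sketched as one class whose methods mirror modules 41 through 44. The class and method names are hypothetical, and Python built-ins stand in for loaded engine libraries:

```python
# A minimal sketch of the loading apparatus; each method mirrors one module.
class LoadingApparatus:
    def __init__(self, engines):
        self.engines = dict(engines)  # already-loaded engines: name -> fn
        self.ciphertext = None
        self.task = None

    def data_receiving(self, ciphertext_data):   # module 41
        self.ciphertext = ciphertext_data

    def task_receiving(self, task_config):       # module 42
        self.task = task_config                  # names the specified engine

    def file_loading(self, name, engine_fn):     # module 43
        # Stands in for dynamically loading a new engine dynamic library
        # while computation of the specified task is in progress.
        self.engines[name] = engine_fn

    def info_reporting(self):                    # module 44
        return sorted(self.engines)              # capability information

    def run(self):
        return self.engines[self.task["engine"]](self.ciphertext)

app = LoadingApparatus({"sum": sum})
app.data_receiving([4, 5, 6])
app.task_receiving({"engine": "sum"})
app.file_loading("max", max)       # new engine arrives mid-service
caps = app.info_reporting()
result = app.run()
```

Note that `file_loading` and `run` are independent: loading the new engine does not interrupt the specified task already in flight, which is the behavior the embodiment claims.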
The loading apparatus of the algorithm engine for privacy computation provided by the embodiment of the application can be implemented by a computer program. Those skilled in the art should understand that the above module division is only one of many possible divisions; whether the functions are divided into other modules or not divided at all, the apparatus falls within the scope of the present application as long as it has the above functions.
Based on the same inventive concept, and corresponding to the loading method of the algorithm engine for privacy computation provided in the foregoing embodiment, another embodiment of the present application further provides a computing node, whose structural schematic diagram is shown in fig. 5. The computing node includes a processor 51 and a machine-readable storage medium 52, where the machine-readable storage medium 52 stores machine-executable instructions executable by the processor 51, and the machine-executable instructions cause the processor 51 to implement any one of the above loading methods of the algorithm engine for privacy computation.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any one of the above loading methods of the algorithm engine for privacy computation.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a computer, cause the computer to perform any of the above-described methods for loading an algorithm engine for privacy computation.
The machine-readable storage medium in the computing node may include a Random Access Memory (RAM) and a Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the machine-readable storage medium may also be at least one storage device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the computing node, the computer-readable storage medium, and the computer program product embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

Priority Applications (1)

CN202110831987.1A — priority date 2021-07-22, filing date 2021-07-22 — Computing node, privacy computing system and loading method of algorithm engine
Publications (2)

CN113486332A — published 2021-10-08
CN113486332B — granted 2024-09-10

Family ID: 77942088 (CN)




Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant
