CN111625258A - Merkle tree updating method, device, equipment and readable storage medium - Google Patents

Merkle tree updating method, device, equipment and readable storage medium

Info

Publication number
CN111625258A
Authority
CN
China
Prior art keywords
target
preset
hash coding
coding model
hash
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010453608.5A
Other languages
Chinese (zh)
Other versions
CN111625258B (en)
Inventor
范力欣
吴锦和
张天豫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202010453608.5A
Publication of CN111625258A
Priority to PCT/CN2021/093397 (WO2021233182A1)
Application granted
Publication of CN111625258B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese


Figure 202010453608

The present application discloses a Merkle tree updating method, apparatus, device, and readable storage medium. The Merkle tree updating method includes: acquiring update data and a Merkle tree to be updated, and determining the original data set and the preset hash coding model corresponding to the Merkle tree to be updated; training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model; and updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree. The present application solves the technical problem of low computational efficiency when updating Merkle trees.


Description

Merkle tree updating method, device, equipment and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a Merkle tree updating method, apparatus, device, and readable storage medium.
Background
With the continuous development of financial technologies, especially internet technology and finance, more and more technologies (such as distributed computing, blockchain, artificial intelligence, and the like) are applied to the financial field; at the same time, the financial industry also places higher requirements on these technologies, for example, higher requirements on the distribution of the financial industry's backlog.
With the continuous development of computer software and artificial intelligence, the application field of artificial intelligence is becoming more and more extensive. At present, a Merkle tree is a data structure for rapidly verifying data integrity. Its principle is that, through grouped hashing, hash value matching can quickly be traced from the leaf nodes to the root node, thereby reducing the computational complexity of data queries. However, when a random leaf node of a Merkle tree needs to be updated, all nodes on the tree branch where that random leaf node is located must be updated in addition to the random leaf node itself. As a result, the amount of computation during a Merkle tree update is too large and the computational complexity is too high, so the computational efficiency of the update is too low. The prior art therefore suffers from the technical problem of low computational efficiency when updating a Merkle tree.
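To make the cost described above concrete, here is a minimal sketch of a conventional binary Merkle tree (using SHA-256; all names and sizes are illustrative, not from the patent). Updating a single leaf forces a recomputation of every ancestor on its branch, on the order of log2(n) nodes:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Build a conventional binary Merkle tree; returns all levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def update_leaf(levels, index, new_leaf):
    """Update one leaf and recompute every ancestor on its branch.

    Returns the number of nodes rewritten (1 leaf + log2(n) ancestors)."""
    levels[0][index] = new_leaf
    touched = 1
    for depth in range(1, len(levels)):
        index //= 2
        left = levels[depth - 1][2 * index]
        right = levels[depth - 1][2 * index + 1]
        levels[depth][index] = h(left + right)
        touched += 1
    return touched

leaves = [h(bytes([i])) for i in range(8)]
levels = build_levels(leaves)
# For 8 leaves, one leaf update rewrites the leaf plus 3 ancestors (incl. root).
print(update_leaf(levels, 3, h(b"new block")))  # 4
```

This path-recomputation cost per updated leaf is exactly what the method below avoids by retraining the hash coding model once and touching only the leaf layer.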
Disclosure of Invention
The present application mainly aims to provide a Merkle tree updating method, apparatus, device, and readable storage medium, so as to solve the technical problem of low computational efficiency in Merkle tree updating in the prior art.
To achieve the above object, the present application provides a Merkle tree updating method, including:
acquiring update data and a Merkle tree to be updated, and determining an original data set and a preset hash coding model corresponding to the Merkle tree to be updated;
training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
and updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree.
Optionally, the update data comprises a newly added data block,
the step of updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree includes:
generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash code value corresponding to the newly added data block;
matching a target parent node corresponding to the target random leaf node based on the output hash code value;
and updating the random leaf node layer based on the target random leaf node and the target parent node to obtain the target Merkle tree.
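The parent-matching step above can be sketched as follows. This is a hypothetical illustration: `model` is a toy stand-in for the trained target hash coding model (blocks of the same data category yield the same output hash code value), and `parents_by_code` stands in for the first-layer intermediate nodes; none of these names come from the patent:

```python
def model(block: str) -> str:
    # Stand-in for the target hash coding model: blocks of the same data
    # category map to the same target hash code value.
    category = block.split(":")[0]          # e.g. "cat1:payload"
    return {"cat1": "0101", "cat2": "1100"}[category]

# First-layer intermediate nodes, keyed by the target hash code value they store.
parents_by_code = {"0101": "node_E", "1100": "node_F"}

def insert_leaf(new_block: str):
    code = model(new_block)                 # output hash code value for the new block
    parent = parents_by_code[code]          # match the target parent node
    # Attach the new leaf under `parent`; the branch above it is untouched,
    # because the parent already stores this hash code value.
    return parent, code

print(insert_leaf("cat1:new-data"))  # ('node_E', '0101')
```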
Optionally, the step of training and updating the preset hash coding model based on the update data and the original data set to obtain the target hash coding model includes:
determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
acquiring a target hash code value set corresponding to the data category set, and determining training data corresponding to the target data set and a target hash code value corresponding to the target hash code value set;
and performing iterative training on the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration end condition, and obtaining the target hash coding model.
Optionally, the step of iteratively training the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration end condition, and obtaining the target hash coding model, includes:
inputting the training data into the preset hash coding model to hash-code the training data based on the polarization loss function and obtain an initial hash code value;
calculating a training Hamming distance between the initial hash code value and the target hash code value, and comparing the training Hamming distance with a preset Hamming distance threshold;
if the training Hamming distance is greater than the preset Hamming distance threshold, determining that the preset hash coding model has not reached the preset iteration end condition, and optimizing the polarization loss function based on the initial hash code value;
retraining the preset hash coding model based on the optimized polarization loss function until the training Hamming distance is less than or equal to the preset Hamming distance threshold;
and if the training Hamming distance is less than or equal to the preset Hamming distance threshold, determining that the preset hash coding model has reached the preset iteration end condition, and taking the preset hash coding model as the target hash coding model.
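The iteration loop in the steps above can be sketched as below. This is an illustrative toy, not the patent's training procedure: flipping one mismatched bit per round stands in for one optimize-the-polarization-loss-and-retrain round, and hash codes are represented as {-1, +1} vectors as in the polarization discussion later in the document:

```python
def hamming(a, b):
    """Training Hamming distance: number of positions where two codes differ."""
    return sum(x != y for x, y in zip(a, b))

def train_until_converged(initial_code, target_code, threshold=0, max_iters=100):
    """Loop until the training Hamming distance is <= the preset threshold.

    Each round, one mismatched bit is corrected, standing in for one round of
    optimizing the polarization loss function and retraining the model."""
    code = list(initial_code)
    for _ in range(max_iters):
        if hamming(code, target_code) <= threshold:
            return code            # preset iteration-end condition reached
        i = next(i for i, (x, y) in enumerate(zip(code, target_code)) if x != y)
        code[i] = target_code[i]   # stand-in for one optimize-and-retrain round
    return code

target = [1, -1, 1, 1]
print(train_until_converged([-1, -1, -1, 1], target))  # [1, -1, 1, 1]
```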
Optionally, the step of inputting the training data into the preset hash coding model to hash-code the training data based on the polarization loss function and obtain an initial hash code value includes:
inputting the training data into the preset hash coding model and hashing the training data to obtain a training hash result;
polarizing the training hash result based on the polarization loss function to obtain a polarization result;
and determining the initial hash code value based on the polarization result.
Optionally, the original data set includes one or more original data blocks and a target hash code value corresponding to each original data block, and the Merkle tree to be updated includes one or more random leaf nodes, one or more intermediate nodes, and a root node;
before the step of acquiring the update data and the Merkle tree to be updated and determining the original data set and the preset hash coding model corresponding to the Merkle tree to be updated, the Merkle tree updating method includes:
generating a random leaf node corresponding to each original data block, wherein one original data block corresponds to one random leaf node;
and generating each intermediate node and the root node based on each target hash code value and the preset hash coding model.
Optionally, each of the intermediate nodes includes one or more first-layer intermediate nodes and one or more upper-layer intermediate nodes;
the step of generating each intermediate node and the root node based on each target hash code value and the preset hash coding model includes:
generating a first-layer intermediate node corresponding to each target hash code value, wherein one target hash code value corresponds to one first-layer intermediate node;
and cyclically generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset hash coding model until the root node is obtained.
The present application further provides a Merkle tree updating apparatus, which is a virtual apparatus applied to a Merkle tree updating device, and the Merkle tree updating apparatus includes:
a determining module, used for acquiring update data and a Merkle tree to be updated, and determining an original data set and a preset hash coding model corresponding to the Merkle tree to be updated;
a training module, used for training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
and an updating module, used for updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree.
Optionally, the update module includes:
a hash coding submodule, used for generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash code value corresponding to the newly added data block;
a matching submodule, used for matching a target parent node corresponding to the target random leaf node based on the output hash code value;
and an updating submodule, used for updating the random leaf node layer based on the target random leaf node and the target parent node to obtain the target Merkle tree.
Optionally, the training module comprises:
an extraction submodule, used for determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
and an iterative training submodule, used for performing iterative training on the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration end condition, and obtaining the target hash coding model.
Optionally, the iterative training submodule includes:
a hash coding unit, configured to input the training data into the preset hash coding model to hash-code the training data based on the polarization loss function and obtain an initial hash code value;
a comparison unit, configured to calculate a training Hamming distance between the initial hash code value and the target hash code value, and compare the training Hamming distance with a preset Hamming distance threshold;
a first determination unit, configured to determine that the preset hash coding model has not reached the preset iteration end condition if the training Hamming distance is greater than the preset Hamming distance threshold, and to optimize the polarization loss function based on the initial hash code value;
a retraining unit, configured to retrain the preset hash coding model based on the optimized polarization loss function until the training Hamming distance is less than or equal to the preset Hamming distance threshold;
and a second determination unit, configured to determine that the preset hash coding model has reached the preset iteration end condition if the training Hamming distance is less than or equal to the preset Hamming distance threshold, and to take the preset hash coding model as the target hash coding model.
Optionally, the hash encoding unit includes:
a hash subunit, configured to input the training data into the preset hash coding model and hash the training data to obtain a training hash result;
a polarizing subunit, configured to polarize the training hash result based on the polarization loss function to obtain a polarization result;
and a determining subunit, configured to determine the initial hash code value based on the polarization result.
Optionally, the Merkle tree updating apparatus further includes:
a first generation module, configured to generate a random leaf node corresponding to each original data block, where one original data block corresponds to one random leaf node;
and a second generation module, used for generating each intermediate node and the root node based on each target hash code value and the preset hash coding model.
Optionally, the second generating module includes:
a generation submodule, used for generating first-layer intermediate nodes corresponding to the target hash code values, wherein one target hash code value corresponds to one first-layer intermediate node;
and a cyclic generation submodule, used for cyclically generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset hash coding model until the root node is obtained.
The present application also provides a Merkle tree updating device. The Merkle tree updating device is an entity device and includes: a memory, a processor, and a program of the Merkle tree updating method stored on the memory and executable on the processor, which program, when executed by the processor, implements the steps of the Merkle tree updating method as described above.
The present application also provides a readable storage medium having stored thereon a program for implementing the Merkle tree updating method, which, when executed by a processor, implements the steps of the Merkle tree updating method as described above.
The method includes: acquiring update data and a Merkle tree to be updated; determining an original data set and a preset hash coding model corresponding to the Merkle tree to be updated; training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model; and updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree. That is, the present application provides a to-be-updated Merkle tree constructed based on a preset hash coding model. When the to-be-updated Merkle tree needs to be updated, the update data and the original data set corresponding to the to-be-updated Merkle tree are obtained; the preset hash coding model is then trained and updated based on the update data and the original data set to obtain a target hash coding model; and the random leaf node layer of the to-be-updated Merkle tree is updated based on the target hash coding model and the update data to obtain the target Merkle tree.
That is, when updating the to-be-updated Merkle tree, no matter how much data on the random leaf nodes is updated, the preset hash coding model only needs to be trained and updated once; the random leaf node layer of the to-be-updated Merkle tree is then updated based on the hash coding model obtained from that training, which completes the update of the Merkle tree. Compared with current Merkle tree updating methods, when a random leaf node of the Merkle tree needs to be updated, the update can be completed by updating only the random leaf node layer, without updating all nodes on the tree branch where the random leaf node is located. This reduces the amount of computation and the computational complexity of a Merkle tree update, and thereby solves the technical problem of low computational efficiency when updating Merkle trees.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can also obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of the Merkle tree updating method of the present application;
FIG. 2 is a schematic diagram of a binary Merkle tree in the Merkle tree updating method of the present application;
FIG. 3 is a schematic flow chart of a second embodiment of the Merkle tree updating method of the present application;
FIG. 4 is a schematic structural diagram of a device in the hardware operating environment involved in an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the Merkle tree updating method of the present application, referring to fig. 1, the Merkle tree updating method includes:
Step S10, acquiring update data and a Merkle tree to be updated, and determining an original data set and a preset hash coding model corresponding to the Merkle tree to be updated;
In this embodiment, it should be noted that the original data set includes one or more original data blocks, each original data block corresponds to a data category, and each data category corresponds to a target hash code value. The target hash code value is obtained by hash-coding the original data blocks of the corresponding data category based on a preset hash coding method, where the preset hash coding method includes random target hash coding, adaptive target hash coding, and the like. Note that the Hamming distance between any two original data blocks within a data category is smaller than or equal to a preset first Hamming distance threshold. For example, if the preset first Hamming distance threshold is 1, original data block A is 0101010111, and original data block B is 0101010101, then the Hamming distance between original data block A and original data block B is 1, and it is judged that original data block A and original data block B belong to the same data category.
Additionally, it should be noted that the update data is newly added data to be added to the to-be-updated Merkle tree, where the newly added data includes one or more newly added data blocks, and the to-be-updated Merkle tree is a data structure storing the original data set. The to-be-updated Merkle tree includes a random leaf node layer, intermediate node layers, and a root node, where the random leaf node layer includes one or more random leaf nodes and each intermediate node layer includes one or more intermediate nodes. As shown in fig. 2, a diagram of a binary Merkle tree, the single node at the top layer is the root node, nodes 1 through 8 at the bottom layer are all random leaf nodes, and the nodes between the top layer and the bottom layer are the intermediate nodes. A random leaf node is a node for storing an original data block, and one random leaf node corresponds to one original data block. The intermediate nodes and the root node are nodes for storing target model output values of the preset hash coding model, where a target model output value is a hash code value obtained based on the preset hash coding model and its model input value. In the to-be-updated Merkle tree, the model input value corresponding to the model output value stored in each layer of nodes is the hash code value stored in the previous layer, and the model input value corresponding to the model output value stored in the root node is the hash code value of the last intermediate node layer. It should be noted that, for the preset hash coding model, original data blocks belonging to the same data category, when input into the preset hash coding model, yield the same output hash code value, which is the target hash code value corresponding to that data category; similarly, model input values belonging to the same category all yield the same model output value. For example, assume that in the to-be-updated Merkle tree a node layer M includes 4 nodes (A, B, C, D) and its previous node layer N includes 2 nodes (E, F), where nodes A and B point to node E and nodes C and D point to node F; if the preset hash coding model is H, then H(A) = E, H(B) = E, H(C) = F, and H(D) = F.
Update data and the to-be-updated Merkle tree are obtained, and an original data set and a preset hash coding model corresponding to the to-be-updated Merkle tree are determined. Specifically, each newly added data block and the to-be-updated Merkle tree are extracted from a preset storage database; the original data block corresponding to each random leaf node in the to-be-updated Merkle tree and the target hash code value corresponding to each original data block are determined; and the preset hash coding model used for constructing the to-be-updated Merkle tree is obtained. The to-be-updated Merkle tree is constructed based on the preset hash coding model, the original data blocks, and the target hash code values. That is, a random leaf node layer corresponding to the original data blocks is first generated, and a first intermediate node layer corresponding to the target hash code values is generated, where the first intermediate node layer is the adjacent previous node layer of the random leaf node layer. Then, each target hash code value is used as the input of the preset hash coding model to obtain the corresponding model output values, and a second intermediate node layer corresponding to these model output values is generated, where the second intermediate node layer is the adjacent previous node layer of the first intermediate node layer. The model output values are in turn used as the input of the preset hash coding model, and the generation of intermediate node layers proceeds cyclically until the root node is obtained, which completes the construction of the to-be-updated Merkle tree.
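The bottom-up construction just described can be sketched as follows. Everything here is an illustrative assumption: `model` is a toy lookup table standing in for the preset hash coding model, with same-category values mapping to the same output, and duplicate outputs collapsing into one upper-layer node:

```python
def model(value: str) -> str:
    # Toy hash coding model: same-category inputs yield the same output value.
    table = {"t1": "u1", "t2": "u1", "t3": "u2", "t4": "u2",
             "u1": "root", "u2": "root"}
    return table[value]

def build_tree(target_codes):
    """Generate the first intermediate layer from the target hash code values,
    then cyclically generate upper layers from model outputs until one node
    (the root) remains."""
    layers = [list(target_codes)]             # first-layer intermediate nodes
    while len(layers[-1]) > 1:
        outputs = [model(v) for v in layers[-1]]
        # Same-category values collapse into a single upper-layer node.
        upper = list(dict.fromkeys(outputs))  # dedupe, preserving order
        layers.append(upper)
    return layers

layers = build_tree(["t1", "t2", "t3", "t4"])
print(layers[-1])  # ['root']
```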
Step S20, training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
In this embodiment, the preset hash coding model is trained and updated based on the update data and the original data set to obtain a target hash coding model. Specifically, each newly added data block is added to the original data set, and a corresponding target hash code value is matched for each newly added data block to obtain the target data set. Training data and a target hash code value set corresponding to the training data are selected from the target data set, where the training data includes one or more training data blocks in the target data set, and the target hash code value set includes the target hash code value corresponding to each training data block. The preset hash coding model is then iteratively trained and updated based on the target hash code value set and the training data until a preset iteration end condition is met, and the target hash coding model is obtained.
The step of training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model includes:
Step S21, determining a target data set based on the update data and the original data set, and determining a data category set corresponding to the target data set;
in this embodiment, it should be noted that the target data set includes one or more training data blocks, and the data category set includes data categories corresponding to the training data blocks.
Step S22, acquiring a target hash code value set corresponding to the data category set, and determining training data corresponding to the target data set and a target hash code value corresponding to the target hash code value set;
In this embodiment, it should be noted that the target hash code value set includes the target hash code values corresponding to the data categories.
A target hash code value set corresponding to the data category set is acquired, and training data corresponding to the target data set and the target hash code value corresponding to the target hash code value set are determined. Specifically, the target hash code value corresponding to each data category is acquired, the training data is extracted from the target data set, and the target hash code value corresponding to the training data is extracted from the target hash code value set.
Preferably, a preset number of data blocks are extracted from each data category respectively to serve as training data blocks, each training data block is used as the training data, and the target hash code value corresponding to the data category of each training data block is determined in the target hash code value set.
Step S23, performing iterative training on the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration end condition, and obtaining the target hash coding model.
In this embodiment, it should be noted that the preset iteration end condition includes reaching a preset iteration number threshold, convergence of the polarization loss function, and the like, and the iterative training includes one or more rounds of training.
The preset hash coding model is iteratively trained based on the training data and the target hash code value to optimize the polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches the preset iteration end condition, and the target hash coding model is obtained. Specifically, a training data block is extracted from the training data and input into the preset hash coding model, and the preset hash coding model is trained and updated based on the target hash code value corresponding to that training data block. It is then judged whether the trained and updated preset hash coding model meets the preset iteration end condition: if it does, the trained and updated preset hash coding model is taken as the target hash coding model; if it does not, the initial hash code value of the current round of training is acquired, the polarization loss function is optimized based on the initial hash code value and the target hash code value, and the optimized preset hash coding model is trained and updated again until the trained and updated preset hash coding model meets the preset iteration end condition.
The step of performing iterative training on the preset hash coding model based on the training data and the target hash code value to optimize a polarization loss function corresponding to the preset hash coding model until the preset hash coding model reaches a preset iteration end condition includes:
Step S221, inputting the training data into the preset hash coding model, and hash-coding the training data based on the polarization loss function to obtain an initial hash code value;
In this embodiment, the training data is input into the preset hash coding model to hash-code the training data based on the polarization loss function and obtain an initial hash code value. Specifically, a training data block is extracted from the training data, and the to-be-processed training matrix corresponding to the training data block is input into the preset hash coding model, where the to-be-processed training matrix is the matrix representation of the training data block and is used to store its data. The to-be-processed training matrix is hashed to obtain a hash vector; each bit of the hash vector is then forcibly polarized based on the polarization loss function to obtain a polarization vector corresponding to the hash vector; and the initial hash code value is generated based on the polarization identifier corresponding to each bit in the polarization vector. The polarization loss function is as follows:
L(v, t^c) = max(m - v * t^c, 0)
where L is the polarization loss function, m is a preset forced polarization parameter, v is the value at each hash vector bit of the hash vector (the absolute value of v is required to be greater than m), and t^c is the target hash value corresponding to that hash vector bit, that is, the bit value at the corresponding position of the target hash code value corresponding to the training data block, with t^c in {-1, +1}. The polarization loss function converges to 0. For example, assume m = 1, t^c = 1, and v = -1; then L = 2. For the polarization loss function to converge to 0, v must be forcibly polarized so that v = 1, at which point L = 0. Thus, when t^c = 1, the value at the hash vector bit gradually moves away from 0 in the positive direction, and when t^c = -1, the value at the hash vector bit gradually moves away from 0 in the negative direction. After successful polarization, the polarization identifier of each bit in the obtained polarization vector is consistent with the corresponding target hash value. Further, because the target hash code values of the same data category are identical, the polarization identifiers on each bit of the polarization vectors corresponding to the training data blocks belonging to the same data category are consistent, and the model output values obtained based on these polarization identifiers are consistent; that is, for model input data belonging to the same data category, the trained preset hash coding model outputs the same model output value.
Additionally, it should be noted that each bit in the hash vector corresponds to one polarization output channel of the preset hash coding model, and the preset forced polarization parameter of each channel is obtained by training the model, so the parameters of different channels may be the same or different. A polarization output channel is configured to force-polarize the value on its hash vector bit through the corresponding polarization loss function, based on the preset forced polarization parameter, and to output the coded value of the corresponding bit in the initial hash coding value.
Wherein the step of inputting the training data into the preset hash coding model to perform hash coding on the training data based on the polarization loss function to obtain an initial hash coding value includes:
step A10, inputting the training data into the preset hash coding model, and hashing the training data to obtain a training hash result;
in this embodiment, it should be noted that the preset hash coding model includes a hidden layer and a hash layer. The hidden layer is one or more neural network layers that perform data processing such as convolution and pooling, and the hash layer is one or more neural network layers that perform hashing.
The training data is input into the preset hash coding model and hashed to obtain the training hash result. Specifically, a training data block is extracted from the training data, and the corresponding to-be-processed training matrix is input into the hidden layer, where it undergoes a preset number of convolution and pooling operations to obtain a feature representation matrix. The feature representation matrix is then input into the hash layer and fully connected to obtain a hash vector, which is taken as the training hash result.
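The two-stage forward pass described above can be sketched as follows. The patent's hidden layer performs convolution and pooling; as an assumption for brevity, a single fixed linear map stands in for it here, and all weights are random placeholders:

```python
import random

random.seed(0)

def linear(weights, xs):
    """Dot each weight row with the input vector."""
    return [sum(w * x for w, x in zip(row, xs)) for row in weights]

def hidden_layer(block):
    """Stand-in for the hidden layer: in the patent this is a preset number
    of convolution/pooling operations; a single linear map is an
    illustrative simplification."""
    flat = [x for row in block for x in row]
    W = [[random.uniform(-1, 1) for _ in flat] for _ in range(4)]
    return linear(W, flat)            # feature representation (4 values)

def hash_layer(features, n_bits=3):
    """Fully connected hash layer: one real-valued output per hash bit."""
    W = [[random.uniform(-1, 1) for _ in features] for _ in range(n_bits)]
    return linear(W, features)        # hash vector (n_bits values)

block = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]  # to-be-processed training matrix
hash_vector = hash_layer(hidden_layer(block))
print(len(hash_vector))  # 3
```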
Step A20, polarizing the training hash result based on the polarization loss function to obtain a polarized result;
in this embodiment, the training hash result is polarized based on the polarization loss function to obtain a polarization result, specifically, each bit in the hash vector is polarized based on the polarization loss function to obtain a polarization vector, and the polarization vector is used as the polarization result, for example, assuming that the hash vector is (-1, -8), the polarization vector is obtained as (1, -8).
Step a30, determining the initial hash code value based on the polarization result.
In this embodiment, the initial hash code value is determined based on the polarization result, specifically, a polarization identifier corresponding to each bit in the polarization result is extracted, where the polarization identifier is a positive or negative sign of the bit, and then the initial hash code value is determined based on each polarization identifier, for example, if the polarization result is (1, -8, -7, 0.9), the initial hash code value is 1001.
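The sign-based extraction of the initial hash code value can be sketched as follows, using the same mapping as the example in the text (positive sign to 1, negative sign to 0); the function name is illustrative:

```python
def initial_hash_code(polarization_result):
    """Map each bit's polarization identifier (its sign) to a binary digit:
    a positive value yields '1' and a negative value yields '0'."""
    return ''.join('1' if v > 0 else '0' for v in polarization_result)

# Example from the text: (1, -8, -7, 0.9) -> 1001.
print(initial_hash_code([1, -8, -7, 0.9]))  # 1001
```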
Step S222, calculating a training Hamming distance between the initial Hash code value and the target Hash code value, and comparing the training Hamming distance with a preset Hamming distance threshold value;
in this embodiment, the training Hamming distance between the initial hash coding value and the target hash coding value is calculated and compared with a preset Hamming distance threshold. Specifically, the value on each bit of the initial hash coding value is compared with the value on the corresponding bit of the target hash coding value, the number of bits on which they differ is determined and taken as the training Hamming distance, and that distance is compared with the preset Hamming distance threshold. For example, if the initial hash coding value is the vector (1, 1, 1, 1) and the target hash coding value is the vector (-1, 1, 1, -1), they differ on 2 bits, so the training Hamming distance is 2.
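The bit-count comparison above is the standard Hamming distance; a minimal sketch:

```python
def hamming_distance(a, b):
    """Number of bit positions at which two hash code vectors differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

# Example from the text: (1, 1, 1, 1) vs (-1, 1, 1, -1) differ on 2 bits.
print(hamming_distance((1, 1, 1, 1), (-1, 1, 1, -1)))  # 2
```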
Step S223, if the training hamming distance is greater than the preset hamming distance threshold, determining that the preset hash coding model does not reach the preset iteration end condition, and optimizing the polarization loss function based on the initial hash coding value;
in this embodiment, if the training Hamming distance is greater than the preset Hamming distance threshold, it is determined that the preset hash coding model has not reached the preset iteration end condition, and the polarization loss function is optimized based on the initial hash coding value. Specifically, if the training Hamming distance exceeds the threshold, it is determined that the polarization loss function has not converged on all bits of the to-be-hashed training matrix, i.e., the polarization loss function has not converged, and therefore the preset hash coding model has not reached the preset iteration end condition. One or more bits on which the initial hash coding value and the target hash coding value differ are then identified, the non-converged polarization output channels corresponding to those bits are determined, and the preset forced polarization parameters in the polarization loss functions of those channels are adjusted. A non-converged polarization output channel is a polarization output channel whose polarization loss function has not converged. The preset hash coding model includes one or more polarization output channels, and their number is related to the number of bits of the to-be-hashed training matrix, i.e., one bit corresponds to one polarization output channel.
Step S224, retraining the preset Hash coding model based on the optimized polarization loss function until the training Hamming distance is smaller than or equal to the preset Hamming distance threshold value;
in this embodiment, the preset hash coding model is retrained based on the optimized polarization loss function until the training Hamming distance is less than or equal to the preset Hamming distance threshold. Specifically, the to-be-hashed training matrix corresponding to the training data is obtained again, and the preset hash coding model corresponding to the optimized polarization loss function is iteratively trained on it, continuously optimizing the polarization loss function, until the training Hamming distance is less than or equal to the preset Hamming distance threshold.
Step S225, if the training hamming distance is less than or equal to the preset hamming distance threshold, determining that the preset hash coding model reaches the preset iteration end condition, and using the preset hash coding model as the target hash coding model.
In this embodiment, if the training Hamming distance is less than or equal to the preset Hamming distance threshold, it is determined that the preset hash coding model has reached the preset iteration end condition, that is, the polarization loss function corresponding to every polarization output channel in the model has converged, and the preset hash coding model is taken as the target hash coding model.
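The iteration-end logic of steps S222-S225 can be sketched as follows. The model optimization itself is abstracted behind a hypothetical `model_step` callable that returns the current hash vector and improves the model on each call; this is an assumption for illustration, not the patent's training procedure:

```python
def sign_bits(v):
    """Polarization identifiers of a hash vector, as +/-1 per bit."""
    return tuple(1 if x > 0 else -1 for x in v)

def hamming(a, b):
    """Training Hamming distance between two +/-1 code vectors."""
    return sum(x != y for x, y in zip(a, b))

def train_until_converged(model_step, target_code, threshold=0, max_iters=100):
    """Keep optimizing until the training Hamming distance between the
    initial hash code and the target hash code is <= the preset threshold
    (the preset iteration end condition)."""
    for _ in range(max_iters):
        hash_vector = model_step()
        if hamming(sign_bits(hash_vector), target_code) <= threshold:
            return True
    return False

# Toy model_step: each call nudges every bit toward the target code.
state = [-0.5, 0.4, -0.3]
target = (1, 1, -1)
def toy_step():
    for i, t in enumerate(target):
        state[i] += 0.3 * t
    return state

print(train_until_converged(toy_step, target))  # True
```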
Step S30, based on the target hash coding model and the update data, updating the random leaf node layer of the Merkle tree to be updated, and obtaining the target Merkle tree.
In this embodiment, the random leaf node layer of the Merkle tree to be updated is updated based on the target hash coding model and the update data to obtain the target Merkle tree. Specifically, new random leaf nodes corresponding to the newly added data blocks are generated, the newly added data blocks are input into the target hash coding model, and the corresponding output hash coding values are obtained. Based on these output hash coding values, the target parent nodes of the newly added data blocks are determined, and each new random leaf node is connected to its corresponding target parent node to obtain the target Merkle tree.
Further, the model parameters of the target hash coding model are sent to each Merkle tree user, and each user can construct the target Merkle tree based on those model parameters.
Wherein the update data comprises a newly added data block,
the step of updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree comprises the following steps:
step S31, generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash coding value corresponding to the newly added data block;
in this embodiment, it should be noted that the target random leaf node is a Merkle tree node that stores the newly added data block, and the update data includes at least one newly added data block.
Target random leaf nodes corresponding to the newly added data blocks are generated, and the newly added data blocks are input into the target hash coding model to obtain their output hash coding values. Specifically, a target random leaf node is generated for each newly added data block, each block is input into the target hash coding model and hash-coded, and the output hash coding value corresponding to each block is obtained, where newly added data blocks belonging to the same data category correspond to the same output hash coding value.
Step S32, matching a target parent node corresponding to the target random leaf node based on the output hash coding value;
in this embodiment, it should be noted that one target parent node corresponds to one target hash code value.
The target parent nodes corresponding to the target random leaf nodes are matched based on the output hash coding values. Specifically, each output hash coding value is compared with the target hash coding values, the selected target hash coding value identical to each output hash coding value is determined among them, and the Merkle tree node corresponding to each selected target hash coding value is taken as the target parent node of the corresponding target random leaf node.
Step S33, updating the random leaf node layer based on the target random leaf node and the target parent node, and obtaining the target Merkle tree.
In this embodiment, the random leaf node layer is updated based on the target random leaf nodes and the target parent nodes to obtain the target Merkle tree. Specifically, each target random leaf node is connected to its corresponding target parent node, thereby updating the random leaf node layer and obtaining the target Merkle tree.
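Steps S31-S33 can be sketched as a lookup-and-attach loop. The `model` callable standing in for the target hash coding model, and the category-by-first-character toy below, are assumptions for illustration only:

```python
def update_random_leaf_layer(parents_by_code, new_blocks, model):
    """For each newly added data block, compute its output hash coding value
    with the model, look up the target parent node keyed by that value (one
    parent per target hash coding value), and attach a new random leaf."""
    for block in new_blocks:
        code = model(block)
        parents_by_code[code].append(block)  # attach the new random leaf node

# Toy model: the data category (first character) determines the hash code,
# so blocks of the same category share the same output hash coding value.
parents = {'A': [], 'B': []}
update_random_leaf_layer(parents, ['A-block-1', 'A-block-2', 'B-block-1'],
                         model=lambda b: b[0])
print(parents)  # {'A': ['A-block-1', 'A-block-2'], 'B': ['B-block-1']}
```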
In this embodiment, the target Merkle tree is obtained by acquiring the update data and the Merkle tree to be updated, determining the original data set and the preset hash coding model corresponding to that tree, determining a target data set based on the update data and the original data set, training and updating the preset hash coding model based on the target data set to obtain the target hash coding model, and updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data. That is, this embodiment provides a Merkle tree to be updated that is constructed based on a preset hash coding model; when it needs to be updated, the update data and the original data set corresponding to it are obtained, the preset hash coding model is trained and updated based on them to obtain the target hash coding model, and the random leaf node layer of the Merkle tree to be updated is then updated based on the target hash coding model and the update data to obtain the target Merkle tree.
That is, in this embodiment, when updating the Merkle tree to be updated, no matter how much data on the random leaf nodes is updated, only one training update of the preset hash coding model is needed; the random leaf node layer is then updated based on the hash coding model obtained from that training update, which completes the update of the Merkle tree. Compared with current Merkle tree updating methods, when a random leaf node of the Merkle tree needs to be updated, only the random leaf node layer has to be updated, rather than all the nodes of the tree branch where the random leaf node is located. This reduces the amount of computation during a Merkle tree update and thereby its computational complexity, solving the technical problem of low computational efficiency when updating a Merkle tree.
Further, referring to fig. 3, based on the first embodiment in the present application, in another embodiment of the present application, the original data set includes one or more original data blocks and a target hash coding value corresponding to each of the original data blocks, and the Merkle tree to be updated includes one or more random leaf nodes, one or more intermediate nodes, and a root node,
before the step of obtaining the update data and the Merkle tree to be updated and determining the original data set and the preset hash coding model corresponding to the Merkle tree to be updated, the Merkle tree updating method comprises the following steps:
step B10, generating random leaf nodes corresponding to the original data blocks, wherein one original data block corresponds to one random leaf node;
in this embodiment, it should be noted that a random leaf node is a Merkle tree node that stores an original data block.
Step B20, generating each intermediate node and the root node based on each target hash code value and the preset hash code model.
In this embodiment, each of the intermediate nodes includes one or more first-tier intermediate nodes and one or more second-tier intermediate nodes.
Each intermediate node and the root node are generated based on the target hash coding values and the preset hash coding model. Specifically, a first-layer intermediate node is generated for each target hash coding value, where a first-layer intermediate node is a Merkle tree node that stores a target hash coding value and one target hash coding value corresponds to one first-layer intermediate node. Each target hash coding value is then input into the preset hash coding model and hash-coded to obtain the corresponding second-layer hash coding value, where one second-layer hash coding value corresponds to the one or more target hash coding values belonging to the same data category. A second-layer intermediate node is generated for each second-layer hash coding value, the second-layer hash coding values are in turn used as input to the preset hash coding model, and intermediate nodes of further layers are generated in the same way until the model outputs a single hash coding value, for which the root node is generated, thereby completing the construction of the Merkle tree to be updated.
Wherein each of the intermediate nodes comprises one or more first-tier intermediate nodes and one or more upper-tier intermediate nodes;
the step of generating each intermediate node and the root node based on each target hash value and the preset hash coding model includes:
step B21, generating a first-layer intermediate node corresponding to each target hash code value, where one target hash code value corresponds to one first-layer intermediate node;
in this embodiment, it should be noted that a first-layer intermediate node is a Merkle tree node that stores a target hash coding value.
And B22, circularly generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset Hash coding model until the root nodes are obtained.
In this embodiment, the upper-layer intermediate nodes corresponding to the first-layer intermediate nodes are cyclically generated based on the preset hash coding model until the root node is obtained. Specifically, the target hash coding value corresponding to each first-layer intermediate node is input into the preset hash coding model, the second-layer hash coding value corresponding to each target hash coding value is output, and a second-layer intermediate node is generated for each second-layer hash coding value. Intermediate nodes of further layers are then generated in the same way based on the preset hash coding model until it outputs a single hash coding value, for which the root node is generated, completing the construction of the Merkle tree to be updated.
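The cyclic bottom-up generation of step B22 can be sketched as follows. The `model` callable standing in for the preset hash coding model, and the strip-one-character toy mapping, are illustrative assumptions; the key property mirrored from the text is that codes of the same category map to one upper-layer code:

```python
def build_upper_layers(target_codes, model):
    """Starting from the first-layer intermediate nodes (one per target hash
    coding value), repeatedly feed the current layer's codes through the
    hash coding model and deduplicate the outputs (one node per distinct
    code), until a single root value remains."""
    layers = [list(target_codes)]
    while len(layers[-1]) > 1:
        next_codes = list(dict.fromkeys(model(c) for c in layers[-1]))
        layers.append(next_codes)
    return layers

# Toy model: strip the last character, so 'ab' and 'ac' merge into 'a';
# single characters merge into a 'root' sentinel.
layers = build_upper_layers(['ab', 'ac', 'bd'],
                            model=lambda c: c[:-1] or 'root')
print(layers)  # [['ab', 'ac', 'bd'], ['a', 'b'], ['root']]
```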
In this embodiment, a random leaf node corresponding to each original data block is first generated, where one original data block corresponds to one random leaf node, and each intermediate node and the root node are then generated based on the target hash coding values and the preset hash coding model. That is, this embodiment provides a method for constructing a Merkle tree based on a preset hash coding model: random leaf nodes corresponding to the original data blocks are generated first, intermediate nodes corresponding to the target hash coding values are generated next, and the remaining intermediate nodes and the root node are generated based on the preset hash coding model and the target hash coding values, completing the construction of the Merkle tree to be updated. Because this embodiment can directly generate the hash coding value corresponding to an input data block based on the preset hash coding model, no complex hash transformation needs to be performed on each input data block, which improves the construction efficiency of the Merkle tree. Furthermore, when updating the Merkle tree to be updated, only one training update of its preset hash coding model is needed to complete the update, avoiding the need to update all nodes of the tree branch where a random leaf node is located. This reduces the amount of computation during a Merkle tree update and thereby its computational complexity, laying a foundation for solving the technical problem of low computational efficiency when updating a Merkle tree.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the Merkle tree updating device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the Merkle tree updating device may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a Display screen (Display) and an input sub-module such as a Keyboard (Keyboard), and optionally may also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the configuration of the Merkle tree updating device shown in fig. 4 does not constitute a limitation of the Merkle tree updating device, and it may include more or fewer components than those shown, some components may be combined, or a different arrangement of components may be used.
As shown in fig. 4, the memory 1005, which is a type of computer storage medium, may include an operating system, a network communication module, and a Merkle tree update program. The operating system is a program that manages and controls the Merkle tree updating device's hardware and software resources, supporting the running of the Merkle tree update program as well as other software and/or programs. The network communication module is used to enable communication between the various components within the memory 1005, as well as with other hardware and software in the Merkle tree update system.
In the Merkle tree updating device shown in fig. 4, the processor 1001 is configured to execute the Merkle tree update program stored in the memory 1005, and implement the steps of the Merkle tree updating method described in any one of the above.
The specific implementation of the Merkle tree updating device in the present application is substantially the same as the embodiments of the Merkle tree updating method described above, and will not be described herein again.
The embodiment of the present application further provides a Merkle tree updating apparatus, where the Merkle tree updating apparatus is applied to Merkle tree updating equipment, and the Merkle tree updating apparatus includes:
the determining module is used for acquiring the update data and the Merkle tree to be updated, and determining the original data set and the preset hash coding model corresponding to the Merkle tree to be updated;
the training module is used for training and updating the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
and the updating module is used for updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain the target Merkle tree.
Optionally, the update module includes:
the hash coding submodule is used for generating a target random leaf node corresponding to the newly added data block, inputting the newly added data block into the target hash coding model and obtaining an output hash coding value corresponding to the newly added data block;
the matching submodule is used for matching a target parent node corresponding to the target random leaf node based on the output hash coding value;
and the updating submodule is used for updating the random leaf node layer based on the target random leaf node and the target parent node to obtain the target Merkle tree.
Optionally, the training module comprises:
the extraction submodule is used for determining a target data set based on the updated data and the original data set and determining a data category set corresponding to the target data set;
and the iterative training submodule is used for performing iterative training on the preset Hash coding model based on the training data and the target Hash coding value so as to optimize a polarization loss function corresponding to the preset Hash coding model until the preset Hash coding model reaches a preset iteration ending condition, and obtaining the target Hash coding model.
Optionally, the iterative training submodule includes:
a hash coding unit, configured to input the training data into the preset hash coding model, so as to perform hash coding on the training data based on the polarization loss function, and obtain an initial hash coding value;
the comparison unit is used for calculating a training Hamming distance between the initial Hash code value and the target Hash code value and comparing the training Hamming distance with a preset Hamming distance threshold value;
a first determination unit, configured to determine that the preset hash coding model does not reach the preset iteration end condition if the training hamming distance is greater than the preset hamming distance threshold, and optimize the polarization loss function based on the initial hash coding value;
a retraining unit, configured to retrain the preset hash coding model based on the optimized polarization loss function until the training hamming distance is less than or equal to the preset hamming distance threshold;
and the second judging unit is used for judging that the preset Hash coding model reaches the preset iteration ending condition if the training Hamming distance is smaller than or equal to the preset Hamming distance threshold value, and taking the preset Hash coding model as the target Hash coding model.
Optionally, the hash encoding unit includes:
the hash subunit is used for inputting the training data into the preset hash coding model and hashing the training data to obtain a training hash result;
the polarizing subunit is used for polarizing the training hash result based on the polarization loss function to obtain a polarization result;
a determining subunit, configured to determine the initial hash code value based on the polarization result.
Optionally, the Merkle tree updating apparatus further includes:
a first generation module, configured to generate a random leaf node corresponding to each original data block, where one original data block corresponds to one random leaf node;
and the second generation module is used for generating each intermediate node and the root node based on each target hash code value and the preset hash code model.
Optionally, the second generating module includes:
the generation submodule is used for generating first-layer intermediate nodes corresponding to the target Hash code values, wherein one target Hash code value corresponds to one first-layer intermediate node;
and the cyclic generation submodule is used for cyclically generating upper-layer intermediate nodes corresponding to the first-layer intermediate nodes based on the preset Hash coding model until the root nodes are obtained.
The embodiments of the Merkle tree updating apparatus of the present application are substantially the same as the embodiments of the Merkle tree updating method described above, and are not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1.一种默克尔树更新方法,其特征在于,所述默克尔树更新方法包括:1. a Merkle tree update method, is characterized in that, described Merkle tree update method comprises:获取更新数据和待更新默克尔树,并确定所述待更新默克树对应的原数据集和预设哈希编码模型;Obtain the update data and the Merkle tree to be updated, and determine the original data set and the preset hash coding model corresponding to the Merkle tree to be updated;基于所述更新数据和所述原数据集,对所述预设哈希编码模型进行训练更新,获得目标哈希编码模型;Based on the updated data and the original data set, training and updating the preset hash coding model to obtain a target hash coding model;基于所述目标哈希编码模型和所述更新数据,更新所述待更新默克尔树的随机叶子节点层,获得目标默克尔树。Based on the target hash coding model and the update data, the random leaf node layer of the Merkle tree to be updated is updated to obtain the target Merkle tree.2.如权利要求1所述默克尔树更新方法,其特征在于,所述更新数据包括新增数据块,2. Merkle tree update method as claimed in claim 1, is characterized in that, described update data comprises newly added data block,所述基于所述目标哈希编码模型和所述更新数据,更新所述待更新默克尔树的随机叶子节点层,获得目标默克尔树的步骤包括:The step of updating the random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data, the steps of obtaining the target Merkle tree include:生成所述新增数据块对应的目标随机叶子节点,并将所述新增数据块输入所述目标哈希编码模型,获得所述新增数据块对应的输出哈希编码值;generating a target random leaf node corresponding to the newly added data block, and inputting the newly added data block into the target hash coding model to obtain an output hash code value corresponding to the newly added data block;基于所述输出哈希编码值,匹配所述目标随机叶子节点对应的目标父节点;matching the target parent node corresponding to the target random leaf node based on the output hash code value;基于所述目标随机叶子节点和所述目标父节点,更新所述随机叶子节点层,获得所述目标默克尔树。Based on the target random leaf node and the target parent node, the random leaf node layer is updated to obtain the target Merkle tree.3.如权利要求1所述默克尔树更新方法,其特征在于,所述基于所述更新数据和所述原数据集,对所述预设哈希编码模型进行训练更新,获得目标哈希编码模型的步骤包括:3. 
Merkle tree update method as claimed in claim 1, is characterized in that, described based on described update data and described original data set, described preset hash coding model is trained and updated, obtains target hash The steps to encode the model include:基于所述更新数据和所述原数据集,确定目标数据集,并确定所述目标数据集对应的数据类别集合;Based on the updated data and the original data set, a target data set is determined, and a data category set corresponding to the target data set is determined;获取所述数据类别集合对应的目标哈希编码值集合,并确定所述目标数据集对应的训练数据和所述目标哈希编码值集合对应的目标哈希编码值;Obtain the target hash code value set corresponding to the data category set, and determine the training data corresponding to the target data set and the target hash code value corresponding to the target hash code value set;基于所述训练数据和所述目标哈希编码值,对所述预设哈希编码模型进行迭代训练,以优化所述预设哈希编码模型对应的极化损失函数,直至所述预设哈希编码模型达到预设迭代结束条件,获得所述目标哈希编码模型。Based on the training data and the target hash coding value, the preset hash coding model is iteratively trained to optimize the polarization loss function corresponding to the preset hash coding model, until the preset hash coding model is The target hash coding model is obtained when the hash coding model reaches a preset iteration end condition.4.如权利要求3所述默克尔树更新方法,其特征在于,所述基于所述训练数据和所述目标哈希编码值,对所述预设哈希编码模型进行迭代训练,以优化所述预设哈希编码模型对应的极化损失函数,直至所述预设哈希编码模型达到预设迭代结束条件,获得所述目标哈希编码模型的步骤包括:4. 
The Merkle tree updating method according to claim 3, wherein the preset hash coding model is iteratively trained based on the training data and the target hash coding value to optimize The polarization loss function corresponding to the preset hash coding model, until the preset hash coding model reaches a preset iteration end condition, the step of obtaining the target hash coding model includes:将所述训练数据输入所述预设哈希编码模型,以基于所述极化损失函数,对所述训练数据进行哈希编码,获得初始哈希编码值;inputting the training data into the preset hash coding model to perform hash coding on the training data based on the polarization loss function to obtain an initial hash coding value;计算所述初始哈希编码值和所述目标哈希编码值之间的训练汉明距离,并将所述训练汉明距离与预设汉明距离阀值进行比对;Calculate the training Hamming distance between the initial hash code value and the target hash code value, and compare the training Hamming distance with a preset Hamming distance threshold;若所述训练汉明距离大于所述预设汉明距离阀值,则判定所述预设哈希编码模型未达到所述预设迭代结束条件,并基于所述初始哈希编码值优化所述极化损失函数;If the training Hamming distance is greater than the preset Hamming distance threshold, it is determined that the preset hash coding model does not meet the preset iteration end condition, and the initial hash coding value is optimized for the Polarization loss function;基于优化后的所述极化损失函数,重新训练所述预设哈希编码模型,直至所述训练汉明距离小于或者等于所述预设汉明距离阀值;Based on the optimized polarization loss function, retrain the preset hash coding model until the training Hamming distance is less than or equal to the preset Hamming distance threshold;若所述训练汉明距离小于或者等于所述预设汉明距离阀值,则判定所述预设哈希编码模型达到所述预设迭代结束条件,并将所述预设哈希编码模型作为所述目标哈希编码模型。If the training Hamming distance is less than or equal to the preset Hamming distance threshold, it is determined that the preset hash coding model reaches the preset iteration end condition, and the preset hash coding model is used as the target hash coding model.5.如权利要求4所述默克尔树更新方法,其特征在于,所述将所述训练数据输入所述预设哈希编码模型,以基于所述极化损失函数,对所述训练数据进行哈希编码,获得初始哈希编码值的步骤包括:5 . 
5. The Merkle tree updating method according to claim 4, wherein the step of inputting the training data into the preset hash coding model to hash-code the training data based on the polarization loss function, thereby obtaining the initial hash coding value comprises:
inputting the training data into the preset hash coding model and hashing the training data to obtain a training hash result;
polarizing the training hash result based on the polarization loss function to obtain a polarization result;
determining the initial hash coding value based on the polarization result.
6. The Merkle tree updating method according to claim 1, wherein the original data set comprises one or more original data blocks and a target hash coding value corresponding to each of the original data blocks, and the Merkle tree to be updated comprises one or more random leaf nodes, one or more intermediate nodes, and a root node;
before the step of obtaining the update data and the Merkle tree to be updated and determining the original data set and the preset hash coding model corresponding to the Merkle tree to be updated, the Merkle tree updating method further comprises:
generating a random leaf node corresponding to each of the original data blocks, wherein each original data block corresponds to one random leaf node;
generating each of the intermediate nodes and the root node based on each of the target hash coding values and the preset hash coding model.
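Claim 5 above binarizes a real-valued training hash result by polarizing it. The claim does not define the polarization operator; one common and plausible choice, shown here purely as an assumption, is a per-dimension sign threshold:

```python
def polarize(hash_result):
    """Map each real-valued component of the training hash result to a
    binary hash bit by its sign (illustrative polarization operator;
    the patent does not pin down the exact mapping)."""
    return [1 if v >= 0 else 0 for v in hash_result]
```

The resulting bit list would then serve directly as the initial hash coding value compared against the target code.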
7. The Merkle tree updating method according to claim 6, wherein the intermediate nodes comprise one or more first-layer intermediate nodes and one or more upper-layer intermediate nodes;
the step of generating each of the intermediate nodes and the root node based on each of the target hash coding values and the preset hash coding model comprises:
generating a first-layer intermediate node corresponding to each of the target hash coding values, wherein each target hash coding value corresponds to one first-layer intermediate node;
cyclically generating, based on the preset hash coding model, the upper-layer intermediate nodes corresponding to the first-layer intermediate nodes, until the root node is obtained.
8. A Merkle tree updating device, wherein the Merkle tree updating device comprises:
a determination module, configured to obtain update data and a Merkle tree to be updated, and determine an original data set and a preset hash coding model corresponding to the Merkle tree to be updated;
a training module, configured to train and update the preset hash coding model based on the update data and the original data set to obtain a target hash coding model;
an update module, configured to update a random leaf node layer of the Merkle tree to be updated based on the target hash coding model and the update data to obtain a target Merkle tree.
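Claims 6 and 7 above build the tree bottom-up: first-layer intermediate nodes come from the target hash coding values, then upper layers are generated cyclically until only the root remains. A compact sketch of that loop, with the preset hash coding model abstracted as an arbitrary two-child combining function (SHA-256 over the concatenation here, purely for illustration):

```python
import hashlib

def combine(left, right):
    # Stand-in for the preset hash coding model: any deterministic
    # function of the two child node values would do here.
    return hashlib.sha256(left + right).hexdigest().encode()

def build_root(first_layer):
    """Cyclically generate upper intermediate layers from the
    first-layer nodes until a single root node remains."""
    layer = list(first_layer)
    while len(layer) > 1:
        if len(layer) % 2:            # odd count: carry the last node up
            layer.append(layer[-1])
        layer = [combine(layer[i], layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]
```

With four first-layer nodes this produces the usual two-level reduction: two upper-layer intermediate nodes, then the root.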
9. Merkle tree updating equipment, wherein the Merkle tree updating equipment comprises a memory, a processor, and a program stored on the memory for implementing the Merkle tree updating method;
the memory is configured to store the program implementing the Merkle tree updating method;
the processor is configured to execute the program implementing the Merkle tree updating method, so as to implement the steps of the Merkle tree updating method according to any one of claims 1 to 7.
10. A readable storage medium, wherein a program implementing a Merkle tree updating method is stored on the readable storage medium, and the program is executed by a processor to implement the steps of the Merkle tree updating method according to any one of claims 1 to 7.
CN202010453608.5A | 2020-05-22 | 2020-05-22 | Mercker tree updating method, device, equipment and readable storage medium | Active | CN111625258B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202010453608.5A | CN111625258B (en) | 2020-05-22 | 2020-05-22 | Mercker tree updating method, device, equipment and readable storage medium
PCT/CN2021/093397 | WO2021233182A1 (en) | 2020-05-22 | 2021-05-12 | Merkle tree updating method, apparatus and device, and readable storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010453608.5A | CN111625258B (en) | 2020-05-22 | 2020-05-22 | Mercker tree updating method, device, equipment and readable storage medium

Publications (2)

Publication Number | Publication Date
CN111625258A (en) | 2020-09-04
CN111625258B (en) | 2021-08-27

Family

ID=72260731

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010453608.5A | Active | CN111625258B (en) | 2020-05-22 | 2020-05-22 | Mercker tree updating method, device, equipment and readable storage medium

Country Status (2)

Country | Link
CN (1) | CN111625258B (en)
WO (1) | WO2021233182A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112380209A (en)* | 2020-10-29 | 2021-02-19 | East China Normal University | Block chain multi-channel state data-oriented structure tree aggregation method
CN112559518A (en)* | 2020-12-10 | 2021-03-26 | Hangzhou Qulian Technology Co., Ltd. | Merck tree updating method, terminal device and storage medium
CN113377979A (en)* | 2021-06-09 | 2021-09-10 | China State Railway Group Co., Ltd. | Method for comparing, generating and optimizing train running scheme based on Mercker tree
WO2021233182A1 (en)* | 2020-05-22 | 2021-11-25 | Shenzhen Qianhai WeBank Co., Ltd. | Merkle tree updating method, apparatus and device, and readable storage medium
WO2022063223A1 (en)* | 2020-09-28 | 2022-03-31 | Huawei Technologies Co., Ltd. | Data verification method, apparatus, and system
CN114610335A (en)* | 2022-03-03 | 2022-06-10 | Ropeok Technology Group Co., Ltd. | Method, system and storage medium for processing deployment package based on abstract difference tree

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN117218759B (en)* | 2023-10-27 | 2025-08-19 | Hangzhou Dianzi University | Verifiable random number shaking method based on Merker tree and sequencing

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103218574A (en)* | 2013-04-09 | 2013-07-24 | University of Electronic Science and Technology of China | Hash tree-based data dynamic operation verifiability method
CN105868369A (en)* | 2016-03-30 | 2016-08-17 | University of Electronic Science and Technology of China | Data model verification system and method based on Merkle tree structure
CN106845280A (en)* | 2017-03-14 | 2017-06-13 | Guangdong University of Technology | A Merkle hash tree cloud data integrity auditing method and system
CN106897368A (en)* | 2017-01-16 | 2017-06-27 | Xidian University | Merkle hash summation tree and verifiable database update operation method thereof
WO2018109260A1 (en)* | 2016-12-16 | 2018-06-21 | Nokia Technologies Oy | Secure document management
CN109829549A (en)* | 2019-01-30 | 2019-05-31 | Ningbo University | Evolution-tree-based hash learning method and unsupervised online hash learning method thereof
CN110033264A (en)* | 2019-01-31 | 2019-07-19 | Alibaba Group Holding Ltd. | Method and device for building a Merkle tree corresponding to a block and for simple payment verification
CN110704664A (en)* | 2019-08-28 | 2020-01-17 | Ningbo University | Hash retrieval method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105512273A (en)* | 2015-12-03 | 2016-04-20 | Sun Yat-sen University | Image retrieval method based on variable-length depth hash learning
CN110134803B (en)* | 2019-05-17 | 2020-12-11 | Harbin Engineering University | A fast retrieval method for image data based on hash learning
CN110688501B (en)* | 2019-08-28 | 2022-04-05 | Ningbo University | A hash retrieval method based on deep learning fully convolutional networks
CN111625258B (en)* | 2020-05-22 | 2021-08-27 | Shenzhen Qianhai WeBank Co., Ltd. | Mercker tree updating method, device, equipment and readable storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103218574A (en)* | 2013-04-09 | 2013-07-24 | University of Electronic Science and Technology of China | Hash tree-based data dynamic operation verifiability method
CN105868369A (en)* | 2016-03-30 | 2016-08-17 | University of Electronic Science and Technology of China | Data model verification system and method based on Merkle tree structure
WO2018109260A1 (en)* | 2016-12-16 | 2018-06-21 | Nokia Technologies Oy | Secure document management
CN106897368A (en)* | 2017-01-16 | 2017-06-27 | Xidian University | Merkle hash summation tree and verifiable database update operation method thereof
CN106845280A (en)* | 2017-03-14 | 2017-06-13 | Guangdong University of Technology | A Merkle hash tree cloud data integrity auditing method and system
CN109829549A (en)* | 2019-01-30 | 2019-05-31 | Ningbo University | Evolution-tree-based hash learning method and unsupervised online hash learning method thereof
CN110033264A (en)* | 2019-01-31 | 2019-07-19 | Alibaba Group Holding Ltd. | Method and device for building a Merkle tree corresponding to a block and for simple payment verification
CN110704664A (en)* | 2019-08-28 | 2020-01-17 | Ningbo University | Hash retrieval method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
风之舞555: "Merkle Tree study", Cnblogs blog: HTTPS://WWW.CNBLOGS.COM/FENGZHIWU/P/5524324.HTML *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2021233182A1 (en)* | 2020-05-22 | 2021-11-25 | Shenzhen Qianhai WeBank Co., Ltd. | Merkle tree updating method, apparatus and device, and readable storage medium
WO2022063223A1 (en)* | 2020-09-28 | 2022-03-31 | Huawei Technologies Co., Ltd. | Data verification method, apparatus, and system
CN112380209A (en)* | 2020-10-29 | 2021-02-19 | East China Normal University | Block chain multi-channel state data-oriented structure tree aggregation method
CN112559518A (en)* | 2020-12-10 | 2021-03-26 | Hangzhou Qulian Technology Co., Ltd. | Merck tree updating method, terminal device and storage medium
CN113377979A (en)* | 2021-06-09 | 2021-09-10 | China State Railway Group Co., Ltd. | Method for comparing, generating and optimizing train running scheme based on Mercker tree
CN113377979B (en)* | 2021-06-09 | 2023-09-19 | China State Railway Group Co., Ltd. | An optimization method for comparison and generation of train operation plans based on Merkel tree
CN114610335A (en)* | 2022-03-03 | 2022-06-10 | Ropeok Technology Group Co., Ltd. | Method, system and storage medium for processing deployment package based on abstract difference tree
CN114610335B (en)* | 2022-03-03 | 2025-07-18 | Ropeok Technology Group Co., Ltd. | Deployment package processing method, system and storage medium based on abstract difference tree

Also Published As

Publication number | Publication date
CN111625258B (en) | 2021-08-27
WO2021233182A1 (en) | 2021-11-25

Similar Documents

Publication | Title
CN111625258B (en) | Mercker tree updating method, device, equipment and readable storage medium
CN111695697B (en) | Multi-party joint decision tree construction method, equipment and readable storage medium
CN109299728B (en) | Sample joint prediction method, system and medium based on construction of gradient tree model
CN111626408B (en) | Hash coding method, apparatus, device and readable storage medium
US8538938B2 (en) | Interactive proof to validate outsourced data stream processing
WO2021083276A1 (en) | Method, device, and apparatus for combining horizontal federation and vertical federation, and medium
CN111105029B (en) | Neural network generation method, generation device and electronic device
JP7405933B2 (en) | Quantum channel classical capacity estimation method and device, electronic equipment and media
CN111967609A (en) | Model parameter verification method, device and readable storage medium
CN111898750A (en) | A neural network model compression method and device based on evolutionary algorithm
WO2021232747A1 (en) | Data right determination method and device, and readable storage medium
CN111612080B (en) | Model interpretation method, device and readable storage medium
WO2021233183A1 (en) | Neural network verification method, apparatus and device, and readable storage medium
CN114428907B (en) | Information search method, device, electronic device and storage medium
CN103914527B (en) | Graphic image recognition and matching method based on genetic programming algorithms of novel coding modes
CN118278519B (en) | Knowledge graph completion method and related equipment
CN112884120A (en) | Graph neural network representation architecture
CN114239237A (en) | A system and method for generating distribution network simulation scene supporting digital twin
CN114298319A (en) | Method and device for determining joint learning contribution value, electronic equipment and storage medium
CN115310558A (en) | Big data analysis method and AI analysis system for cloud service abnormity optimization
CN114462582A (en) | Data processing method, device and equipment based on convolutional neural network model
CN119647599A (en) | Data processing method, device, equipment and storage medium based on large language model
CN118658032A (en) | Video generation model training method, device, electronic device and storage medium
CN111445030A (en) | Federal modeling method, device and readable storage medium based on stepwise regression method
CN118839750A (en) | Clustering federation learning method based on data characterization optimization and related equipment

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
