CN113468275A - Data importing method and device of graph database, storage medium and electronic equipment

Info

Publication number
CN113468275A
Authority
CN
China
Prior art keywords
data
file
graph
node
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110858726.9A
Other languages
Chinese (zh)
Other versions
CN113468275B (en)
Inventor
杨福星
周明伟
朱林浩
俞毅
沈秋军
何林强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202110858726.9A
Publication of CN113468275A
Application granted
Publication of CN113468475B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a data importing method and device for a graph database, a storage medium, and electronic equipment. The method comprises the following steps: determining a data file to be uploaded, wherein the data file comprises a data mapping file of the point data and edge data corresponding to the graph data; processing the data mapping file through a master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes; distributing, by the master node, the data blocks to be processed to their corresponding slave nodes through the data block distribution list, and dispatching the processing tasks corresponding to the data blocks; and, when the slave nodes have completed the distributed concurrent processing of the data blocks, determining, by the master node, the boundary data to be imported into the graph database according to the results of the distributed concurrent processing. This solves the problems in the prior art that graph database performance becomes a bottleneck when importing large volumes of graph data and that edge data is imported inefficiently.

Description

Data importing method and device of graph database, storage medium and electronic equipment
Technical Field
The present invention relates to the field of communications, and in particular, to a method and an apparatus for importing data into a graph database, a storage medium, and an electronic device.
Background
With the arrival of the era of big data and the Internet of Everything, data volumes are exploding, and complex relationships exist among the data. When traditional relational databases are used to process such highly connected data, their performance falls short and they can no longer meet the needs of market users well. Graph databases, built on graph theory, naturally support fast processing of complex relationships. How to import large amounts of complex, point-edge-associated data into a graph database therefore becomes critical.
In one related technique, import or export information is generated by a parameter configuration module according to a user's operation instruction, a data acquisition module acquires the target data according to that information, a data conversion module converts the data format (between the point-edge format and the attribute data format of graph data), and an import/export module imports the converted data into a graph database or exports it to a destination according to the user's instruction. However, this import/export flow focuses on format conversion and cannot handle importing a point-edge file, let alone large-scale data, into a graph database. In addition, a migration method has been proposed for interactive migration from relational data to graph data: an entity-relationship (ER) model is constructed from the relational database metadata, the graph model is then mapped from it, and finally the data migration is completed. However, the core of that method is how to migrate data simply and conveniently; it does not address importing large data volumes into a graph database, so its import performance is deficient.
In view of the above, the prior art suffers from a performance bottleneck in the graph database when importing large volumes of graph data and from low efficiency when importing edge data, and no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a data importing method and device for a graph database, a storage medium, and electronic equipment, so as to at least solve the problems in the prior art that graph database performance becomes a bottleneck when importing large volumes of graph data and that edge data is imported inefficiently.
According to one aspect of the embodiments of the present invention, there is provided a data importing method for a graph database, including: determining a data file to be uploaded, wherein the data file comprises a data mapping file of the point data and edge data corresponding to the graph data; processing the data mapping file through a master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes; distributing, by the master node, the data blocks to be processed to their corresponding slave nodes through the data block distribution list, and dispatching the processing tasks corresponding to the data blocks; and, when the slave nodes have completed the distributed concurrent processing of the data blocks, determining, by the master node, the boundary data to be imported into the graph database according to the results of the distributed concurrent processing.
In an exemplary embodiment, determining, by the master node, the boundary data to be imported into the graph database according to the results of the distributed concurrent processing includes: the master node acquires the boundary offsets in the processing result of the data blocks on each slave node; and a summarizing operation is performed on the boundary offsets through a preset algorithm to obtain the boundary data to be imported that corresponds to those boundary offsets.
In an exemplary embodiment, the distributed concurrent processing of the data blocks by the slave nodes includes: when the data mapping file is determined to be the data mapping file corresponding to point data, the graph database returns a vertex ID to the slave node to instruct the slave node to write the vertex ID into its local cache, wherein the slave node is used for importing the point data and/or edge data of the graph data corresponding to the data file to be uploaded; and, when the data mapping file is determined to be the data mapping file corresponding to edge data, determining whether the IDs of the two end vertices corresponding to the edge data exist in the local memory of the slave node, so as to determine how the edge data is to be written.
In an exemplary embodiment, determining whether the end-vertex IDs corresponding to the edge data exist in the local memory of the slave node, so as to determine how the edge data is to be written, includes: when the IDs of the two end vertices corresponding to the edge data do not exist in the local memory of the slave node, performing a distributed query for the vertex IDs on the other slave nodes corresponding to the master node; and, when the IDs of the two end vertices corresponding to the edge data do exist in the local memory of the slave node, directly writing the edge data according to the vertex IDs queried from the local memory cache of the slave node.
In an exemplary embodiment, performing the distributed query for the vertex IDs on the other slave nodes corresponding to the master node includes: the master node acquires, through a remote invocation framework, the vertex-ID information cached on the other slave nodes for querying; and, when the query finds the vertex IDs corresponding to the edge data, the writing of the edge data is instructed with the vertex IDs found on the other slave nodes.
In an exemplary embodiment, after the master node determines the boundary data to be imported into the graph database according to the results of the distributed concurrent processing once the slave nodes have completed the distributed concurrent processing of the data blocks, the method further includes: the master node collects, through a remote invocation framework, the offsets and boundary information of the data blocks determined on each slave node and sorts the boundary information; complete boundary data is obtained according to the offsets and the boundary information; and, once the master node has obtained the complete boundary data, a slave node is notified to import the complete boundary data.
In an exemplary embodiment, after the data file to be uploaded is determined, the method further comprises: determining whether a graph name corresponding to the data file to be uploaded exists in the graph database, wherein the data file further comprises the graph metadata file corresponding to the graph data, and the graph name indicates the name of the data file corresponding to the graph data; and, when the graph name does not exist in the graph database, creating a new graph data file in the graph database according to the graph name and the graph metadata file, and loading the target graph corresponding to the new graph data file.
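For illustration only, a minimal sketch of this pre-check is given below in Java. The GraphClient interface and its graphExists, createGraph and loadSchema methods are hypothetical stand-ins for whatever client the target graph database actually exposes; they are not named by the patent.

import java.nio.file.Files;
import java.nio.file.Path;

public final class GraphPreparation {

    /** Hypothetical client facade for the target graph database. */
    public interface GraphClient {
        boolean graphExists(String graphName);
        void createGraph(String graphName);
        void loadSchema(String graphName, String schemaJson);
    }

    /**
     * If the graph named in the upload request does not exist yet,
     * create it and load its schema from the graph metadata file.
     */
    public static void ensureGraph(GraphClient client, String graphName, Path schemaFile) throws Exception {
        if (!client.graphExists(graphName)) {
            client.createGraph(graphName);
            client.loadSchema(graphName, Files.readString(schemaFile));
        }
    }
}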
According to another aspect of the embodiments of the present invention, there is also provided a data importing apparatus for a graph database, including: a determining module, configured to determine a data file to be uploaded, wherein the data file comprises a data mapping file of the point data and edge data corresponding to the graph data; a processing module, configured to process the data mapping file through the master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes; a distribution module, configured to distribute the data blocks to be processed to their corresponding slave nodes through the data block distribution list and to dispatch the processing tasks corresponding to the data blocks; and an importing module, configured to determine, by the master node, the boundary data to be imported into the graph database according to the results of the distributed concurrent processing when the slave nodes have completed the distributed concurrent processing of the data blocks.
In an exemplary embodiment, the importing module is further configured to acquire, by the master node, the boundary offsets in the processing result of the data blocks on each slave node; and to perform a summarizing operation on the boundary offsets through a preset algorithm to obtain the boundary data to be imported that corresponds to those boundary offsets.
In an exemplary embodiment, the importing module is further configured to, when the data mapping file is determined to be the data mapping file corresponding to point data, instruct the slave node to write the vertex ID returned by the graph database into its local cache, wherein the slave node is configured to import the point data and/or edge data of the graph data corresponding to the data file to be uploaded; and, when the data mapping file is determined to be the data mapping file corresponding to edge data, to determine whether the IDs of the two end vertices corresponding to the edge data exist in the local memory of the slave node, so as to determine how the edge data is to be written.
In an exemplary embodiment, the importing module is further configured to perform a distributed query for the vertex IDs on the other slave nodes corresponding to the master node when the IDs of the two end vertices corresponding to the edge data do not exist in the local memory of the slave node; and, when the IDs of the two end vertices corresponding to the edge data do exist in the local memory of the slave node, to write the edge data directly according to the vertex IDs queried from the local memory cache of the slave node.
In an exemplary embodiment, the importing module is further configured to acquire, by the master node through a remote invocation framework, the vertex-ID information cached on the other slave nodes for querying; and, when the query finds the vertex IDs corresponding to the edge data, to instruct the writing of the edge data with the vertex IDs found on the other slave nodes.
In an exemplary embodiment, the importing module is further configured to collect, by the master node through a remote invocation framework, the offsets and boundary information of the data blocks determined on each slave node and to sort the boundary information; to obtain complete boundary data according to the offsets and the boundary information; and, once the master node has obtained the complete boundary data, to notify a slave node to import the complete boundary data.
In an exemplary embodiment, the apparatus further includes a creating module, configured to determine whether the graph name corresponding to the data file to be uploaded exists in the graph database, wherein the data file further comprises the graph metadata file corresponding to the graph data and the graph name indicates the name of the data file corresponding to the graph data; and, when the graph name does not exist in the graph database, to create a new graph data file in the graph database according to the graph name and the graph metadata file, and to load the target graph corresponding to the new graph data file.
According to a further aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method in any of the method embodiments when executed.
According to yet another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory and a processor, where the memory stores therein a computer program, and the processor is configured to execute the method in any one of the method embodiments described above by using the computer program.
In the embodiments of the present invention, a data file to be uploaded is determined, wherein the data file comprises a data mapping file of the point data and edge data corresponding to the graph data; the data mapping file is processed through the master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes; the master node distributes the data blocks to be processed to their corresponding slave nodes through the data block distribution list and dispatches the processing tasks corresponding to the data blocks; and, when the slave nodes have completed the distributed concurrent processing of the data blocks, the master node determines the boundary data to be imported into the graph database according to the results of the distributed concurrent processing. That is to say, the distributed import framework with a master-slave structure, in which each slave node processes its locally stored data blocks, saves the network I/O overhead of reading data in a distributed setting; the vertex-caching scheme used when importing point data and the distributed vertex-query scheme used when importing edge data avoid the cost of querying the database when importing edge data. This further improves the efficiency of importing large-scale data files into the graph database, solves the prior-art problems that graph database performance becomes a bottleneck when importing large volumes of graph data and that edge data is imported inefficiently, and effectively improves the graph database's data import capability for large data volumes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a block diagram of a hardware configuration of a computer terminal of a data importing method of a graph database according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an application environment for data import for a graph database according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method of importing data from a graph database according to an embodiment of the present invention;
FIG. 4 is an interaction diagram illustrating invoking a tool for distributed concurrent import according to an alternative embodiment of the present invention;
FIG. 5 is a flowchart of a vertex ID cache lookup in accordance with an alternative embodiment of the present invention;
FIG. 6 is a schematic diagram of an apparatus for importing data from a graph database according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method provided by the embodiment of the application can be executed in a computer terminal, a mobile terminal or a similar operation device. Taking the example of being operated on a computer terminal, fig. 1 is a hardware block diagram of a computer terminal of a data importing method of a graph database according to an embodiment of the present invention. As shown in fig. 1, the computer terminal 10 may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the computer terminal. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration with equivalent functionality to that shown in FIG. 1 or with more functionality than that shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the data importing method of the graph database in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
According to an aspect of the embodiments of the present invention, a data importing method of a graph database is provided, and as an alternative implementation, the data importing method of the graph database may be applied to, but is not limited to, the environment shown in fig. 2.
Optionally, in this embodiment, the terminal device may be a terminal device configured with a target client, and may include, but is not limited to, at least one of the following: a video surveillance dome camera, a video surveillance bullet camera, a patrol robot, a mobile phone (such as an Android phone or an iOS phone), a notebook computer, a tablet computer, a palmtop computer, a MID (Mobile Internet Device), a PAD, a desktop computer, a smart television, and the like. The target client may be a video client, an instant messaging client, a browser client, an educational client, etc. The network may include, but is not limited to, a wired network or a wireless network, wherein the wired network includes a local area network, a metropolitan area network, and a wide area network, and the wireless network includes Bluetooth, WIFI, and other networks that enable wireless communication. The graph database may be deployed on a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and this embodiment is not limited thereto.
Optionally, as an optional implementation manner, as shown in fig. 3, the data importing method of the graph database includes:
step S202, determining a data file to be uploaded, wherein the data file comprises a data mapping file of the point data and edge data corresponding to the graph data;
step S204, processing the data mapping file through the master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes;
step S206, distributing, by the master node, the data blocks to be processed to their corresponding slave nodes through the data block distribution list, and dispatching the processing tasks corresponding to the data blocks;
step S208, when the slave nodes have completed the distributed concurrent processing of the data blocks, determining, by the master node, the boundary data to be imported into the graph database according to the results of the distributed concurrent processing.
Through the above steps, a data file to be uploaded is determined, wherein the data file comprises a data mapping file of the point data and edge data corresponding to the graph data; the data mapping file is processed through the master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes; the master node distributes the data blocks to be processed to their corresponding slave nodes through the data block distribution list and dispatches the processing tasks corresponding to the data blocks; and, when the slave nodes have completed the distributed concurrent processing of the data blocks, the master node determines the boundary data to be imported into the graph database according to the results of the distributed concurrent processing. That is to say, the distributed import framework with a master-slave structure, in which each slave node processes its locally stored data blocks, saves the network I/O overhead of reading data in a distributed setting; the vertex-caching scheme used when importing point data and the distributed vertex-query scheme used when importing edge data avoid the cost of querying the database when importing edge data. This further improves the efficiency of importing large-scale data files into the graph database, solves the prior-art problems that graph database performance becomes a bottleneck when importing large volumes of graph data and that edge data is imported inefficiently, and effectively improves the graph database's data import capability for large data volumes.
It should be noted that the master node is used to indicate a node in the graph database that can perform a write operation, and the slave node is a node in the graph database that is used to store data.
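The division of labour described in steps S202 to S208 can be pictured with the following Java sketch of the leader side. Every type and method name here (FollowerStub, processLocalBlocks, and so on) is an illustrative assumption rather than an interface defined by the patent.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

/** Hypothetical leader-side orchestration of steps S202-S208; all types and names are illustrative. */
public final class LeaderOrchestration {

    public record Block(String path, long offset, long length) {}
    public record BoundaryFragment(String path, long blockStart, String head, String tail) {}

    /** Remote proxy to one follower; a real system would obtain this through its RPC framework. */
    public interface FollowerStub {
        List<BoundaryFragment> processLocalBlocks(List<Block> blocks);
    }

    /** S204-S208: hand each follower its local blocks, then merge and import the boundary rows. */
    public static void runImport(Map<String, FollowerStub> followers,
                                 Map<String, List<Block>> blockPlan,
                                 Consumer<List<BoundaryFragment>> boundaryImporter) {
        List<BoundaryFragment> allFragments = new ArrayList<>();
        for (Map.Entry<String, List<Block>> entry : blockPlan.entrySet()) {
            // Each follower reads only blocks stored on its own host, avoiding cross-node read I/O.
            FollowerStub follower = followers.get(entry.getKey());
            allFragments.addAll(follower.processLocalBlocks(entry.getValue()));
        }
        // S208: the leader merges the boundary fragments and imports the reassembled rows.
        boundaryImporter.accept(allFragments);
    }
}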
In an exemplary embodiment, determining, by the master node, the boundary data to be imported into the graph database according to the results of the distributed concurrent processing includes: the master node acquires the boundary offsets in the processing result of the data blocks on each slave node; and a summarizing operation is performed on the boundary offsets through a preset algorithm to obtain the boundary data to be imported that corresponds to those boundary offsets.
In short, after the boundary offset of each slave node is determined, in order to quickly determine the corresponding boundary data, the master node aggregates the boundary offsets reported by its slave nodes and determines the boundary data corresponding to those offsets.
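One plausible form of this summarizing operation is sketched below: the master sorts the boundary fragments reported for each block by file and offset, then stitches the tail of one block to the head of the next so that rows split across block boundaries are reassembled. The Fragment record and its fields are assumptions made for the sketch, not structures named by the patent.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Illustrative merge of per-block boundary fragments on the master; field names are assumptions. */
public final class BoundaryMerger {

    /** A partial first/last line reported by a follower for one block. */
    public record Fragment(String file, long blockStart, String head, String tail) {}

    /** Sort fragments by file offset and stitch tail(i) + head(i+1) into complete boundary rows. */
    public static List<String> merge(List<Fragment> fragments) {
        List<Fragment> sorted = new ArrayList<>(fragments);
        sorted.sort(Comparator.comparing(Fragment::file).thenComparingLong(Fragment::blockStart));

        List<String> boundaryRows = new ArrayList<>();
        for (int i = 0; i + 1 < sorted.size(); i++) {
            Fragment cur = sorted.get(i);
            Fragment next = sorted.get(i + 1);
            if (cur.file().equals(next.file())) {
                // A row split across two adjacent blocks is reassembled here.
                boundaryRows.add(cur.tail() + next.head());
            }
        }
        return boundaryRows;
    }
}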
In an exemplary embodiment, after the data file to be uploaded is determined, the method further comprises: determining whether a graph name corresponding to the data file to be uploaded exists in the graph database, wherein the data file further comprises the graph metadata file corresponding to the graph data, and the graph name indicates the name of the data file corresponding to the graph data; and, when the graph name does not exist in the graph database, creating a new graph data file in the graph database according to the graph name and the graph metadata file, and loading the target graph corresponding to the new graph data file.
As an alternative embodiment, the overall process of the invention is as follows:
determine whether a graph name corresponding to the data file to be uploaded to the graph database exists in the graph database, wherein the data file comprises a graph schema metadata file and a data mapping file of the point data and edge data corresponding to the graph; when the graph name does not exist in the graph database, create a new graph in the graph database according to the graph name and load the graph schema corresponding to the graph according to the graph schema metadata file; based on the block distribution characteristic of HDFS and the idea of processing local block data locally, the master node assigns the blocks of the data file so that the blocks stored on a given slave node are assigned to that slave node, and distributes the block-processing tasks to the slave nodes; each slave node processes the block data assigned to it, caches the vertex IDs into memory when importing point data, and obtains the IDs of the two end vertices directly from the memory of other slave nodes when importing edge data, so as to avoid querying the database; and the slave nodes report the boundary offsets of their blocks to the master node, which processes and imports the merged boundary data.
Further, the data mapping file is processed through the master node; the data block distribution list mapping the data blocks of the data mapping file to the slave nodes is determined according to the principle that local block data is processed by the local node; the data processing tasks are distributed; and the slave nodes perform distributed concurrent data processing.
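A minimal sketch of such locality-aware planning on the master is shown below, using the standard Hadoop FileSystem.getFileBlockLocations API; picking the first replica host of each block is a simplification of the even distribution described above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of locality-aware block assignment on the master, using the HDFS block-location API. */
public final class BlockPlanner {

    public record Block(String path, long offset, long length) {}

    /** Map each follower host to the blocks of the data file that are stored locally on it. */
    public static Map<String, List<Block>> planLocalBlocks(String hdfsPath) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        Path path = new Path(hdfsPath);
        FileStatus status = fs.getFileStatus(path);

        Map<String, List<Block>> plan = new HashMap<>();
        for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
            // Assign the block to the first host holding a replica (a tie-break chosen for the sketch;
            // a real planner would also balance the load across replica hosts).
            String host = loc.getHosts()[0];
            plan.computeIfAbsent(host, h -> new ArrayList<>())
                .add(new Block(hdfsPath, loc.getOffset(), loc.getLength()));
        }
        return plan;
    }
}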
The vertex IDs are used to identify the vertices at the left and right ends of the edge data, and each vertex ID uniquely identifies the vertex of the corresponding edge data.
In one exemplary embodiment, the distributed concurrent processing of data blocks by the slave nodes includes: when the data mapping file is determined to be the data mapping file corresponding to point data, the graph database returns a vertex ID to the slave node to instruct the slave node to write the vertex ID into its local cache, wherein the slave node is used for importing the point data and/or edge data of the graph data corresponding to the data file to be uploaded; and, when the data mapping file is determined to be the data mapping file corresponding to edge data, determining whether the IDs of the two end vertices corresponding to the edge data exist in the local memory of the slave node, so as to determine how the edge data is to be written.
In an exemplary embodiment, determining whether the end-vertex IDs corresponding to the edge data exist in the local memory of the slave node, so as to determine how the edge data is to be written, includes: when the IDs of the two end vertices corresponding to the edge data do not exist in the local memory of the slave node, performing a distributed query for the vertex IDs on the other slave nodes corresponding to the master node; and, when the IDs of the two end vertices corresponding to the edge data do exist in the local memory of the slave node, directly writing the edge data according to the vertex IDs queried from the local memory cache of the slave node.
In an exemplary embodiment, performing the distributed query for the vertex IDs on the other slave nodes corresponding to the master node includes: the master node acquires, through a remote invocation framework, the vertex-ID information cached on the other slave nodes for querying; and, when the query finds the vertex IDs corresponding to the edge data, the writing of the edge data is instructed with the vertex IDs found on the other slave nodes.
It can be understood that, when the data mapping file is determined to be the data mapping file corresponding to point data, the vertex ID returned by the graph database is written into the local cache at the same time as the slave node writes the point data into the graph database, where the slave node is used to import the point data and/or edge data corresponding to the graph. When the data mapping file is determined to be the data mapping file corresponding to edge data, whether a distributed query for the vertex IDs is sent to other slave nodes is determined according to whether the IDs of the two end vertices corresponding to the edge data exist in the local memory. If the vertex IDs can be found in the slave node's local cache, the edge data is written directly; otherwise, the vertex IDs are queried on the other slave nodes through the remote invocation framework and the edge data is then written.
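The following Java sketch illustrates this write path on a follower. GraphWriter, RemoteFollower and the key-to-ID mapping are hypothetical names introduced for the example; only the local-cache-then-remote-query order reflects the scheme described above.

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Sketch of the follower-side edge write path; GraphWriter and RemoteFollower are hypothetical. */
public final class EdgeImporter {

    public interface GraphWriter { void writeEdge(long srcId, long dstId, String props); }
    public interface RemoteFollower { Long lookupVertexId(String vertexKey); }

    /** Local vertex-ID cache filled while point data is imported (vertex key -> graph-assigned ID). */
    private final ConcurrentMap<String, Long> idMap = new ConcurrentHashMap<>();

    /** Called for every point row: remember the ID the graph database returned for this vertex. */
    public void cacheVertex(String vertexKey, long graphId) {
        idMap.put(vertexKey, graphId);
    }

    /** Called for every edge row: resolve both end-vertex IDs locally first, then remotely. */
    public boolean importEdge(String srcKey, String dstKey, String props,
                              List<RemoteFollower> others, GraphWriter writer) {
        Long src = resolve(srcKey, others);
        Long dst = resolve(dstKey, others);
        if (src == null || dst == null) {
            return false; // keep the record for a later retry; no database query is issued here
        }
        writer.writeEdge(src, dst, props);
        return true;
    }

    private Long resolve(String key, List<RemoteFollower> others) {
        Long id = idMap.get(key);
        if (id != null) {
            return id; // hit in the local cache, no remote call needed
        }
        for (RemoteFollower follower : others) {
            id = follower.lookupVertexId(key); // distributed query over the other followers
            if (id != null) {
                return id;
            }
        }
        return null;
    }
}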
In an exemplary embodiment, after the master node determines the boundary data to be imported into the graph database according to the results of the distributed concurrent processing once the slave nodes have completed the distributed concurrent processing of the data blocks, the method further includes: the master node collects, through a remote invocation framework, the offsets and boundary information of the data blocks determined on each slave node and sorts the boundary information; complete boundary data is obtained according to the offsets and the boundary information; and, once the master node has obtained the complete boundary data, a slave node is notified to import the complete boundary data.
In short, when a slave node processes a local point-data block or edge-data block, it can calculate the boundary-data offset range of the block from the line-break characters and the block's start and end offsets, so that the slave node determines the offsets and boundary information corresponding to its data blocks; the master node then collects the boundary information on each slave node through the remote invocation framework, sorts and arranges it, determines the content of the complete boundary data from the aggregated list of boundary offsets, and notifies one slave node to import that boundary data.
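A simplified sketch of how a follower might locate its block's boundary-data offset range from line breaks is given below; reading byte by byte and the Range record are choices made for the sketch, not details taken from the patent.

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

/** Sketch of how a follower derives the boundary-data offset range of one block from line breaks. */
public final class BlockBoundary {

    /**
     * Offsets of the block's complete-line region: [blockStart, firstFullLineStart) is the head
     * boundary fragment and [lastFullLineEnd, blockStart + blockLength) is the tail boundary
     * fragment; both ranges are reported to the master instead of being imported locally.
     */
    public record Range(long firstFullLineStart, long lastFullLineEnd) {}

    public static Range locate(FileSystem fs, String file, long blockStart, long blockLength)
            throws IOException {
        try (FSDataInputStream in = fs.open(new Path(file))) {
            in.seek(blockStart);
            long pos = blockStart;
            if (blockStart > 0) {
                // Skip the partial line carried over from the previous block.
                int b;
                while ((b = in.read()) != -1) {
                    pos++;
                    if (b == '\n') break;
                }
            }
            long firstFullLineStart = pos;

            // Find where the last complete line inside this block ends.
            long lastFullLineEnd = firstFullLineStart;
            while (pos < blockStart + blockLength) {
                int b = in.read();
                if (b == -1) break;
                pos++;
                if (b == '\n') lastFullLineEnd = pos;
            }
            return new Range(firstFullLineStart, lastFullLineEnd);
        }
    }
}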
In order to better understand the technical solutions of the embodiments and the alternative embodiments of the present invention, the flow of the data importing method of the graph database is explained below with reference to examples; the technical solutions of the embodiments of the present invention are not limited thereto.
In the related art, data importing methods for graph databases have problems such as: first, a performance bottleneck exists when importing large volumes of graph data; second, when importing edge data, the vertices must be queried in the database.
Because an edge is established between two points, edge data cannot be imported in isolation; the identifying values of the associated vertices must also be obtained when the edge data is imported. Therefore, the optional embodiments of the invention aim to solve the performance problem of importing large-scale data into a graph database and the problem that the vertices need to be queried in the database when importing edge data.
As an optional implementation manner, an optional embodiment of the present invention provides a method for performing distributed concurrent import based on the block distribution characteristic of HDFS and an RPC remote method invocation tool, to solve the import performance problem under large data volumes; fig. 4 is an interaction diagram of the tool performing distributed concurrent import according to this optional embodiment.
Step 1, start the job: read in the import parameters, obtain the host list corresponding to the data file to be uploaded, and upload the data file to HDFS.
Step 2, transmit the relevant files to the corresponding first slave node Follower1;
step 2.1, initialize idMap on the first slave node Follower1 according to the input mapping (mapper) file;
step 2.2, register the slave's remote invocation methods and start the slave server to wait for remote calls (a sketch of this remote interface is given after this step list);
step 3, transmit the related files to the master node importerLeader and start the master main program (i.e. the main process program);
step 4, the master node loads the graph schema (load schema) according to the supplied graph schema file;
step 5, read the combined point-edge file, find the point-edge critical offsets, and output the data block information (covering the offsets and boundary information);
optionally, start vertex data processing;
step 6, the master node takes the vertex file block information as a parameter and remotely calls the slave's load method so that the slave node loads its local blocks;
step 6.1, the first slave node Follower1 imports a vertex and puts its long ID into idMap, i.e. stores the vertex ID corresponding to the vertex in idMap;
step 7, return all of the slave's local block boundary data lists;
step 8, after the master node has received the returns for the local blocks of all slave nodes, summarize the boundary data of all vertex blocks;
step 9, remotely call the slave on the master's local machine only, to process all the boundary data and import it.
Optionally, start edge data processing;
step 10, determine that the data to be processed is an edge file;
step 11, the master node takes the file block information as a parameter and remotely calls the slave's load method so that the slave node loads its local blocks;
step 12, first check whether the local idMap of the first slave node Follower1 contains the long IDs of the left and right vertices; if both are present, import the edge directly; if either one is missing, store the record;
step 13, if the long ID of the left or right vertex cannot be found in the local idMap, remotely call the lookup on the other slave nodes in turn, for example, search on the second slave node Follower2;
step 14, return an ArrayList<EdgeRecord> containing the long IDs, i.e. after the edge records whose vertex IDs are found on the second slave node Follower2 have been determined, the corresponding list is returned to the first slave node Follower1;
step 15, according to the edge records containing the vertex IDs returned by the second slave node Follower2, import the edge data whose vertex long IDs could not previously be found locally;
step 16, the first slave node Follower1 returns all of the slave's local block boundary data lists;
step 17, after the master node has received the returns for the local blocks of all slave nodes, summarize the boundary data of all the blocks.
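As promised in step 2.2, the sketch below shows what the remote interface registered by a follower and used in steps 13 to 15 could look like. EdgeRecord's fields and the resolveEdges method name are assumptions made for the example; the patent only specifies that an ArrayList of edge records containing the long IDs is returned.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Sketch of the remote method a follower registers (step 2.2) and answers in steps 13-14. */
public final class FollowerRpc {

    /** Edge record whose end-vertex long IDs may still be unresolved; field names are assumptions. */
    public static final class EdgeRecord {
        public String leftKey, rightKey, properties;
        public Long leftId, rightId;
    }

    /** Remote interface exposed by every follower; the method name is illustrative, not from the patent. */
    public interface FollowerApi {
        List<EdgeRecord> resolveEdges(List<EdgeRecord> pending);
    }

    /** Server-side handler backed by the vertex-ID cache this follower built while importing points. */
    public static final class Handler implements FollowerApi {
        private final ConcurrentMap<String, Long> idMap = new ConcurrentHashMap<>();

        public void cacheVertex(String key, long id) {
            idMap.put(key, id);
        }

        @Override
        public List<EdgeRecord> resolveEdges(List<EdgeRecord> pending) {
            List<EdgeRecord> resolved = new ArrayList<>();
            for (EdgeRecord rec : pending) {
                if (rec.leftId == null) rec.leftId = idMap.get(rec.leftKey);
                if (rec.rightId == null) rec.rightId = idMap.get(rec.rightKey);
                if (rec.leftId != null && rec.rightId != null) {
                    resolved.add(rec); // the caller imports these edges on return (step 15)
                }
            }
            return resolved;
        }
    }
}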
as an optional implementation manner, an optional embodiment of the present invention further provides a vertex ID cache query scheme, which solves the problem that a graph database needs to be queried when importing edge data, and particularly compatibly supports an input format of point-edge separation data and point-edge integration data. The method can effectively improve the graph database data importing capacity in large data volume.
Optionally, fig. 5 is a schematic flowchart of vertex ID cache lookup according to an alternative embodiment of the present invention, including the following steps:
step 1: the data File is uploaded to an HDFS (Hadoop Distributed File System, HDFS for short) corresponding to a graph database by introducing a graph name, a graph schema File (equivalent to a graph metadata File in the embodiment of the present invention), a data-map data mapping File of point-to-edge mapping, and a data File execution script program.
Step 2: and (3) detecting whether the diagram name transmitted in the step (1) exists in a database, if not, creating a diagram according to the diagram name and loading the diagram schema according to the schema file.
And step 3: the leader nodes sequentially process the files according to the point edge mapping file, obtain a block data block distribution list of the files on the HDFS, and uniformly distribute local blocks located on each follower node (which is equivalent to a slave node in the implementation of the invention) so as to ensure that each follower node processes the local blocks.
And 4, step 4: a leader (corresponding to a master node in the implementation of the present invention) notifies each follower node to start concurrent processing of local blocks allocated to the follower node through a Remote Procedure Call (RPC) Remote Call framework, calculates boundary data, and sends the boundary data to the leader node through an RPC Remote Call, and the leader node imports the boundary data after receiving all the boundary data (points or edges).
And 5: if the processing point file is a processing point file, a follower node (equivalent to a slave node in the implementation of the present invention) needs to write vertex ID (equivalent to vertex ID in the implementation of the present invention) returned by a graph database into a local memory when importing point data, so as to be used for performing distributed query on vertex ID in a cache when importing edge data in a subsequent step.
Step 6: if the edge file is processed, each follower node firstly inquires the vertex id in the local cache, and if the vertex id cannot be found in the local cache, other remote follower nodes are inquired through the RPC remote call framework.
It should be noted that the application scenarios of the above alternative embodiments mainly include: scenarios in which a graph database whose storage back end is based on an HDFS component imports point-edge graph data, and scenarios involving the cached-vertex-ID scheme when importing edge data.
With this embodiment, distributed concurrent import is performed based on the block distribution characteristic of HDFS and an RPC remote method invocation tool. Because each follower working node processes its local blocks according to the HDFS block distribution, load balance is well ensured and the overall data import performance is accelerated. For the edge-import problem caused by the point-association characteristic of graph data, the vertex-ID caching scheme lets each follower working node cache vertex IDs in memory when importing points and query the other followers in a distributed manner when importing edges, which accelerates the import of edge data, solves the problem that the graph database must be queried when importing edge data, and improves the performance of large-scale data import into the graph database.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another aspect of the embodiments of the present invention, there is also provided a data importing apparatus for implementing the data importing method of the graph database. As shown in fig. 6, the apparatus includes:
a determining module 62, configured to determine a data file to be uploaded, where the data file comprises a data mapping file of the point data and edge data corresponding to the graph data;
a processing module 64, configured to process the data mapping file through the master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes;
a distribution module 66, configured to distribute the data blocks to be processed to their corresponding slave nodes through the data block distribution list, and to dispatch the processing tasks corresponding to the data blocks;
and an importing module 68, configured to determine, by the master node, the boundary data to be imported into the graph database according to the results of the distributed concurrent processing when the slave nodes have completed the distributed concurrent processing of the data blocks.
With the above apparatus, a data file to be uploaded is determined, wherein the data file comprises a data mapping file of the point data and edge data corresponding to the graph data; the data mapping file is processed through the master node to obtain a data block distribution list mapping the data blocks of the data mapping file to the slave nodes; the master node distributes the data blocks to be processed to their corresponding slave nodes through the data block distribution list and dispatches the processing tasks corresponding to the data blocks; and, when the slave nodes have completed the distributed concurrent processing of the data blocks, the master node determines the boundary data to be imported into the graph database according to the results of the distributed concurrent processing. That is to say, the distributed import framework with a master-slave structure, in which each slave node processes its locally stored data blocks, saves the network I/O overhead of reading data in a distributed setting; the vertex-caching scheme used when importing point data and the distributed vertex-query scheme used when importing edge data avoid the cost of querying the database when importing edge data. This further improves the efficiency of importing large-scale data files into the graph database, solves the prior-art problems that graph database performance becomes a bottleneck when importing large volumes of graph data and that edge data is imported inefficiently, and effectively improves the graph database's data import capability for large data volumes.
In an exemplary embodiment, the importing module is configured to acquire, by the master node, the boundary offsets in the processing result of the data blocks on each slave node; and to perform a summarizing operation on the boundary offsets through a preset algorithm to obtain the boundary data to be imported that corresponds to those boundary offsets.
In short, after the boundary offset of each slave node is determined, in order to quickly determine the corresponding boundary data, the master node aggregates the boundary offsets reported by its slave nodes and determines the boundary data corresponding to those offsets.
In an exemplary embodiment, the apparatus further includes a creating module, configured to determine whether the graph name corresponding to the data file to be uploaded exists in the graph database, wherein the data file further comprises the graph metadata file corresponding to the graph data and the graph name indicates the name of the data file corresponding to the graph data; and, when the graph name does not exist in the graph database, to create a new graph data file in the graph database according to the graph name and the graph metadata file, and to load the target graph corresponding to the new graph data file.
As an alternative embodiment, the overall process of the invention is as follows:
determine whether a graph name corresponding to the data file to be uploaded to the graph database exists in the graph database, wherein the data file comprises a graph schema metadata file and a data mapping file of the point data and edge data corresponding to the graph; when the graph name does not exist in the graph database, create a new graph in the graph database according to the graph name and load the graph schema corresponding to the graph according to the graph schema metadata file; based on the block distribution characteristic of HDFS and the idea of processing local block data locally, the master node assigns the blocks of the data file so that the blocks stored on a given slave node are assigned to that slave node, and distributes the block-processing tasks to the slave nodes; each slave node processes the block data assigned to it, caches the vertex IDs into memory when importing point data, and obtains the IDs of the two end vertices directly from the memory of other slave nodes when importing edge data, so as to avoid querying the database; and the slave nodes report the boundary offsets of their blocks to the master node, which processes and imports the merged boundary data.
Further, the data mapping file is processed through the master node; the data block distribution list mapping the data blocks of the data mapping file to the slave nodes is determined according to the principle that local block data is processed by the local node; the data processing tasks are distributed; and the slave nodes perform distributed concurrent data processing.
In an exemplary embodiment, the importing module is further configured to, when the data mapping file is determined to be the data mapping file corresponding to point data, instruct the slave node to write the vertex ID returned by the graph database into its local cache, wherein the slave node is configured to import the point data and/or edge data of the graph data corresponding to the data file to be uploaded; and, when the data mapping file is determined to be the data mapping file corresponding to edge data, to determine whether the IDs of the two end vertices corresponding to the edge data exist in the local memory of the slave node, so as to determine how the edge data is to be written.
In an exemplary embodiment, the importing module is further configured to perform a distributed query for the vertex IDs on the other slave nodes corresponding to the master node when the IDs of the two end vertices corresponding to the edge data do not exist in the local memory of the slave node; and, when the IDs of the two end vertices corresponding to the edge data do exist in the local memory of the slave node, to write the edge data directly according to the vertex IDs queried from the local memory cache of the slave node.
In an exemplary embodiment, the importing module is further configured to acquire, by the master node through a remote invocation framework, the vertex-ID information cached on the other slave nodes for querying; and, when the query finds the vertex IDs corresponding to the edge data, to instruct the writing of the edge data with the vertex IDs found on the other slave nodes.
It can be understood that, when the data mapping file is determined to be the data mapping file corresponding to point data, the vertex ID returned by the graph database is written into the local cache at the same time as the slave node writes the point data into the graph database, where the slave node is used to import the point data and/or edge data corresponding to the graph. When the data mapping file is determined to be the data mapping file corresponding to edge data, whether a distributed query for the vertex IDs is sent to other slave nodes is determined according to whether the IDs of the two end vertices corresponding to the edge data exist in the local memory. If the vertex IDs can be found in the slave node's local cache, the edge data is written directly; otherwise, the vertex IDs are queried on the other slave nodes through the remote invocation framework and the edge data is then written.
In an exemplary embodiment, the importing module is further configured to collect, by the master node through a remote invocation framework, the offsets and boundary information of the data blocks determined on each slave node and to sort the boundary information; to obtain complete boundary data according to the offsets and the boundary information; and, once the master node has obtained the complete boundary data, to notify a slave node to import the complete boundary data.
In short, when a slave node processes a local point-data block or edge-data block, it can calculate the boundary-data offset range of the block from the line-break characters and the block's start and end offsets, so that the slave node determines the offsets and boundary information corresponding to its data blocks; the master node then collects the boundary information on each slave node through the remote invocation framework, sorts and arranges it, determines the content of the complete boundary data from the aggregated list of boundary offsets, and notifies one slave node to import that boundary data.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or assembly referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two components can be directly connected or indirectly connected through an intermediate medium, and the two components can be communicated with each other. When an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The specific meaning of the above terms in the present invention can be understood in specific cases to those skilled in the art.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for executing the following steps:
S1, determining a data file to be uploaded, wherein the data file comprises: a data mapping file of point data and edge data corresponding to the graph data;
S2, processing the data mapping file through a master node to obtain a data block distribution list corresponding to the data mapping file and the slave nodes;
S3, the master node distributes the data blocks to be processed to the corresponding slave nodes through the data block distribution list, and distributes the processing tasks corresponding to the data blocks;
S4, in the case that the slave nodes have finished the distributed concurrent processing of the data blocks, the master node determines, according to the processing result of the distributed concurrent processing, the boundary data to be imported into the database.
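As an illustrative sketch of steps S2 and S3 only (the block size, the round-robin assignment policy, and all names below are assumptions, not the claimed implementation), the master node can be pictured as splitting the data mapping file into byte-offset blocks and building a distribution list that maps each slave node to the blocks it should process:

    import os
    from collections import defaultdict

    def build_block_distribution(path, slave_nodes, block_size=64 * 1024 * 1024):
        # Split the file at `path` into [start, end) byte-offset blocks and
        # assign each block to a slave node in round-robin order.
        # Returns {slave_node: [(start_offset, end_offset), ...]}.
        file_size = os.path.getsize(path)
        distribution = defaultdict(list)
        for i, start in enumerate(range(0, file_size, block_size)):
            end = min(start + block_size, file_size)
            distribution[slave_nodes[i % len(slave_nodes)]].append((start, end))
        return dict(distribution)

    # Hypothetical usage: the master would send each slave node its offset list
    # together with the processing task for those blocks (step S3).
    # distribution = build_block_distribution("point_edge_mapping.csv",
    #                                         ["slave-1", "slave-2", "slave-3"])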
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute, by means of a computer program, the following steps:
S1, determining a data file to be uploaded, wherein the data file comprises: a data mapping file of point data and edge data corresponding to the graph data;
S2, processing the data mapping file through a master node to obtain a data block distribution list corresponding to the data mapping file and the slave nodes;
S3, the master node distributes the data blocks to be processed to the corresponding slave nodes through the data block distribution list, and distributes the processing tasks corresponding to the data blocks;
S4, in the case that the slave nodes have finished the distributed concurrent processing of the data blocks, the master node determines, according to the processing result of the distributed concurrent processing, the boundary data to be imported into the database.
Alternatively, in this embodiment, those skilled in the art will understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, and the program may be stored in a computer-readable storage medium, which may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

CN202110858726.9A (priority date 2021-07-28; filing date 2021-07-28): Data importing method and device of graph database, storage medium and electronic equipment. Status: Active. Granted as CN113468275B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110858726.9A (granted as CN113468275B (en)) | 2021-07-28 | 2021-07-28 | Data importing method and device of graph database, storage medium and electronic equipment

Publications (2)

Publication Number | Publication Date
CN113468275A (en) | 2021-10-01
CN113468275B (en) | 2024-07-30

Family

ID=77883049

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110858726.9A (Active; granted as CN113468275B (en)) | 2021-07-28 | 2021-07-28 | Data importing method and device of graph database, storage medium and electronic equipment

Country Status (1)

Country | Link
CN (1) | CN113468275B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114817262A (en)* | 2022-04-27 | 2022-07-29 | 电子科技大学 | Graph traversal algorithm based on distributed graph database
CN118820535A (en)* | 2024-09-19 | 2024-10-22 | 杭州悦数科技有限公司 | A method and device for concurrently importing graph database data based on dependency tree

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20160092534A1 (en)* | 2014-09-25 | 2016-03-31 | Oracle International Corporation | Database snapshots
US20160110473A1 (en)* | 2014-10-16 | 2016-04-21 | Adp, Llc | Graph Loader for a Flexible Graph System
CN104809168A (en)* | 2015-04-06 | 2015-07-29 | 华中科技大学 | Partitioning and parallel distribution processing method of super-large scale RDF graph data
US10719501B1 (en)* | 2017-03-03 | 2020-07-21 | State Farm Mutual Automobile Insurance Company | Systems and methods for analyzing vehicle sensor data via a blockchain
CN109344269A (en)* | 2018-08-14 | 2019-02-15 | 北京奇虎科技有限公司 | Graph database writing method, electronic device and computer-readable storage medium
CN109670089A (en)* | 2018-12-29 | 2019-04-23 | 颖投信息科技(上海)有限公司 | Knowledge mapping system and its figure server
CN111694834A (en)* | 2019-03-15 | 2020-09-22 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for putting picture data into storage and readable storage medium
CN110427359A (en)* | 2019-06-27 | 2019-11-08 | 苏州浪潮智能科技有限公司 | A kind of diagram data treating method and apparatus
CN111858730A (en)* | 2020-07-10 | 2020-10-30 | 苏州浪潮智能科技有限公司 | A data import and export device, method, device and medium for graph database
CN112287182A (en)* | 2020-10-30 | 2021-01-29 | 杭州海康威视数字技术股份有限公司 | Graph data storage and processing method and device and computer storage medium
CN112528090A (en)* | 2020-12-11 | 2021-03-19 | 北京百度网讯科技有限公司 | Graph data storage method and storage device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
卢民军: "审计数据知识图谱的构建与实现——基于Neo4j图数据库" [Construction and implementation of an audit data knowledge graph based on the Neo4j graph database], 15 January 2021 (2021-01-15), pages 154-157 *

Also Published As

Publication number | Publication date
CN113468275B (en) | 2024-07-30

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
