Disclosure of Invention
The embodiments of the invention provide a distributed database system and a self-adaptive method thereof, which solve the problems of unbalanced load among nodes, difficulty in adjusting data distribution, disruptive data migration, and complex maintenance in conventional distributed database systems.
The invention discloses a distributed database system, which comprises a control node, a client API, and data nodes.
The control node is used for managing the data nodes of the system, calculating the data route of the system and broadcasting the data route to the client API and the data nodes;
the client API is used for providing a read/write data interface for a data visitor and forwarding the received data operation request to a corresponding data node according to a locally cached data route;
and the data node is used for storing the data fragments and processing the received data operation request according to the data route of the local cache.
Preferably, the data nodes are deployed in the system in a manner of virtual machines or computing storage hosts.
Preferably, the client API is run by the data accessor as a dynamic library or plug-in.
Preferably, the control node is configured to monitor the number and state changes of the data nodes in the system in real time, and execute node capacity expansion/capacity reduction operation when the number of the data nodes changes; and when the state of the data node changes, updating the state of the corresponding data node in the data route and broadcasting the updated data route.
Preferably, the client API is configured to calculate, according to a data keyword in the received data operation request, a data fragment corresponding to the requested data, and search a data node where each data fragment is located in a data route cached locally; and forwarding the data operation request to a corresponding data node according to a data node selection rule of the local cache.
Preferably, the data node is configured to, after receiving the data operation request, search, in a data route of a local cache, whether a data fragment in the data operation request is stored in the data node; when the data fragment is not stored in the data node, searching the data node where the data fragment is located in a data route of a local cache, and forwarding the data operation request to the found data node; and when the data fragments are stored in the data node, executing the data operation request and returning a data operation response to the data visitor.
Preferably, the data node is configured to report its own state to the control node periodically, and to report its state to the control node in real time when a link changes;
the control node is used for periodically updating the data route.
Preferably, the data node is configured to perform a data recovery operation and a data copy operation;
the control node is configured to perform domain division on the data node according to a preset domain division rule.
The invention further discloses a self-adaptive method of the distributed database system, which executes the following steps after the system is powered on:
the control node calculates the data route of the system and broadcasts the data route to the client API and all the data nodes;
a client API receives a data operation request of an accessor, and forwards the request to a corresponding data node according to a data route of a local cache;
and the data node processes the received data operation request and returns a data operation response to the visitor.
Preferably, before calculating the data route of the system, the control node further performs the following step:
carrying out domain division on the data nodes according to a preset domain division rule.
Preferably, the domain division rule is: if the data nodes belong to a single host/server, divide the data nodes into either the left domain or the right domain; if the data nodes belong to two or more hosts/servers, divide the data nodes into a left domain and a right domain on the principle of distributing the hosts/servers evenly, with data nodes belonging to the same host/server placed in the same domain.
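The domain division rule above can be sketched as follows. This is a hypothetical illustration, not code from the specification; the function name and the convention of assigning a lone host to the left domain are assumptions.

```python
def divide_domains(node_hosts):
    """node_hosts: dict mapping data-node id -> host/server id.
    Returns (left_domain, right_domain) as lists of node ids."""
    # Group nodes by the host/server they belong to (insertion order preserved).
    hosts = {}
    for node, host in node_hosts.items():
        hosts.setdefault(host, []).append(node)
    if len(hosts) == 1:
        # A single host/server: all nodes go into one domain (left, by convention here).
        return list(node_hosts), []
    left, right = [], []
    # Alternate hosts between the domains so the host counts stay as equal as
    # possible, while nodes sharing a host always land in the same domain.
    for i, nodes in enumerate(hosts.values()):
        (left if i % 2 == 0 else right).extend(nodes)
    return left, right
```

With the fig. 1 example of 4 nodes on 2 hosts, nodes 1 and 2 (host A) land in the left domain and nodes 3 and 4 (host B) in the right domain.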
Preferably, the control node calculates the number of data fragments to be distributed on each data node according to the number of data nodes and the number of data fragments of the system, and generates a data route.
Preferably, the step of forwarding the request to the corresponding data node by the client API according to the locally cached data route specifically includes:
calculating corresponding data fragments according to the data keywords in the data operation request;
searching a data node corresponding to each data fragment in a data route of a local cache;
and respectively forwarding the data operation requests to the found data nodes according to a preset data node selection rule.
Preferably, the data node selection rule is as follows:
when the number of the data nodes corresponding to the searched data fragments is 1, directly forwarding the data operation request to the data nodes;
when the number of data nodes corresponding to the searched data fragment is greater than 1, judging the type of the data operation request: if it is a write operation, checking the copy number of the data fragment on each data node and the state of each data node, and sending the data operation request to a data node whose state is normal and whose copy number is low; if it is a read operation, sending the data operation request to the data node with the lowest load.
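The selection rule above can be sketched as a small function. This is a hypothetical illustration under assumed data shapes (the `state`, `copy_no`, and `load` field names are not from the specification); writes prefer a healthy node holding the lowest-numbered copy, reads prefer the least-loaded node.

```python
def select_node(nodes, request_type):
    """nodes: list of dicts like {"id": ..., "state": ..., "copy_no": ..., "load": ...}
    describing the data nodes holding copies of the requested fragment.
    Returns the id of the node the request should be forwarded to."""
    if len(nodes) == 1:
        # Only one candidate: forward directly.
        return nodes[0]["id"]
    if request_type == "write":
        # Prefer a node in a normal state holding the lowest-numbered copy.
        healthy = [n for n in nodes if n["state"] == "normal"]
        return min(healthy, key=lambda n: n["copy_no"])["id"]
    # Read operation: pick the node with the lowest load.
    return min(nodes, key=lambda n: n["load"])["id"]
```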
Preferably, the data node processes the received data operation request by the following method:
searching whether the data fragments in the data operation request are stored in the data node or not in the data route of the local cache; if yes, executing the data operation request, and returning a data operation response to the data visitor; otherwise, searching the data node where the data fragment is located in the data route of the local cache, and forwarding the data operation request to the found data node.
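The execute-or-forward decision above reduces to a route lookup. The sketch below is illustrative only: the route is assumed to be a plain mapping from fragment id to the ids of the nodes holding it, which is one possible representation and not the patent's own.

```python
def handle_request(self_id, route, request):
    """route: dict mapping fragment id -> list of data-node ids holding it.
    Returns ("execute", None) when the fragment is stored locally,
    or ("forward", node_id) naming a node that does hold the fragment."""
    holders = route[request["fragment"]]
    if self_id in holders:
        # The fragment is on this node: execute and answer the accessor.
        return ("execute", None)
    # Not stored here: look up the fragment in the cached route and forward.
    return ("forward", holders[0])
```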
Preferably, the execution data operation request specifically includes:
when the data operation request is write operation, adding, modifying or deleting the copy of the data fragment stored in the local according to the operation mode of the visitor;
and when the data operation request is a read operation, reading data from the locally stored copy of the data fragment.
Preferably, when the data operation request is a write operation, after the data operation request is processed, a data copy process is executed, specifically:
recording data changed by the data fragments or full data;
and searching data nodes where the rest copies of the data fragments are located in the data route of the local cache, and copying the data or the full data changed by the data fragments to the data nodes where the rest copies of the data fragments are located.
Preferably, the control node further performs the following steps during the operation of the system:
monitoring in real time whether a data node has been added to or deleted from the system; if a data node has been added, executing the node capacity expansion operation; and if a data node has been deleted, executing the node capacity reduction operation.
Preferably, the node capacity expansion operation specifically includes the following steps:
calculating a first copy data fragment list and a second copy data fragment list to be migrated to the newly added data node;
distributing a third copy for the data fragment to be migrated on the newly added data node, recalculating the data route of the system and broadcasting;
waiting for the newly added data node to recover the data;
receiving the self state reported by the newly added data node, recalculating the data route of the system according to a preset capacity expansion rule and broadcasting;
informing all data nodes to delete the third copies of all local data fragments;
and after the deletion of all the data nodes is confirmed to be completed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
Preferably, the step of calculating the first replica data fragment list and the second replica data fragment list to be migrated to the newly added data node specifically includes:
dividing the total number of data fragments by the total number of data nodes including the newly added data node, to calculate the average number of data fragments each data node should store;
subtracting the calculated average from the current number of data fragments on each data node, to obtain the number of data fragments to be migrated from each original data node to the newly added data node;
the first copies of all data fragments to be migrated from the original data nodes form the first copy data fragment list of the newly added data node, and the second copies of all data fragments to be migrated from the original data nodes form the second copy data fragment list of the newly added data node.
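The arithmetic of the two steps above can be written out directly. A minimal sketch, assuming integer division for the average (the specification does not say how a remainder is handled):

```python
def migration_counts(current_counts, total_fragments):
    """current_counts: dict of original node id -> fragments currently held.
    Returns, per original node, how many fragments it should hand to the
    newly added node."""
    total_nodes = len(current_counts) + 1  # include the newly added node
    average = total_fragments // total_nodes
    # Each original node gives up what it holds above the new average.
    return {node: count - average for node, count in current_counts.items()}
```

With the fig. 1 embodiment (16 fragments, 4 nodes holding 4 first copies each, one node added), the average drops to 3 and each original node migrates 1 fragment, leaving the new node with 4.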
Preferably, the preset capacity expansion rule is as follows:
informing the original data node to switch the first copy of the data fragment to be locally migrated to the newly added data node into the third copy; meanwhile, the newly added data node is informed to switch the third copy of the corresponding data fragment into the first copy;
informing the original data node to switch the second copy of the data fragment to be locally migrated to the newly added data node into a third copy; and simultaneously informing the newly added data node to switch the third copy of the corresponding data fragment into the second copy.
Preferably, the node capacity reduction operation specifically includes the following steps:
calculating a first copy data fragment list and a second copy data fragment list on each remaining node;
distributing a third copy for the data fragment to be migrated on the rest data nodes, recalculating the data route of the system and broadcasting;
waiting for the remaining data nodes to recover data;
waiting for the remaining data nodes to copy data;
receiving self states reported by other data nodes, recalculating the data route of the system and broadcasting according to a preset capacity reduction rule;
informing all data nodes to delete the third copies of all local data fragments;
and after the deletion of all the data nodes is confirmed to be completed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
Preferably, the step of calculating the first replica data fragment list and the second replica data fragment list on each remaining node specifically includes:
dividing the total number of the data fragments by the number of the remaining data nodes, and calculating the average number of the data fragments to be stored by each data node in the remaining data nodes;
subtracting the current number of data fragments on each remaining data node from the average number of data fragments, to obtain the number of data fragments each remaining data node is to take over from the node to be closed;
and according to a preset data fragment distribution principle, distributing the first copy and the second copy of the data fragment on the data node to be deleted to the remaining data nodes to obtain a first copy data fragment list and a second copy data fragment list on each remaining node.
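The redistribution step above can be sketched with a simple greedy assignment. This is a hypothetical simplification: it only balances per-node counts and ignores the domain constraint on first and second copies that the full distribution principle imposes.

```python
def redistribute(closing_fragments, remaining_counts):
    """closing_fragments: fragment ids held by the node to be deleted.
    remaining_counts: dict of remaining node id -> fragments currently held.
    Returns dict of remaining node id -> list of fragments it takes over."""
    counts = dict(remaining_counts)
    assignment = {n: [] for n in counts}
    for frag in closing_fragments:
        # Give each fragment to the currently least-loaded remaining node,
        # keeping the per-node counts as equal as possible.
        target = min(counts, key=counts.get)
        assignment[target].append(frag)
        counts[target] += 1
    return assignment
```

Closing one node of the fig. 1 embodiment, its 4 fragments spread over the 3 remaining nodes, whose counts end up differing by at most one.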
Preferably, the preset capacity reduction rule is as follows:
informing the data node to be deleted to switch the first copy of the data fragment to be migrated into the third copy; simultaneously informing the residual data nodes storing the third copy of the data fragment to switch the third copy of the data fragment into the first copy;
informing the data node to be deleted to switch the second copy of the data fragment to be migrated into the third copy; and simultaneously informing the residual data nodes storing the third copy of the data fragment to switch the third copy of the data fragment into the second copy.
Preferably, the data fragment distribution principle is as follows:
the number of data fragments on each data node is as equal as possible;
the first copy and the second copy of each data fragment are distributed on data nodes in different domains; and
the second copies of all first-copy data fragments on each data node are evenly distributed across all the data nodes in the other domain.
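The second of the three conditions above can be verified mechanically against a route. A minimal sketch, assuming a route representation (fragment id mapped to the nodes holding its first and second copies) that is illustrative rather than taken from the specification:

```python
def check_distribution(route, domains):
    """route: fragment id -> {"first": node_id, "second": node_id}.
    domains: node id -> "left" or "right".
    Returns True iff every fragment's first and second copies sit on
    data nodes in different domains."""
    return all(domains[c["first"]] != domains[c["second"]]
               for c in route.values())
```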
Preferably, the data node recovers the data by:
querying the local data route to find the data nodes where the third copies of the first-copy data fragments on this node are located;
copying the corresponding data fragments to the data nodes where the third copies are located;
and after the recovery is finished, reporting the self state to the control node.
Preferably, the added data node is a data node newly added to the system;
the deleted data node includes: the data nodes needing to be deleted because the burden is less than the preset value and the data nodes needing to be deleted because of receiving a user deleting instruction.
Preferably, the client API determines the fragment number of the requested data by taking a HASH value of the data keyword and then taking that HASH value modulo the total number of data fragments.
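The hash-then-modulo rule above can be sketched as follows. The patent does not name a hash function, so the use of MD5 here is purely an illustrative assumption.

```python
import hashlib

def fragment_of(key, total_fragments):
    """Map a data keyword to a fragment number: take a HASH value of the
    keyword, then take that value modulo the total number of fragments."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % total_fragments
```

The mapping is deterministic, so every client API and data node computes the same fragment number for the same keyword without any coordination.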
Compared with the prior art, the invention needs no dedicated proxy access node, so the data access path is shorter and more efficient; data is stored and managed by fragment, the data nodes are not divided into primary and standby nodes, and the multiple copies of the same fragment can replicate to one another, so the load among the nodes of the distributed database is better balanced; and the data route is calculated and distributed automatically, the data migration process is controllable, data migration is smoother and more uniform, no manual intervention is needed, and access is never interrupted.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a block diagram of a distributed database system according to the present invention. This embodiment comprises a control node 10, a client API 20, and data nodes 30; in this embodiment there are 4 data nodes 30. Wherein,
the control node 10 is used for managing a data node 30 of the system, calculating the data route of the system and broadcasting the data route to the client API 20 and the data node 30; the method specifically comprises the following steps:
periodically updating data routing and broadcasting;
monitoring the number and state change of the data nodes 30 in the system in real time, and executing node capacity expansion/capacity reduction operation when the number of the data nodes 30 in the system changes;
when the state of the data node 30 changes, updating the state of the corresponding data node 30 in the data route and broadcasting the updated data route; and
according to a preset domain division rule, carrying out domain division on the data nodes 30;
the domain division rule is as follows:
if the data nodes belong to a single host/server, divide the data nodes into either the left domain or the right domain; if the data nodes belong to two or more hosts/servers, divide the data nodes into a left domain and a right domain on the principle of distributing the hosts/servers evenly (that is, the numbers of hosts/servers assigned to the left and right domains are as equal as possible), with data nodes belonging to the same host/server placed in the same domain.
For example, as shown in fig. 1, the 4 data nodes are numbered 1-4 from left to right. If all 4 data nodes belong to one and the same host/server, the 4 data nodes are divided into either the left domain or the right domain. If the 4 data nodes belong to 2 hosts/servers, suppose the data nodes numbered 1 and 2 belong to a first host/server and the data nodes numbered 3 and 4 belong to a second host/server; then data nodes 1 and 2 are placed in the left domain and data nodes 3 and 4 in the right domain, so that each domain has 2 data nodes. Or suppose the data nodes numbered 1, 2, and 3 belong to a first host/server and the data node numbered 4 belongs to a second host/server; then data nodes 1, 2, and 3 are placed in the left domain and data node 4 in the right domain, so that the left domain has 3 data nodes and the right domain has 1 data node.
In order to balance the data fragments and ensure data reliability, the control node 10 should compute data routes according to the following data fragment distribution principle:
the number of data fragments on each data node is as equal as possible;
the first copy and the second copy of each data fragment are distributed on data nodes in different domains; and
the second copies of all first-copy data fragments on each data node are evenly distributed across all the data nodes in the other domain. For example, if the current data node is in the left domain and holds the first copies of 10 data fragments, then according to the distribution principle the second copies of those 10 data fragments should be evenly distributed across all the data nodes in the right domain; assuming the right domain has 2 data nodes, 5 of the 10 second copies are placed on each data node in the right domain.
As shown in fig. 1, in this embodiment the distributed database system has 4 data nodes 30 and stores 16 data fragments in total; the first copies of the data fragments are marked 1-16 and the second copies are marked 1'-16'. Each data node 30 stores the first copies of 4 data fragments and the second copies of 4 data fragments, and on each data node the fragments held as first copies are completely different from the fragments held as second copies.
The client API 20 is configured to provide an interface for reading/writing data for a data visitor, and send a received data operation request to a corresponding data node 30 according to a locally cached data route; the method specifically comprises the following steps:
calculating the corresponding data fragments according to the data keywords in the received data operation request, and searching the locally cached data route for the data node 30 where each data fragment is located; the fragment number of the requested data may be determined by taking a HASH value of the data keyword and then taking that HASH value modulo the total number of data fragments, or the data may be fragmented according to prefix/suffix ranges of the data keyword;
forwarding the data operation request to the corresponding data node 30 according to the data node selection rule of the local cache;
the client API 20 is operated by a data visitor in a dynamic library/plug-in mode;
the data node 30 is deployed in the system as a virtual machine or a computing storage host, and can be configured to belong to the left domain or the right domain; it is used for:
storing the data fragments;
data fragmentation means that the data is divided into a plurality of fragments according to the data keywords, and the data of different fragments is different; each data fragment has a first copy, a second copy, and a third copy, the third copy being used only temporarily while data nodes are being added or removed; the data of all copies of a fragment is the same, and the copies of the same data fragment are stored on data nodes in different domains according to the data fragment distribution principle;
caching the received data route and processing the received data operation request, wherein the data operation request comprises read and write operations; the method specifically comprises the following steps: after receiving the data operation request, searching whether the data fragment in the data operation request is stored in the data node 30 in the data route of the local cache; when the data fragment is not stored in the data node 30, searching the data node 30 where the data fragment is located in the data route of the local cache, and forwarding the data operation request to the found data node 30; when the data fragment is stored in the data node 30, the data operation request is executed, and a data operation response is returned to a data visitor;
when restarting or data routing changes, executing data recovery operation;
when the data fragment is changed, for example, the content of the data fragment is changed after the write operation is executed, the changed data or the full data is recorded, and the data copy operation is executed; copying the changed data or the full data to other data nodes 30 containing the same data fragment;
periodically reporting the self state to the control node 10; and reporting the self state to the control node 10 in real time when the link changes.
The topology of the distributed database system is hidden from the data accessor, which decouples the distributed database from its accessors.
FIG. 2 is a flow chart of a preferred embodiment of the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S101: the system is powered on, the control node 10 divides the domain of the data node 30 according to a preset domain division rule, then calculates the data route of the system and broadcasts the data route to the client API 20 and all the data nodes 30;
in this step, according to the number of data nodes 30, the number of data fragments, and a preset route calculation principle of the system, a first copy list and a second copy list of the data fragments that need to be distributed on each data node 30 are calculated, and a data route is generated.
The control node 10 is also responsible for data node discovery and state management in the system operation process, which are respectively shown in fig. 3 and 4;
step S102: after the system initialization is completed, the client API 20 receives a data operation request of an accessor;
step S103: calculating corresponding data fragments according to the data keywords in the data operation request;
in this step, the fragment number of the requested data may be determined by taking a HASH value of the data keyword and then taking that HASH value modulo the total number of data fragments; the data may also be fragmented according to prefix/suffix ranges of the data keyword;
step S104: searching a data node 30 corresponding to each data fragment in a data route of a local cache, and respectively forwarding the data operation request to the corresponding data node 30 according to a preset data node selection rule;
the data routing is a corresponding relationship between each data fragment and the data node 30.
The data node selection rule is as follows: when the number of the data nodes 30 corresponding to the searched data fragments is 1, directly forwarding the data operation request to the data nodes 30;
when the number of the data nodes 30 corresponding to the searched data fragment is greater than 1, judging the type of the data operation request, if the data operation request is write operation, checking the copy number of the data fragment in each data node 30 and the state of the data node 30, and sending the data operation request to the data node 30 with a normal state and a small copy number; and if the data operation request is a read operation, sending the data operation request to the data node 30 with the minimum load.
Step S105: after receiving the data operation request, the data node 30 searches the locally cached data route for whether the data fragment in the data operation request is stored on this data node 30; if yes, go to step S106; otherwise, go to step S107;
in this step, the data node determines, by analyzing the data keyword in the data operation request, whether the fragment of the requested data belongs to this node; if so, the data fragment corresponding to the requested data is stored on the data node 30; otherwise it is not.
Step S106: executing the data operation request, returning a data operation response to a data visitor, and finishing the current data fragmentation processing;
in this step, the request for executing the data operation specifically includes:
when the data operation request is write operation, adding, modifying or deleting the copy of the data fragment stored locally according to the operation mode of the visitor;
and when the data operation request is a read operation, reading data from the copy of the data fragment stored locally.
In the present invention, when the data operation request is a write operation, after the data operation request is processed, the data replication process shown in fig. 5 is also executed; that is, after the data node 30 modifies the local data, the modified data needs to be copied to the data node 30 where the other copies of the same segment are located.
Step S107: and searching the data node 30 where the data fragment is located in the data route of the local cache, and forwarding the data operation request to a corresponding data node which is normally communicated with the data node according to a preset data node selection rule.
That is, if the data fragment corresponding to the data operation request is in the data node 30, the data fragment is processed locally, and the local data is read and written; and if the data fragment corresponding to the data operation request is not in the data node 30, forwarding the data fragment to the corresponding node for processing.
FIG. 3 is a flowchart illustrating a preferred embodiment of a data node discovery process in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S201: the control node 10 monitors whether a data node 30 is newly added or deleted in the system in real time, and if the data node 30 is newly added, the step S202 is executed; if the data node 30 is found to be deleted, step S203 is executed;
the newly added data node is a data node newly added to the system;
the deleted data node includes: the data nodes needing to be deleted because the burden is less than the preset value and the data nodes needing to be deleted because of receiving a user deleting instruction.
Step S202: executing node capacity expansion operation, and finishing the current discovery processing;
the node capacity expansion operation is specifically shown in fig. 6;
step S203: and executing the node capacity reduction operation, and finishing the current discovery processing.
The node capacity reduction operation is shown in detail in fig. 7.
Fig. 4 is a flowchart illustrating a preferred embodiment of a data node state management process in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S301: the control node 10 receives the self state reported by the data node 30;
step S302: checking the state, if the state is normal, finishing the current state processing; if the result is abnormal, step S303 is executed;
step S303: the status of the data node 30 in the data route is updated and the updated data route is broadcast.
FIG. 5 is a flow chart of a preferred embodiment of data replication in the adaptive method of a distributed database system according to the present invention; the embodiment comprises the following steps:
step S301: the data node 30 executing the write operation records the changed data of the data fragment, or the full data, for the current write operation;
step S302: searching data nodes 30 where the rest copies of the data fragments are located in a data route of a local cache;
step S303: and copying the data changed by the data fragment or the full data to the data node 30 where the rest copies of the data fragment are located.
Copying the changed data or the full data to the data nodes 30 holding the other copies of the same fragment works in both directions: the data node 30 storing the first copy may accept the write and then copy the changed data or full data to the data nodes 30 where the second and third copies of the fragment are located, and a data node 30 storing the second or third copy may likewise accept the write and then copy the changed data or full data to the data nodes 30 where the first and third copies, or the first and second copies, of the fragment are located. That is, mutual replication among the data copies is allowed. The conflict that may arise when the same data in different copies of one fragment is replicated in both directions is resolved by comparing the update timestamps of the data, that is, by deciding whether to merge and overwrite the data or to abandon the change.
In the data replication process, the data nodes of the replicated data can synchronously complete corresponding data updating or asynchronously complete corresponding data updating.
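The timestamp comparison used to resolve replication conflicts can be sketched as below. This is an illustrative simplification: the record shape and the tie-breaking rule (keep the local value when timestamps are equal) are assumptions, since the text does not specify them.

```python
def resolve(local, incoming):
    """local/incoming: dicts like {"value": ..., "updated_at": ...} describing
    the same key in two copies of one fragment. The copy with the newer
    update timestamp wins; on a tie the local value is kept (assumed)."""
    if incoming["updated_at"] > local["updated_at"]:
        return incoming   # merge/overwrite with the newer change
    return local          # abandon the older (or equally old) incoming change
```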
Fig. 6 is a flowchart of a preferred embodiment of a node capacity expansion operation in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S401: the control node 10 calculates a first copy data fragment list and a second copy data fragment list to be migrated to the newly added data node 30; the method specifically comprises the following steps:
dividing the total number of data fragments by the total number of data nodes including the newly added data node 30, to calculate the average number of data fragments each data node should store; this average is less than the current number of data fragments on each original data node 30;
subtracting the calculated average from the current number of data fragments on each original data node 30, to obtain the number of data fragments to be migrated from each original data node 30 to the newly added data node 30;
the first copies of all data fragments to be migrated from the original data nodes 30 form the first copy data fragment list of the newly added data node 30, and the second copies of all data fragments to be migrated from the original data nodes 30 form the second copy data fragment list of the newly added data node 30; at this time the fragments in these lists contain no data yet;
step S402: distributing a third copy for the data fragment to be migrated on the newly added data node 30; re-routing and broadcasting data of the system;
step S403: waiting for the newly added data node 30 to recover the data;
the data node data recovery process is shown in fig. 8;
step S404: receiving the self state reported by the newly added data node 30, recalculating the data route of the system according to a preset capacity expansion rule and broadcasting;
the preset capacity expansion rule is as follows:
notifying the original data node 30 to switch the first copy of each data fragment to be migrated to the newly added data node 30 into a third copy, while notifying the newly added data node 30 to switch its third copy of the corresponding data fragment into the first copy;
notifying the original data node 30 to switch the second copy of each data fragment to be migrated to the newly added data node 30 into a third copy, while notifying the newly added data node 30 to switch its third copy of the corresponding data fragment into the second copy.
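The role swap in the capacity expansion rule above can be sketched as follows; the flat route-table layout and node names are assumptions made for illustration only:

```python
# Hedged sketch of the preset capacity expansion rule (step S404): for each
# migrated fragment, the original node's copy is demoted to a third copy and
# the new node's third copy is promoted to take over that role.

def apply_expansion_rule(route, fragment, old_node, new_node, role):
    """route: {(node, fragment): role}, where role is 'first' or 'second'."""
    assert route[(old_node, fragment)] == role      # old node held the role
    assert route[(new_node, fragment)] == "third"   # new node held a third copy
    route[(old_node, fragment)] = "third"           # demote on the old node
    route[(new_node, fragment)] = role              # promote on the new node

route = {("dn1", "f7"): "first", ("dn3", "f7"): "third"}
apply_expansion_rule(route, "f7", old_node="dn1", new_node="dn3", role="first")
# dn3 now serves the first copy of f7; dn1's copy is a third copy,
# to be deleted in step S405.
```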
Step S405: notifying all data nodes 30 to delete the third copies of all local data fragments;
step S406: after all data nodes 30 confirm the deletion, deleting the third copies from the local data route, then recalculating the data route of the system and broadcasting it.
Fig. 7 is a flowchart of a preferred embodiment of a node capacity reduction operation in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S501: the control node 10 calculates the first-copy data fragment list and the second-copy data fragment list for each remaining data node 30; this specifically comprises the following steps:
dividing the total number of data fragments by the number of remaining data nodes 30 to calculate the average number of data fragments each remaining data node 30 should store; this average is larger than it was before the nodes were removed;
subtracting the current number of data fragments on each remaining data node 30 from the average, to obtain the number of data fragments that node must take over from the data nodes to be closed;
according to a preset data fragment distribution principle, distributing the first and second copies of the data fragments on the data nodes 30 to be deleted among the remaining data nodes 30, yielding a first-copy data fragment list and a second-copy data fragment list for each remaining node;
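The mirror-image calculation in step S501 can be sketched as follows; as with the expansion sketch, the names and integer-division average are illustrative assumptions:

```python
# Illustrative sketch of step S501: deciding how many fragments each remaining
# node should absorb from the data nodes being removed.

def plan_reduction(remaining_counts, total_fragments):
    """remaining_counts: {node_id: current fragment count} for surviving nodes.
    Returns {node_id: fragments to take over from the nodes being closed}."""
    average = total_fragments // len(remaining_counts)
    # Each remaining node absorbs the gap between the new, higher average
    # and the number of fragments it currently holds.
    return {node: average - count
            for node, count in remaining_counts.items() if average > count}

plan = plan_reduction({"dn1": 4, "dn2": 4}, total_fragments=12)
# After removing one of three nodes the average rises from 4 to 6, so each
# remaining node takes over 2 fragments from the departing node.
```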
step S502: allocating, on the remaining data nodes 30, a third copy for each data fragment to be migrated; recalculating the data route of the system and broadcasting it;
step S503: waiting for the remaining data nodes 30 to recover data;
the data node 30 data recovery process is shown in Fig. 8;
step S504: waiting for the remaining data nodes 30 to copy the data;
the process of the data node 30 replicating data is shown in Fig. 5;
step S505: receiving the states reported by the remaining data nodes 30, then recalculating the data route of the system according to a preset capacity reduction rule and broadcasting it;
the preset capacity reduction rule is as follows:
notifying each data node 30 to be deleted to switch the first copy of each data fragment to be migrated into a third copy, while notifying the remaining data node 30 that stores the third copy of that data fragment to switch it into the first copy;
notifying each data node 30 to be deleted to switch the second copy of each data fragment to be migrated into a third copy, while notifying the remaining data node 30 that stores the third copy of that data fragment to switch it into the second copy.
Step S506: notifying all data nodes 30 to delete the third copies of all local data fragments;
step S507: after all data nodes 30 confirm the deletion, deleting the third copies from the local data route, then recalculating the data route of the system and broadcasting it.
Fig. 8 is a flowchart illustrating a preferred embodiment of a data node data recovery process in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S601: querying the local data route to find, for each first-copy data fragment on this node, the data node 30 where its third copy is located;
step S602: copying each such data fragment to the data node 30 where its third copy is located;
the data node 30 receiving a data fragment stores it into the corresponding third copy;
step S603: after all first-copy data fragments have been recovered, reporting this node's state to the control node 10.
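The recovery flow of Fig. 8 can be sketched from the perspective of a node that holds first copies; the route shape, `copy_fn`, and `report_state` callbacks are hypothetical names introduced for illustration:

```python
# Non-authoritative sketch of steps S601-S603: push each locally held
# first-copy fragment to the node holding its third copy, then report
# completion to the control node.

def recover_fragments(local_first_copies, route, copy_fn, report_state):
    """local_first_copies: fragments whose first copy lives on this node.
    route: {fragment: node holding its third copy}.
    copy_fn(fragment, target): pushes the fragment's data to the target node."""
    for fragment in local_first_copies:
        target = route[fragment]          # S601: look up the third-copy node
        copy_fn(fragment, target)         # S602: replicate the fragment data
    report_state("recovered")             # S603: report state to control node

sent = []
recover_fragments(["f1", "f2"], {"f1": "dn3", "f2": "dn3"},
                  lambda f, n: sent.append((f, n)),
                  lambda s: sent.append(("state", s)))
# sent records two fragment pushes to dn3 followed by the state report.
```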
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope; all equivalent structural or process changes made using the contents of this specification and drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.