Disclosure of Invention
To overcome the defects of the prior art, the invention provides a method for reducing the time consumption of calling a service, which reduces the time consumed by service calls and improves the timeliness of the service.
The technical scheme adopted by the invention to solve the technical problem is as follows: the method involves a management platform, a zookeeper, a database and an Internet transaction system, wherein the zookeeper contains a plurality of pieces of node information and the management platform can modify the node information of the zookeeper;
the Internet transaction system comprises a transaction module and a configuration module which process the transaction flow in a distributed manner; the transaction module sends an instruction to call data of the configuration module, the configuration module accesses the database according to the instruction, retrieves the query result and returns it to the transaction module, and both the transaction module and the configuration module can update their caches according to the query result and monitor node changes of the zookeeper.
As a further improvement of the above technical solution, the transaction module includes a first-level caffeine local cache unit, a first-level redis cache cluster, and a product service module;
the product service module sends an instruction to call the data of the configuration module; the first-level caffeine local cache unit is queried first according to the data returned by the configuration module, and the first-level redis cache cluster is queried when the lookup in the first-level caffeine local cache unit misses.
As a further improvement of the above technical scheme, the result returned by the configuration module can be refreshed directly, the refreshed data being stored in the first-level caffeine local cache unit; the first-level caffeine local cache unit then refreshes the data asynchronously, storing it in the first-level redis cache cluster.
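The two-level lookup with asynchronous write-back described above can be sketched as follows. This is a minimal illustration only: plain in-memory maps stand in for the caffeine local cache and the redis cluster (the real clients are external dependencies), and the `loader` callback stands in for the call to the configuration module.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Sketch of the two-level cache read path: local level first, then the
// redis level, then the loader; the local level is refreshed directly
// and the redis level is refreshed asynchronously.
class TwoLevelCache {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();  // stands in for caffeine
    private final Map<String, String> redisCache = new ConcurrentHashMap<>();  // stands in for redis
    private final ExecutorService refresher = Executors.newSingleThreadExecutor();

    String get(String key, Function<String, String> loader) {
        String value = localCache.get(key);         // query the local cache first
        if (value != null) return value;
        value = redisCache.get(key);                // fall back to the redis level
        if (value == null) {
            value = loader.apply(key);              // e.g. call the configuration module
        }
        localCache.put(key, value);                 // refresh the local level directly
        final String v = value;
        refresher.submit(() -> redisCache.put(key, v)); // asynchronously refresh the redis level
        return value;
    }

    void shutdown() { refresher.shutdown(); }
}
```

On a repeated request the value is served from the local level without touching the loader, which is the time saving the scheme targets.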
As a further improvement of the above technical solution, the configuration module includes a second-level caffeine local cache unit, a second-level redis cache cluster, and a sub-service module;
the sub-service module sends an instruction to call data from the database; the second-level caffeine local cache unit is queried first according to the data returned by the database, and the second-level redis cache cluster is queried when the lookup in the second-level caffeine local cache unit misses.
As a further improvement of the above technical scheme, the result returned by the database can be refreshed directly, the refreshed data being stored in the second-level caffeine local cache unit; the second-level caffeine local cache unit then refreshes the data asynchronously, storing it in the second-level redis cache cluster.
As a further improvement of the above technical solution, the sub-service module includes a client cache unit.
As a further improvement of the above technical solution, the Internet transaction system includes SpringCache; SpringCache, the first-level cache unit and the first-level redis cache cluster form a multi-level cache mechanism: the result of the first service processing is stored first in the first-level cache unit with an expiration time set, the result is asynchronously refreshed to the first-level redis cache cluster to achieve persistence, and the next request queries the local cache first or obtains the result from the redis persistent data;
SpringCache, the second-level cache unit and the second-level redis cache cluster likewise form a multi-level cache mechanism: the result of the first service processing is stored first in the second-level cache unit with an expiration time set, the result is asynchronously refreshed to the second-level redis cache cluster to achieve persistence, and the next request queries the local cache first or obtains the result from the redis persistent data.
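The expiration-plus-persistence behaviour above can be sketched with a timestamped local entry backed by a persistent level. This is an illustration under simplifying assumptions: a plain map stands in for the persistent redis level, and the TTL check stands in for the caffeine expiry policy.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a local cache entry with an expiration time, falling back
// to the persistent level when the local entry has expired or been lost.
class ExpiringCache {
    private static final class Entry {
        final String value;
        final long expiresAtMillis;
        Entry(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> local = new ConcurrentHashMap<>();
    private final Map<String, String> persistent = new ConcurrentHashMap<>(); // stands in for redis

    void put(String key, String value, long ttlMillis) {
        local.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis)); // local with expiry
        persistent.put(key, value);                                              // refresh to the persistent level
    }

    // Prefer the local level; on expiry (or after a restart that emptied
    // the local map) serve the result from the persistent level instead.
    String get(String key) {
        Entry e = local.get(key);
        if (e != null && e.expiresAtMillis > System.currentTimeMillis()) return e.value;
        local.remove(key);
        return persistent.get(key);
    }
}
```

An expired local entry is thus transparent to the caller: the query still succeeds, only via the slower persistent level.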
As a further improvement of the above technical scheme, the Internet transaction system comprises a public interface, so that transaction service processing results can conveniently be stored in the cache.
As a further improvement of the above technical scheme, the Internet transaction system comprises a monitor, which records cache anomalies at any time and makes it convenient to fuse (circuit-break) the cache by means of the cache switch.
As a further improvement of the above technical scheme, the Internet transaction system comprises a dynamic cache switch formed by combining an AOP aspect mechanism with thread variables.
As a further improvement of the above technical scheme, the Internet transaction system comprises an aspect interceptor arranged at the unified cache entrance.
The invention has the beneficial effects that:
1. the time consumption of service calls is reduced, and the timeliness of the service is improved;
2. long delays caused by network fluctuation, thread blocking and resource contention are avoided, and service stability is improved;
3. distributed deployment is supported and cluster service is provided, avoiding service failure caused by the downtime of a single node;
4. data is persisted, so that data remains effectively stored when the service is updated or restarted;
5. high request concurrency is supported, and the load capacity of the service is improved;
6. cache anomalies are recorded at any time, making it convenient to fuse the cache by means of the cache switch.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The conception, the specific structure, and the technical effects of the present invention are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, features, and effects of the present invention can be fully understood. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them; based on these embodiments, those skilled in the art can obtain other embodiments without inventive effort, and all such embodiments fall within the protection scope of the present invention. In addition, the connection relations referred to in this patent do not necessarily mean that the components are directly connected; rather, a better connection structure may be formed by adding or removing auxiliary connecting components according to the specific implementation. All technical features of the invention can be combined with one another provided they do not conflict.
The invention discloses a method for reducing the time consumption of calling a service, involving a management platform, a zookeeper, a database and an Internet transaction system. The zookeeper contains a plurality of pieces of node information, and the management platform can modify the node information of the zookeeper. The Internet transaction system comprises a transaction module and a configuration module which process the transaction flow in a distributed manner: the transaction module sends an instruction to call the data of the configuration module, the configuration module accesses the database according to the instruction, retrieves the query result and returns it to the transaction module, and both the transaction module and the configuration module can update their caches according to the query result and monitor node changes of the zookeeper.
In this embodiment, the invention uses distributed processing of the transaction flow (via the cache middleware) to improve the timeliness of the transaction flow in service processing; on this basis the processing flow is further optimized so that task resources are allocated reasonably and effectively, greatly improving the market competitiveness of the product.
Further, referring to fig. 1, the transaction module includes a first-level caffeine local cache unit, a first-level redis cache cluster, and a product service module. The product service module sends an instruction to call data of the configuration module; the first-level caffeine local cache unit is queried first according to the data returned by the configuration module, and the first-level redis cache cluster is queried when the lookup in the first-level caffeine local cache unit misses. The result returned by the configuration module can be refreshed directly, the refreshed data being stored in the first-level caffeine local cache unit; the first-level caffeine local cache unit then refreshes the data asynchronously, storing it in the first-level redis cache cluster.
Further, the configuration module comprises a second-level caffeine local cache unit, a second-level redis cache cluster and a sub-service module. The sub-service module sends an instruction to call data from the database; the second-level caffeine local cache unit is queried first according to the data returned by the database, and the second-level redis cache cluster is queried when the lookup in the second-level caffeine local cache unit misses. The result returned by the database can be refreshed directly, the refreshed data being stored in the second-level caffeine local cache unit; the second-level caffeine local cache unit then refreshes the data asynchronously, storing it in the second-level redis cache cluster.
The product service module is provided with a calling end for calling the data of the configuration module, and the sub-service module is provided with a server end for accessing the database. To ensure the efficiency of the product service, in a situation where the service link cannot be shortened, the results of the sub-services are cached, reducing sub-service time consumption and network overhead while also improving the load capacity and stability of the product service to a great extent.
Further, the Internet transaction system comprises SpringCache; SpringCache, the first-level cache unit and the first-level redis cache cluster form a multi-level cache mechanism: the result of the first service processing is stored first in the first-level cache unit with an expiration time set, the result is asynchronously refreshed to the first-level redis cache cluster to achieve persistence, and the next request queries the local cache first or obtains the result from the redis persistent data;
SpringCache, the second-level cache unit and the second-level redis cache cluster likewise form a multi-level cache mechanism: the result of the first service processing is stored first in the second-level cache unit with an expiration time set, the result is asynchronously refreshed to the second-level redis cache cluster to achieve persistence, and the next request queries the local cache first or obtains the result from the redis persistent data.
In the above embodiment, the SpringCache framework is introduced, and a multi-level cache mechanism is formed from a caffeine local cache together with a redis cluster. The result of the first service processing is stored first in the caffeine local cache with an expiration time set, and the result is then asynchronously refreshed to the redis cluster in the same module as the caffeine local cache unit, for caching and persistence. On the next request, the local cache is queried first; when the cached result has been removed because the service restarted, the local cache expired or space ran out, the result can still be obtained from the redis persistent data. (Two points to note: 1. the local cache lives in service memory, so a service restart resets it, whereas redis stores data on cluster servers whose space is much larger than the local cache, so redis does not suffer from these problems; 2. in terms of time consumption, a local cache query is faster than a redis cache query (which incurs network time), and a redis cache query is faster than calling the interface to recompute the result (which incurs network time plus reprocessing of a pile of service logic).)
Furthermore, the Internet transaction system comprises a public interface so that transaction service processing results can conveniently be stored in the cache. A distributed redis framework (the first-level redis cache cluster and the second-level redis cache cluster) is introduced into the Internet transaction system; the sentinel mode of the redis framework is adopted to inject the service into the Internet transaction system, and the public interface is provided to facilitate storing transaction service processing results into the cache for subsequent queries, supporting the consistency of transactions.
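The shape of such a public interface can be sketched as below. The names (`TransactionResultCache`, `store`, `query`) are purely illustrative assumptions, not taken from the patent, and a plain in-memory implementation stands in for the redis-backed one.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical public interface through which transaction services
// store processing results into the cache and query them later.
interface TransactionResultCache {
    void store(String transactionId, String result);   // save a processing result
    Optional<String> query(String transactionId);      // look up a stored result
}

// Minimal in-memory implementation standing in for the redis-backed one.
class InMemoryTransactionResultCache implements TransactionResultCache {
    private final Map<String, String> results = new ConcurrentHashMap<>();

    public void store(String transactionId, String result) {
        results.put(transactionId, result);
    }

    public Optional<String> query(String transactionId) {
        return Optional.ofNullable(results.get(transactionId));
    }
}
```

Keeping the interface narrow like this is what lets the backing store (local map, redis sentinel cluster) be swapped without touching the transaction services.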
Further, the sub-service module comprises a client cache unit. Because a plurality of nodes support access in the product of the invention, a cache is added at the client to improve the cache hit rate, and a corresponding cache, namely the client cache unit, is also added at the server. When one of the nodes misses or a service goes down, the server can still be guaranteed to provide normal service for a period of time.
In addition, the Internet transaction system comprises a monitor, which records cache anomalies at any time so that the cache can conveniently be fused by means of the cache switch. Meanwhile, because of the cache, changes to transaction configuration information cannot take effect in real time, and the relevant cache configuration takes effect with a certain delay. Therefore, based on changes to the node information of the zookeeper, a monitoring unit watches the nodes at any time and triggers cache update operations from node changes; in this way, for special service configurations that require high timeliness, operations staff can also trigger an update manually and actively.
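The node-change-driven update above can be sketched as a simple eviction callback. This is an illustration only: the real system would register a watcher with the ZooKeeper client library, which is not shown here, and `onNodeChanged` stands in for that watcher's notification.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a cache whose entries are evicted when the corresponding
// zookeeper node changes, forcing the next query to reload fresh data.
class NodeWatchCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    void put(String nodePath, String value) { cache.put(nodePath, value); }
    String get(String nodePath) { return cache.get(nodePath); }

    // Invoked when the management platform modifies a node (in the real
    // system, from the ZooKeeper watcher); eviction triggers the update.
    void onNodeChanged(String nodePath) {
        cache.remove(nodePath);
    }
}
```

The same callback also serves the manual path: operations staff touching a node makes every watching service evict and reload that configuration.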
Finally, the Internet transaction system includes a dynamic cache switch formed by combining the AOP (aspect-oriented programming) mechanism with thread variables, together with an aspect interceptor arranged at the unified cache entrance. Considering that caching carries a certain risk, in order to guarantee strong consistency for part of the transaction information, the dynamic cache switch formed from the AOP aspect mechanism and thread variables is used to designate transaction services with high timeliness requirements. The aspect interceptor at the unified cache entrance marks a designated whole task chain as using the cache for a series of transactions; meanwhile, to avoid cache anomalies after the caching scheme goes online, the dynamically configured cache switch is used to fuse the cache, ensuring the stability of transactions after a cache anomaly.
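The thread-variable part of the switch can be sketched with a `ThreadLocal` flag, as below. This is a minimal sketch: in the real scheme the flag would be set and cleared by the AOP aspect interceptor around the cache entrance, which is not shown.

```java
// Sketch of a per-thread cache switch: each request thread carries its
// own flag, so fusing the cache for one transaction chain does not
// affect transactions running on other threads.
class CacheSwitch {
    private static final ThreadLocal<Boolean> USE_CACHE =
            ThreadLocal.withInitial(() -> Boolean.TRUE);

    // Fuse the cache for the current transaction chain (e.g. one that
    // requires strong consistency, or after a cache anomaly is detected).
    static void disableForCurrentThread() { USE_CACHE.set(Boolean.FALSE); }

    // Clear the flag when the task chain finishes (the aspect would do this).
    static void reset() { USE_CACHE.remove(); }

    // Services consult the switch before reading the cache; when fused,
    // they go straight to the backing store.
    static boolean cacheEnabled() { return USE_CACHE.get(); }
}
```

Because the flag is thread-scoped, the interceptor can mark an entire task chain (all calls made on that thread) as cached or uncached with one write.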
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.