Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification are described clearly and completely below with reference to the drawings in the embodiments of the present specification. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present specification. All other embodiments obtained by a person skilled in the art on the basis of one or more embodiments of the present specification without inventive effort shall fall within the scope of protection of the embodiments of the present specification.
The following describes an embodiment of the present specification with a specific application scenario as an example. Specifically, fig. 1 is a schematic flow chart of an embodiment of a page generation method provided in this specification. Although the present specification provides the method steps or apparatus structures shown in the following embodiments or figures, more or fewer steps or modules may be included in the method or apparatus based on conventional or non-inventive effort. For steps or structures that have no logically necessary causal relationship, the execution order of the steps or the module structure of the apparatus is not limited to the execution order or module structure shown in the embodiments or drawings of the present specification. When the described method or module structure is applied to an actual device, server, or end product, it may be executed sequentially or in parallel according to the embodiments or the figures (for example, in a parallel-processor or multi-threaded environment, or even in an implementation environment involving distributed processing and server clustering).
One embodiment provided by the present description may be applied to a server, a page access system, and the like. The server may include a single computer device, or may include a server cluster (e.g., a Web server cluster) formed by a plurality of servers, or a server structure of a distributed system, and the like.
It should be noted that the following description of the embodiments does not limit other application scenarios to which the technical solutions of the present specification can be extended. In a specific embodiment, as shown in fig. 1, an embodiment of a page generation method provided in this specification may include the following steps.
S0: receiving a page access request; wherein the page access request includes a page identifier.
In an embodiment of the present specification, a server may receive a page access request. The page access request may include a page identifier, and the page identifier may be used to identify a page. The page identifier may be a URL (Uniform Resource Locator), such as the home page of a certain bank at www.icbc.com.cn, or http://www.icbc.com.cn/pages/abc.html. Of course, the page identifier may also be any other character string capable of uniquely identifying the page; other modifications may be made by those skilled in the art in light of the technical spirit of the present application, and all of them fall within the scope of the present application as long as the functions and effects achieved are the same as or similar to those of the present application.
In some embodiments, the server may be a Web server cluster, and accordingly the page access request may be initiated by a user through a Web browser. Of course, the above description is only exemplary, and the manner of initiating the page access request is not limited to the above example; other modifications are possible for those skilled in the art in light of the technical spirit of the present application, and all of them are intended to be covered by the scope of the present application as long as the functions and effects achieved are the same as or similar to those of the present application.
S2: judging, according to the page identifier, whether page cache information corresponding to the page identifier exists in a first-level cache.
In this embodiment, after receiving the page access request, the server may determine, according to the page identifier, whether page cache information corresponding to the page identifier exists in the first-level cache. The first-level cache may be used to store page cache information, and may use an LRU (Least Recently Used) algorithm to limit the upper bound of its capacity. In some implementation scenarios, the first-level cache may be an Nginx Proxy Cache and may be kept in memory to accelerate cache access. LRU is a commonly used page replacement algorithm that evicts the least recently used entry first. Nginx is a high-performance HTTP and reverse proxy Web server.
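As a purely illustrative sketch of the LRU-bounded cache described above (the actual first-level cache in this embodiment may be an Nginx proxy cache; the class name and capacity handling here are assumptions), an access-ordered LinkedHashMap in Java can serve as an in-memory cache with a capped capacity:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal in-memory LRU cache: the least recently used entry is evicted
// once the configured capacity is exceeded. Illustrative only; the actual
// first-level cache in this embodiment may be an Nginx proxy cache.
public class LruPageCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruPageCache(int capacity) {
        // accessOrder = true makes iteration order follow recency of access
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry when the cap is exceeded
        return size() > capacity;
    }
}
```

A cache created as new LruPageCache<String, String>(10000) would then hold at most 10,000 complete page results keyed by URL, discarding the least recently accessed one first.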
The page cache information may be understood as a complete page result, which may be an HTML (HyperText Markup Language) page result. HTML is the language in which web documents are written.
In some embodiments, before determining, according to the page identifier, whether the page cache information corresponding to the page identifier exists in the first-level cache, a first server node may be determined by using a consistent hash algorithm according to the page identifier; correspondingly, it is then determined, according to the page identifier, whether the page cache information corresponding to the page identifier exists in the first-level cache of the first server node.
In some implementation scenarios, before determining whether the first-level cache has the page cache information corresponding to the page identifier, the page access request may be routed to the first server node by using a consistent hash algorithm according to the page identifier, and it is then determined whether the first-level cache of the first server node has the page cache information corresponding to the page identifier.
In some implementation scenarios, the Web server cluster may be divided into different server nodes according to the actual scenario. The first server node may be one of these server nodes, and the number of servers (containers) in each node may be adjusted according to traffic conditions, so as to support targeted capacity expansion for hot spots at different times. For example, 7 servers may be divided into 3 nodes, where the first node includes 2 servers, the second node includes 3 servers, and the third node includes 2 servers.
The basic principle of the consistent hash algorithm can be understood as follows: the whole hash value space is first organized into a hash ring of 2^32 positions; the hash values of the different nodes are then calculated (the hash values of the 3 nodes into which the 7 servers above are divided may be denoted HASH1, HASH2 and HASH3) and mapped onto the hash ring. The hash value of the URL in the received page access request is then calculated, and the closest node in the clockwise direction is found; this node may be regarded as the first server node. In this way, after the first server node is determined, it may be determined whether page cache information corresponding to the page identifier exists in the first-level cache of the first server node.
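The node lookup just described can be sketched as follows; this is a simplified illustration in Java, where the CRC32-based hash function and the absence of virtual nodes are assumptions, not details taken from the embodiment:

```java
import java.nio.charset.StandardCharsets;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.zip.CRC32;

// Simplified consistent hash ring: node hashes are placed on a ring of
// 2^32 positions; a request key is routed to the first node found
// clockwise from the key's own hash.
public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    private long hash(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return crc.getValue(); // value in [0, 2^32 - 1]
    }

    public void addNode(String nodeName) {
        ring.put(hash(nodeName), nodeName);
    }

    public String routeByKey(String key) {
        // Find the first node clockwise from hash(key); wrap around if needed
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }
}
```

With three nodes added to the ring, routeByKey applied to the URL of a page access request always selects the same node, which is the property the first-server-node selection above relies on.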
In some embodiments, when it is determined that the page cache information corresponding to the page identifier exists in the first-level cache, the page cache information may be directly obtained and returned to the client that sends the page access request, so that the client can quickly display a complete page result.
In the embodiments of the present specification, by using the shard routing capability (the consistent hash algorithm), the cached content is distributed across only some of the nodes instead of the full cache residing on a single server, so that not all access capability is lost under abnormal conditions, which improves system availability.
S4: when it is determined that the page cache information does not exist, acquiring page definition data from a distributed cache according to the page identifier.
In this embodiment of the present description, when it is determined that page cache information corresponding to a page identifier does not exist in a first-level cache, page definition data may be obtained from a distributed cache according to the page identifier.
The page definition data may be understood as metadata, i.e., data that describes other data, or structural data that provides information about a resource. The page definition data may include page template data (i.e., the page content), components, data source information, and the like. The page template data can be understood as HTML code containing placeholders, such as:
...
<title>First Page</title>
...
<div>{{component 1}}</div>
<div>{{component 2}}</div>
...
A component is a simple encapsulation of data and methods. A component may have its own properties and methods: a property is a simple accessor of component data, and a method is a simple, visible piece of component functionality. Using components enables drag-and-drop programming, fast property handling, and true object-oriented design. A component may include a component identifier, component template data, component data, and the like. The component identifier may also be referred to as the component program name, e.g., a list component; it identifies the component, and from it the components included in a page can be determined. A complete page result can be generated based on the page template data and the components included in the page. The component template data can be understood as HTML code containing placeholders. The component data may include component parameters, a component data source, and the like. Component parameters may include a title, a color, etc., and may be represented as a JSON object, such as {"title": "scenery of a certain bank"}. The component data source may include a data source name, data source parameters, and the like; the data source name may be represented as a character string, such as "profile list data of a certain bank", and the data source parameters may include the number of list rows, the sort order, etc., which may be represented as a JSON object, such as {"line count": "20"}.
The data source information may be understood as the data source information that the component needs to use, and may include a data source URL and a data source name. The data source name is the identifier of the data source, such as "profile data of a certain bank". The data source URL may be requested via an HTTP GET. The data source URL contains placeholders for the parameters in the component data source; after the parameters are substituted in, the resulting complete URL can be used as the cache KEY value.
For example, in some implementation scenarios, the URL included in the received page access request is http://www.icbc.com.cn/pages/abc.html, and the corresponding page definition data (i.e., metadata) may describe, for example, the page template, the components contained in the page together with their parameters, and the associated data source information, roughly as in the sketch below.
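Since the concrete metadata layout is not reproduced here, the following sketch only illustrates one possible shape of such page definition data, and also shows how a data source URL containing parameter placeholders could be resolved into the cache KEY mentioned above; all type names, field names, and the placeholder syntax are assumptions made for illustration:

```java
import java.util.List;
import java.util.Map;

// Hypothetical page definition (metadata) structures, for illustration only.
record DataSourceInfo(String name, String urlTemplate, Map<String, String> params) {
    // Replace {{param}} placeholders in the URL template; the resolved URL
    // can then serve as the cache KEY for the data source result.
    String resolvedUrl() {
        String url = urlTemplate;
        for (Map.Entry<String, String> e : params.entrySet()) {
            url = url.replace("{{" + e.getKey() + "}}", e.getValue());
        }
        return url;
    }
}

record ComponentDefinition(String componentId, Map<String, String> componentParams,
                           DataSourceInfo dataSource) {}

record PageDefinition(String pageId, String pageTemplate,
                      List<ComponentDefinition> components) {}
```

For http://www.icbc.com.cn/pages/abc.html, such a PageDefinition would carry the page template with placeholders plus one ComponentDefinition per component on the page.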
In some embodiments, before the page definition data is obtained from the distributed cache according to the page identifier, it may be determined, according to the page identifier, whether the page definition data exists in the distributed cache; when it does not exist, the page definition data may be acquired from the page publishing system database according to the page identifier and then stored in the distributed cache. The distributed cache may be a distributed data cache cluster, which may be used to store the data source information required by the components. Of course, the data in the distributed cache may be updated from time to time. In some implementations, the distributed cache may also employ an LRU algorithm to limit the upper bound of the cache capacity. In some implementation scenarios, after an entry in the distributed cache becomes invalid, the cached data may be rebuilt from the page publishing system database. Commonly used distributed data cache clusters include Redis cache clusters and the like. The page publishing system database may be a database cluster used to store pages, components, data sources, and their related data. In some implementation scenarios, the database may be a relational database, optionally combined with a non-relational database for storing page information.
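The lookup order described in this paragraph (distributed cache first, then the page publishing system database, then writing the result back to the cache) follows the familiar cache-aside pattern; a minimal sketch is given below, in which the DistributedCache and PageDatabase interfaces are assumptions standing in for, e.g., a Redis client and a database access layer:

```java
import java.util.Optional;

// Hypothetical abstractions over the distributed cache and the page
// publishing system database; illustrates the cache-aside lookup only.
interface DistributedCache {
    Optional<String> get(String key);
    void put(String key, String value);
}

interface PageDatabase {
    String loadPageDefinition(String pageId);
}

class PageDefinitionLoader {
    private final DistributedCache cache;
    private final PageDatabase database;

    PageDefinitionLoader(DistributedCache cache, PageDatabase database) {
        this.cache = cache;
        this.database = database;
    }

    String getPageDefinition(String pageId) {
        // 1. Try the distributed cache first
        return cache.get(pageId).orElseGet(() -> {
            // 2. On a miss, go back to the page publishing system database
            String definition = database.loadPageDefinition(pageId);
            // 3. Store the result so later requests hit the cache
            cache.put(pageId, definition);
            return definition;
        });
    }
}
```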
S6: extracting page template data and a component identifier from the page definition data.
In this embodiment of the present specification, after the page definition data is obtained from the distributed cache according to the page identifier, the page template data and the component identifier may be extracted from the page definition data, so that the corresponding component cache information can subsequently be obtained according to the component identifier and the page cache information can then be generated from the page template data and the component cache information.
S8: acquiring component cache information from a second-level cache according to the component identifier.
In this embodiment of the present specification, after the page template data and the component identifier are obtained, the component cache information may be obtained from the second-level cache according to the component identifier. The second-level cache may be used to store component cache information and may use an LRU algorithm to limit the upper bound of its capacity. In some implementation scenarios, the second-level cache may likewise be an Nginx Proxy Cache and may be kept in memory to accelerate cache access. The component cache information may be understood as a complete component result, which may be an HTML (HyperText Markup Language) component result.
In some embodiments, before the component cache information is obtained from the second-level cache according to the component identifier, a second server node may be determined by using a consistent hash algorithm according to the component identifier; correspondingly, the component cache information is then obtained from the second-level cache of the second server node according to the component identifier.
In some implementation scenarios, the Web server cluster may be divided into different server nodes according to the actual scenario. The second server node may be one of these server nodes, and the number of servers (containers) in each node may be adjusted according to traffic conditions, so as to support targeted capacity expansion for hot spots at different times. For example, 7 servers may be divided into 3 nodes, where the first node includes 2 servers, the second node includes 3 servers, and the third node includes 2 servers.
In some implementation scenarios, the whole hash value space may be organized into a hash ring of 2^32 positions, and the hash values of the different nodes (the hash values of the 3 nodes into which the 7 servers above are divided may be denoted HASH1, HASH2 and HASH3) are calculated and mapped onto the hash ring. The hash value of the component identifier is then calculated, and the closest node in the clockwise direction is found; this node may be regarded as the second server node. After the second server node is determined, the component cache information may be obtained from the second-level cache of the second server node according to the component identifier. Because of the shard routing capability (the consistent hash algorithm), the cached content is distributed across only some of the nodes instead of the full cache residing on a single server, so that not all access capability is lost under abnormal conditions, which improves system availability.
In some embodiments, before the component cache information is obtained from the second-level cache of the second server node according to the component identifier, it may be determined, according to the component identifier, whether the component cache information exists in the second-level cache of the second server node; when it does not exist, the component template data and the component data may be obtained from the distributed cache according to the component identifier and assembled to generate the component cache information; further, the component cache information is stored in the second-level cache of the second server node.
Since the component cache information corresponding to the component identifier may or may not exist in the second-level cache, in some implementation scenarios, before the component cache information is obtained from the second-level cache, whether the corresponding component cache information exists in the second-level cache may be judged according to the component identifier. If it exists, the component cache information may be obtained directly from the second-level cache; if it does not exist, the information used to generate the component cache information may be obtained, and the component cache information is generated by assembly. The information used to generate the component cache information may include component template data and component data.
In some implementations, the information used to generate the component cache information may be obtained from the distributed cache according to the component identifier. Since this information may or may not exist in the distributed cache, in some implementation scenarios it may first be determined, according to the component identifier, whether the information used to generate the component cache information exists in the distributed cache.
In some implementation scenarios, before the component template data and the component data are obtained from the distributed cache according to the component identifier, it may be determined, according to the component identifier, whether the component template data and the component data exist in the distributed cache; when they do not exist, the component template data and the component data may be obtained from the page publishing system database and stored into the distributed cache. In this way, when the component template data and the component data are needed, they can be obtained directly from the distributed cache, which effectively improves the page rendering speed.
In some implementation scenarios, after the component template data and the component data are obtained, they may be assembled to generate the component cache information.
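The assembly mentioned above essentially replaces the placeholders in the component template data with the corresponding values from the component data; the sketch below mirrors the {{...}} placeholder style of the earlier examples, while the regular-expression implementation is an assumption rather than the Lua/Nginx rendering referred to later in this specification:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Replace {{name}} placeholders in a component (or page) template with values.
public final class TemplateAssembler {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{\\s*([^}]+?)\\s*\\}\\}");

    public static String assemble(String template, Map<String, String> values) {
        Matcher matcher = PLACEHOLDER.matcher(template);
        StringBuilder result = new StringBuilder();
        while (matcher.find()) {
            String value = values.getOrDefault(matcher.group(1), "");
            // quoteReplacement avoids treating '$' or '\' in values as back-references
            matcher.appendReplacement(result, Matcher.quoteReplacement(value));
        }
        matcher.appendTail(result);
        return result.toString();
    }
}
```

The same routine can be reused in step S10 when the rendered component results are substituted into the page template.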
In some implementation scenarios, after the component cache information is generated, it may be stored in the second-level cache of the second server node, so that the component cache information can subsequently be obtained directly from the second-level cache according to the component identifier, which effectively shortens the rendering response time for the client.
S10: assembling the page template data and the component cache information to generate page cache information.
In this embodiment of the present description, after obtaining the component cache information, the page template data and the component cache information may be assembled to generate the page cache information.
In some embodiments, after the page cache information is generated, the page cache information may be stored in the first-level cache. When the same access request is received subsequently, the corresponding page cache information can then be obtained directly from the first-level cache and returned to the client that sent the page access request, so that the client can quickly display the complete page result.
Fig. 2 is a schematic diagram of an embodiment of page assembly provided in this specification. As shown in fig. 2, part C1 represents the complete page result, which may be cached in the first-level cache. Part C2 represents the page template (page template data), which can be obtained via the page definition data; the page definition data may be stored in the page publishing system database and may also be cached in the distributed cache. Part C3 represents the cached result of each component in the page (the component cache information), which has been generated as HTML code from the component template data (containing placeholders for the component's own parameters) together with the component data, and which may be cached in the second-level cache. Part C4 represents the component template (component template data), which may be cached in the distributed cache and, when absent there, may be retrieved from the page publishing system database and loaded into the distributed cache. Part C5 represents the component data, which may likewise be cached in the distributed cache and, when absent there, may be retrieved from the page publishing system database and loaded into the distributed cache. When part C1 is absent, it can be obtained by assembling part C2 with part C3; when part C3 is absent, it can be obtained by assembling part C4 with part C5. The page template and the component template correspond to programs, and the assembly of pages and components is carried out according to these programs.
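The fallback relationships among parts C1 to C5 in fig. 2 can be summarized as two nested cache-or-assemble steps; the sketch below is illustrative only, reuses the TemplateAssembler sketch from above, and its Cache and MetadataStore interfaces are assumptions rather than interfaces of the embodiment:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

// Illustration of the C1..C5 fallback chain from fig. 2:
//   C1 (page result)      = assemble(C2 page template, C3 component results)
//   C3 (component result) = assemble(C4 component template, C5 component data)
class PageAssemblyFlow {
    interface Cache { Optional<String> get(String key); void put(String key, String value); }
    interface MetadataStore {
        String pageTemplate(String pageId);                     // C2
        List<String> componentIds(String pageId);
        String componentTemplate(String componentId);           // C4
        Map<String, String> componentData(String componentId);  // C5
    }

    private final Cache levelOne;   // caches C1
    private final Cache levelTwo;   // caches C3
    private final MetadataStore store;

    PageAssemblyFlow(Cache levelOne, Cache levelTwo, MetadataStore store) {
        this.levelOne = levelOne;
        this.levelTwo = levelTwo;
        this.store = store;
    }

    String renderPage(String pageId) {
        return levelOne.get(pageId).orElseGet(() -> {
            Map<String, String> componentHtml = store.componentIds(pageId).stream()
                    .collect(Collectors.toMap(id -> id, this::renderComponent));
            String page = TemplateAssembler.assemble(store.pageTemplate(pageId), componentHtml);
            levelOne.put(pageId, page);
            return page;
        });
    }

    private String renderComponent(String componentId) {
        return levelTwo.get(componentId).orElseGet(() -> {
            String html = TemplateAssembler.assemble(
                    store.componentTemplate(componentId), store.componentData(componentId));
            levelTwo.put(componentId, html);
            return html;
        });
    }
}
```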
In some implementation scenarios, page assembly may be implemented based on a page visualization editing apparatus. The page visualization editing apparatus can place pre-customized "components" in a page by visual drag-and-drop, and maintain the various attributes of a "component" through a "component attribute editor", where the component attributes include the "data source" attribute of the data the component needs to present. By maintaining multiple components, the editing and assembly of a "page" can be completed. Based on the page visualization editing apparatus, the page definition data can be generated in real time during editing. Because structured information is convenient to cache, the page definition data can be converted into page structured information for storage. Fig. 3 is a schematic diagram of a page visualization editing apparatus provided in this specification. The apparatus may include a component selection area, a page real-time preview area, and a component attribute editing area. The component selection area may include one or more components, such as component 1, component 2, component 3, component 4, and the like, where a component is a pre-developed, customized, partially visual interface program. Because the elements in a page can be assembled from components, the user can select components from the component selection area and drag them to the page real-time preview area. The page real-time preview area can preview the page display effect in real time, and the layout position and size of a dragged component can be adjusted there. The component attribute editing area can set the component's own parameters (such as the title) and the data source parameters.
S12: displaying a target page corresponding to the page cache information.
In this embodiment of the present description, after the page cache information is generated, a target page corresponding to the page cache information may be displayed. The target page may be understood as a page corresponding to the access request.
Fig. 4 is a schematic flowchart of an embodiment of a page generation method provided in this specification. As shown in fig. 4, the flow is triggered by a page access: a page URL is first received, and whether a page cache exists in the first-level cache is determined according to the URL. If the page cache (page cache information) exists in the first-level cache, the page result is returned and the process ends. If it does not exist, the page definition data (i.e., the metadata) can be acquired from the distributed cache according to the URL; if the distributed cache does not contain the page definition data, it can be acquired from the database via a back-to-source request to the database (the page publishing system database) according to the URL and stored in the distributed cache. Further, the page template and the component identifiers may be extracted from the page definition data, and whether a component cache (component cache information) exists in the second-level cache is determined according to each component identifier. If the component cache exists in the second-level cache, the page template and the component cache may be assembled into a page, the page is stored in the first-level cache, the page result is returned, and the process ends. If the component cache does not exist in the second-level cache, the component template data and the component data can be obtained from the distributed cache according to the component identifier, the component is generated by assembly, and the result is stored in the second-level cache. After all components have established their caches in the second-level cache, the page template and the component caches are assembled into a page, the page is stored in the first-level cache, the page result is returned, and the process ends. If the distributed cache does not contain the component template data and the component data, they can be obtained from the database via a back-to-source request to the database (the page publishing system database) according to the URL and stored in the distributed cache. Back-to-source requests may use a synchronization lock to handle concurrent access, i.e., only one request performs the back-to-source database operation at a time, and the other requests wait for that request to return and then read the result from the distributed cache.
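The synchronization described at the end of this flow, where only one request goes back to the database at a time and the other concurrent requests wait and then read from the distributed cache, is commonly realized as a per-key lock around the back-to-source operation; a simplified sketch follows, with the cache and database interfaces assumed:

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Per-key back-to-source guard: concurrent misses for the same key are
// serialized so only one request queries the database; the rest wait on the
// same lock and then read the value that the first request cached.
class SingleFlightLoader {
    interface Cache { Optional<String> get(String key); void put(String key, String value); }
    interface Database { String load(String key); }

    private final Cache cache;
    private final Database database;
    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

    SingleFlightLoader(Cache cache, Database database) {
        this.cache = cache;
        this.database = database;
    }

    String get(String key) {
        Optional<String> cached = cache.get(key);
        if (cached.isPresent()) {
            return cached.get();
        }
        Object lock = locks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            // Re-check inside the lock: another request may have filled the cache
            return cache.get(key).orElseGet(() -> {
                String value = database.load(key);
                cache.put(key, value);
                return value;
            });
        }
    }
}
```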
In some embodiments, the server may also receive cache invalidation information, where the cache invalidation information includes an invalidated page identifier, and delete the corresponding data in the first-level cache, the distributed cache, and the second-level cache according to the invalidated page identifier.
In some implementation scenarios, when a page is modified or other content data is modified, a cache invalidation event corresponding to the page or the data source data may be triggered, and cache invalidation information is sent to the server. After receiving the cache invalidation information, the server may first invalidate or update the associated data source data (e.g., the data in the page publishing system database) according to the invalidated page identifier included in the cache invalidation information, then invalidate the associated component caches (e.g., the data in the second-level cache and in the distributed cache), and finally invalidate the page cache (e.g., the data in the first-level cache and in the distributed cache) according to the caching policy. The caching policy may include an invalidation policy based on traffic access time, an invalidation policy based on absolute time, a policy of always caching, and the like, and may be set according to the actual scenario. For example, content pages (e.g., news or promotional activities) are numerous, are generally modified little once published, and exhibit some hot-spot access, so an invalidation policy based on traffic access time may be adopted (e.g., automatic invalidation after no one has accessed the page for 5 minutes). As another example, the home page, the most important page and an aggregated page, may generally adopt a policy of always caching without automatic invalidation; when a modification triggers a cache invalidation event, the complete page result in the first-level cache can be actively updated. In addition, since its timeliness requirement is not very high, the home page may also adopt an absolute-time invalidation policy (e.g., actively updating the home page at a fixed interval of 5 minutes). Of course, the above description is only exemplary, and the caching policy is not limited to the above examples; other modifications are possible for those skilled in the art in light of the technical spirit of the present application, and all of them are intended to be covered by the scope of the present application as long as the functions and effects achieved are the same as or similar to those of the present application.
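The three caching policies mentioned above can be captured by storing expiry metadata alongside each cache entry; the following sketch is only one possible way to express them, and the policy names and fields are assumptions:

```java
import java.time.Duration;
import java.time.Instant;

// Expiry metadata for a cache entry under the three policies described above.
class CacheEntryPolicy {
    enum Policy { SLIDING_IDLE, ABSOLUTE, NEVER }

    private final Policy policy;
    private final Duration window;     // e.g. 5 minutes for SLIDING_IDLE or ABSOLUTE
    private final Instant createdAt;
    private Instant lastAccessedAt;

    CacheEntryPolicy(Policy policy, Duration window) {
        this.policy = policy;
        this.window = window;
        this.createdAt = Instant.now();
        this.lastAccessedAt = this.createdAt;
    }

    void recordAccess() {
        lastAccessedAt = Instant.now();
    }

    boolean isExpired(Instant now) {
        switch (policy) {
            case SLIDING_IDLE: // invalidate after no access within the given window
                return now.isAfter(lastAccessedAt.plus(window));
            case ABSOLUTE:     // invalidate a fixed time after creation
                return now.isAfter(createdAt.plus(window));
            default:           // NEVER: only explicit invalidation events remove it
                return false;
        }
    }
}
```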
In the embodiments of the present specification, parts of a page can be dynamically updated through the dynamic invalidation mechanism, which improves the responsiveness of each component to changes and satisfies ever-changing business requirements.
The embodiment of the specification can be realized based on a multi-layer page caching mechanism, a high-performance page template real-time rendering technology and a flexible page visualization editing technology.
In the embodiments of the present specification, the complete page result is generated by a high-performance real-time template rendering technology on the server. Compared with a page that requests data asynchronously through multiple AJAX calls, this preserves the SEO (Search Engine Optimization) characteristics of a static page while improving the client rendering response time and the flexibility of page generation; the multi-layer page content access structure consisting of the Web server cache, the distributed caching technology, and the database enhances the stability and availability of the system; and the horizontal scaling capability of the Web server cache is achieved through the consistent hash algorithm and the multi-level cache mode.
In the embodiments of the present specification, in-memory caching is used instead of static files on disk and is combined with the sharding technique, which improves the horizontal scaling capability of the server, better fits the PaaS cloud-native deployment mode, and allows dynamic scaling in and out, thereby consolidating the system architecture and improving system availability.
In the current approach to static Web pages, multiple Web servers (physical machines) store identical content on disk (for example, each machine holds the same 500 GB of page files) to achieve the concurrency needed for high-traffic access. When machines are to be added, they have to be set up from scratch, and the data on disk can only go live after migration, so horizontal capacity expansion cannot be performed quickly. With the page generation method provided in this specification, first, containerized Web servers can be deployed quickly on a PaaS platform (a PaaS platform is characterized in that containers can be started and stopped quickly, built-in storage data may be lost entirely, general data is placed on external storage devices, and a container does not have much disk space); second, the caching scheme not only provides faster access than the original static files, but also allows the cache to be rebuilt from the database (or distributed storage devices) when the cache is missing or lost, which improves system availability; further, through the shard routing capability (the consistent hash algorithm) provided by the load balancer, the cached content resides on only some of the nodes rather than the full cache residing on a single node, so that not all access capability is lost when an abnormal condition occurs, and when machines are added, only the caches of some nodes are invalidated by updating the sharding algorithm, which improves system availability.
It is to be understood that the foregoing is only exemplary, and the embodiments of the present disclosure are not limited to the above examples, and other modifications may be made by those skilled in the art within the spirit of the present disclosure, and the scope of the present disclosure is intended to be covered by the claims as long as the functions and effects achieved by the embodiments are the same as or similar to the present disclosure.
In the present specification, the method embodiments are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. For related details, reference is made to the description of the method embodiments.
From the above description, it can be seen that in the embodiments of the present application, a page access request including a page identifier may be received, whether page cache information corresponding to the page identifier exists in the first-level cache is determined according to the page identifier, and when it is determined that the page cache information does not exist, page definition data may be obtained from the distributed cache according to the page identifier. Further, page template data and a component identifier can be extracted from the page definition data, component cache information is obtained from the second-level cache according to the component identifier, and the page template data and the component cache information are assembled to generate page cache information, so that a target page corresponding to the page cache information is displayed. Therefore, with the embodiments of the present specification, assembled pages can be generated dynamically and quickly, the horizontal scaling capability of the server can be improved, and system availability is improved.
Based on the above page generation method, one or more embodiments of the present specification further provide a page access system. As shown in fig. 5, fig. 5 is a schematic structural diagram of a multi-level cache-based page access system provided in this specification, which can respond quickly to page content changes and scale horizontally in high-concurrency, high-throughput scenarios, and which can be used in portal websites of banks, governments, enterprises, and the like. The local first-level cache and the local second-level cache may be implemented by a Web server, such as Nginx + Java. The load balancing may be Layer-4 network load balancing plus Layer-7 network load balancing, such as HAProxy + Nginx. The distributed cache may be Java + Redis. The page publishing system database may be a relational database plus an object storage cluster.
Specifically, B1 is a Web portal page management client, specifically a Web browser, which may include a page visualization editing module and a page information structuring module. The page visualization editing module is used for visual page editing, and the page information structuring module is used for generating the page structured definition information (the page definition data). B1 may submit the page structured information to B2.
B2 is a page publishing system application server, which can accept the page structured information submitted by B1 and submit it to B3 through a page publishing module; in addition, for page modification requests, B2 may issue a cache invalidation event to B4 through a cache invalidation module.
B3 is a database cluster of a page publication system that can be used to store pages, components, data sources, and their related data.
B4 is a message queue, which can be used to accept cache invalidation events and notify B5 and B7 of the invalidation. Commonly used message queues include Kafka, the Redis publish-subscribe mechanism, and the like.
B5 is a page access application server cluster. When receiving a page access request, it can obtain the data content in the cache from B6; if the data is not in B6, it can obtain the data from B3, store it in B6 through a data cache management module, and then return the request. B5 can also accept the cache invalidation events forwarded by B4 and delete or update the data in B6 (one possible shape of this invalidation handling is sketched after the description of B11 below).
B6 is a distributed data cache cluster, which can be used to store the data corresponding to the data sources. The cache may use an LRU algorithm to limit the upper bound of the cache capacity, and the cached data may be rebuilt through B5 after it becomes invalid. Commonly used distributed data cache clusters include Redis cache clusters.
B7 is a Web server cluster, which includes a cache management module and a page assembly module. Specifically, the complete page result can be obtained from the local first-level cache according to the incoming page URL; if it is not in the local first-level cache, the page definition data can be requested from B5, the cached results of the components can be requested via B11, and the page is then assembled in real time by the page assembly module. In addition, B7 can accept the cache invalidation events forwarded by B4 and delete the data in B8 and B9. A well-known Web server providing caching and page assembly is Nginx, and a common template and rendering technology is the Lua-based template technology in Nginx.
B8 is a local first-level cache used to store complete page results; its capacity upper bound may be limited by an LRU algorithm. When an entry is not in the local first-level cache, B7 may go back to the local second-level cache to retrieve the component cache information used for page assembly.
B9 is a local second-level cache used to store component cache information; its capacity upper bound may be limited by an LRU algorithm. When an entry is not in the local second-level cache, B7 may go back to the distributed data cache cluster to obtain the data and perform the assembly of the components.
B10 is a Web portal access client, specifically a Web browser, which can be accessed through a unique URL of a page.
B11 is a load balancing server, which can route requests to different nodes in B7 using a consistent hash algorithm based on the URL, so that B7 gains the ability to scale horizontally. B11 may also accept internal requests for component content and route them according to the component identifier. A common Layer-7 network load balancing server is Nginx.
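One possible shape of the cache invalidation handling that B5 (and, analogously, B7) performs on messages from B4 is sketched below; the message content and method names are assumptions rather than the actual interfaces of the embodiment:

```java
// Hypothetical handler for cache invalidation events delivered by the
// message queue (B4): on receipt, the related entries are removed so that
// the next access rebuilds them from the database.
class CacheInvalidationListener {
    interface Evictable { void evict(String key); }

    private final Evictable distributedCache; // B6
    private final Evictable levelOneCache;    // B8 (complete page results)
    private final Evictable levelTwoCache;    // B9 (component results)

    CacheInvalidationListener(Evictable distributedCache,
                              Evictable levelOneCache,
                              Evictable levelTwoCache) {
        this.distributedCache = distributedCache;
        this.levelOneCache = levelOneCache;
        this.levelTwoCache = levelTwoCache;
    }

    // invalidatedPageId and affectedComponentIds come from the invalidation message
    void onCacheInvalidation(String invalidatedPageId, Iterable<String> affectedComponentIds) {
        distributedCache.evict(invalidatedPageId);
        for (String componentId : affectedComponentIds) {
            levelTwoCache.evict(componentId);
        }
        levelOneCache.evict(invalidatedPageId);
    }
}
```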
The multi-level cache-based page access system provided by the embodiments of this specification can provide more reliable and efficient access capability for portal-type websites and provide technical support for business activities such as advertising, marketing, and publicity.
It should be noted that the description of the system according to the method embodiment may also include other embodiments, and specific implementation manners may refer to the description of the related method embodiment, which is not described herein in detail.
Based on the above page generation method, one or more embodiments of the present specification further provide a page generation apparatus. The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of the present specification, in combination with any necessary hardware. Based on the same innovative concept, the embodiments of the present specification provide an apparatus as described in the following embodiments. Since the implementation scheme by which the apparatus solves the problem is similar to that of the method, the specific implementation of the apparatus in the embodiments of the present specification may refer to the implementation of the foregoing method, and repeated details are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Specifically, fig. 6 is a schematic block structure diagram of an embodiment of a page generating apparatus provided in this specification. As shown in fig. 6, the page generating apparatus provided in this specification may include: a receiving module 120, a judging module 122, a first obtaining module 124, an extracting module 126, a second obtaining module 128, an assembling module 130 and a presenting module 132.
The receiving module 120 may be configured to receive a page access request, where the page access request includes a page identifier;
the judging module 122 may be configured to judge, according to the page identifier, whether page cache information corresponding to the page identifier exists in the first-level cache;
the first obtaining module 124 may be configured to, when it is determined that the page cache information does not exist, obtain page definition data from the distributed cache according to the page identifier;
the extracting module 126 may be configured to extract page template data and a component identifier from the page definition data;
the second obtaining module 128 may be configured to obtain component cache information from the second-level cache according to the component identifier;
the assembling module 130 may be configured to assemble the page template data and the component cache information to generate page cache information; and
the presenting module 132 may be configured to present a target page corresponding to the page cache information.
It should be noted that the above-mentioned description of the apparatus according to the method embodiment may also include other embodiments, and specific implementation manners may refer to the description of the related method embodiment, which is not described herein again.
This specification also provides an embodiment of a page generation device, comprising a processor and a memory for storing processor-executable instructions, where the instructions, when executed by the processor, implement steps comprising: receiving a page access request, where the page access request includes a page identifier; judging, according to the page identifier, whether page cache information corresponding to the page identifier exists in a first-level cache; when it is determined that the page cache information does not exist, acquiring page definition data from a distributed cache according to the page identifier; extracting page template data and a component identifier from the page definition data; acquiring component cache information from a second-level cache according to the component identifier; assembling the page template data and the component cache information to generate page cache information; and displaying a target page corresponding to the page cache information.
It should be noted that the above-mentioned apparatuses may also include other embodiments according to the description of the method or apparatus embodiments. The specific implementation manner may refer to the description of the related method embodiment, and is not described in detail herein.
The method embodiments provided in the present specification may be executed on a mobile terminal, a computer terminal, a server, or a similar computing device. Taking execution on a server as an example, fig. 7 is a block diagram of a hardware structure of an embodiment of a page generation server provided in this specification, where the server may be the page generation apparatus or the page generation device in the foregoing embodiments. As shown in fig. 7, the server 10 may include one or more processors 100 (only one is shown; the processors 100 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 200 for storing data, and a transmission module 300 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 7 is only an illustration and does not limit the structure of the electronic device. For example, the server 10 may also include more or fewer components than shown in fig. 7, may also include other processing hardware such as a database, a multi-level cache or a GPU, or may have a configuration different from that shown in fig. 7.
The memory 200 may be used to store software programs and modules of application software, such as program instructions/modules corresponding to the page generation method in the embodiments of the present specification, and the processor 100 executes various functional applications and data processing by executing the software programs and modules stored in the memory 200. Memory 200 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, memory 200 may further include memory located remotely from processor 100, which may be connected to a computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 300 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by the communication provider of the computer terminal. In one example, the transmission module 300 includes a network adapter (NIC) that can be connected to other network devices through a base station so as to communicate with the Internet. In one example, the transmission module 300 may be a Radio Frequency (RF) module, which is used for communicating with the Internet in a wireless manner.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The method or apparatus described in the foregoing embodiments of the present specification may implement the service logic through a computer program recorded on a storage medium, where the storage medium may be read and executed by a computer, so as to achieve the effects of the solutions described in the embodiments of the present specification. The storage medium may include a physical device for storing information; typically, the information is digitized and then stored using electrical, magnetic, or optical media. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM and ROM; devices that store information using magnetic energy, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, magnetic bubble memories, and USB flash drives; and devices that store information optically, such as CDs or DVDs. Of course, there are other forms of readable storage media, such as quantum memory, graphene memory, and so on.
The embodiments of the page generation method or apparatus provided in this specification may be implemented by a processor executing corresponding program instructions in a computer, for example, implemented on a PC using the C++ language on a Windows operating system, implemented on a Linux system, implemented on an intelligent terminal using the Android or iOS programming languages, or implemented as processing logic based on a quantum computer, or the like.
It should be noted that descriptions of the apparatus, the device, and the system described above according to the related method embodiments may also include other embodiments, and specific implementations may refer to descriptions of corresponding method embodiments, which are not described in detail herein.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
For convenience of description, the above apparatus is described as being divided into various modules by function. Of course, when implementing one or more embodiments of the present specification, the functions of some modules may be implemented in the same piece or pieces of software and/or hardware, or a module implementing one function may be implemented by a combination of multiple sub-modules or sub-units, etc.
The present specification is described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and system according to the embodiments of the present specification. It will be understood that the embodiments can be implemented by computer program instructions, which can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus create means for implementing the specified functions. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
The above description is merely exemplary of one or more embodiments of the present disclosure and is not intended to limit the scope of one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims.