Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Different models of network devices represent different configuration levels. A user typically selects the model of a network device in view of its configuration parameters. For example, the user selects a firewall device based on configuration parameters such as firewall throughput (the number of packets processed per second), the maximum number of concurrent connections, the number of newly-established connections, VPN performance, and the like.
The conventional large page allocation method determines the allocation of large pages according to the model of the device. That is, each model of network device corresponds to a specific number of large pages. When the network device is started, the number of large pages to allocate to the system can be determined once the model is detected. The rationale is that a network device of a higher-configured model can support a larger number of concurrent connections and therefore needs more large page memory reserved as storage space for the session table. Accordingly, in the conventional allocation method, a fixed number of large pages roughly matching the configuration of each model is preset for each model of network device.
However, users sometimes purchase higher-configured models simply for parameters other than the maximum number of concurrent connections (e.g., VPN performance). In this case, in practical applications, the allocation of large pages may be excessive, resulting in a waste of resources. This is because concurrent connections occupy large page memory, while some configuration parameters (e.g., VPN, newly-established connections, IPS, etc.) do not. The more large pages are allocated, the less memory remains for other resources. If the user does not need that many concurrent connections, fewer large pages than the preset amount for a high-configuration model would suffice, leaving more resources available for other configuration parameters.
In view of the above, the inventors contemplate that the number of large pages allocated to the system may be determined based on the maximum number of concurrent connections that the network device is authorized to use. That is, large pages are allocated according to the maximum number of concurrent connections that the user has paid for. In this way, large pages can be allocated more reasonably, closer to the actual number of concurrent connections when the user uses the network device.
FIG. 1 is a flow diagram of a method for large page allocation provided by an exemplary embodiment. The method is applied to the network equipment. The network devices may include, for example, firewall devices, router devices, and the like. As shown in fig. 1, the method may include the following steps.
In step S11, the maximum number of concurrent connections that the network device is authorized to use is obtained at the time of startup of the network device.
Network devices come in different models according to their memory; for example, Neteye firewall devices include 4G, 8G, 16G and 32G models. The model of a network device reflects both the maximum memory achievable given its hardware and the maximum number of concurrent connections it can provide given that hardware. After purchasing the network device, the user needs to purchase a maximum number of concurrent connections according to his or her own needs. The manufacturer of the network device authorizes the maximum number of concurrent connections for the network device based on the payment made by the user. Generally, the maximum number of concurrent connections authorized for a network device is set in the license of the system by the manufacturer; that is, the maximum number of concurrent connections set in the license of a network device is the maximum number of concurrent connections the network device is authorized to use.
The maximum number of concurrent connections may be a maximum number of concurrent connections set in a license of the network device by a manufacturer of the network device according to a current payment condition of the user. It can be understood that, in a network device, the maximum number of concurrent connections in the license is less than or equal to the maximum number of concurrent connections that the network device can provide according to its hardware condition.
Typically, the user will pay for the maximum number of concurrent connections of the network device based on his or her own needs. The maximum number of concurrent connections in the network device license is closely related to the payment condition of the user, so the maximum number of concurrent connections is relatively close to the number of concurrent connections in practical application.
The maximum number of concurrent connections described below refers to the maximum number of concurrent connections that the network device is authorized to use.
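As an illustration of step S11, the sketch below reads the authorized limit from a simple key-value license file. The `max_concurrent_connections` key and the file format are hypothetical assumptions; real license formats vary by vendor.

```python
import re

def max_concurrent_connections_from_license(license_text: str) -> int:
    """Extract the authorized maximum number of concurrent connections
    from license text.  The key name is a hypothetical example."""
    match = re.search(r"max_concurrent_connections\s*=\s*(\d+)", license_text)
    if match is None:
        raise ValueError("license does not specify a concurrent-connection limit")
    return int(match.group(1))
```

At startup, the device would read its license file and pass the contents to such a function before sizing its large page pool.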
In step S12, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections.
In this embodiment, the maximum number of concurrent connections authorized to be used by the network device may correspond to a number of large pages, similar to how each device model corresponds to a number of large pages in the related art. For example, this can be implemented simply as follows: first set the number of large pages corresponding to each possible maximum number of concurrent connections, and then, when the maximum number of concurrent connections authorized for the network device is obtained, directly determine the corresponding number as the number of large pages required by the system.
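The simple table-based implementation described above can be sketched as follows; the tier boundaries and page counts below are illustrative assumptions, not values taken from the disclosure.

```python
# Hypothetical preset table: licensed maximum concurrent connections -> large pages.
LARGE_PAGES_BY_MAX_CONNECTIONS = {
    1_000_000: 300,
    2_000_000: 600,
    5_000_000: 1400,
}

def large_pages_for_license(max_connections: int) -> int:
    """Return the preset large page count for the smallest tier that
    covers the licensed maximum number of concurrent connections."""
    for tier, pages in sorted(LARGE_PAGES_BY_MAX_CONNECTIONS.items()):
        if max_connections <= tier:
            return pages
    raise ValueError("licensed connection count exceeds all preset tiers")
```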
In step S13, the determined large page number is allocated to the system.
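On Linux, step S13 typically amounts to writing the determined count to the kernel's `nr_hugepages` control (`/proc/sys/vm/nr_hugepages`, equivalently `sysctl vm.nr_hugepages=N`). The sketch below parameterizes the path so it can be exercised against an ordinary file; writing to the real control requires root privileges.

```python
def allocate_large_pages(num_pages: int,
                         sysctl_path: str = "/proc/sys/vm/nr_hugepages") -> None:
    """Ask the kernel to reserve `num_pages` large (huge) pages by
    writing the count to the hugepage sysctl file."""
    with open(sysctl_path, "w") as f:
        f.write(str(num_pages))
```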
With the above technical solution, the number of large pages is allocated according to the maximum number of concurrent connections that the network device is authorized to use. Because the maximum number of concurrent connections reflects the actual memory usage of the system better than the device model does, allocating large pages according to it can avoid the problem of the network device failing to start due to insufficient large page allocation, while also avoiding, to a certain extent, the resource waste caused by allocating too many large pages.
In order to more accurately allocate the large pages, the number of large pages required by the system may be determined by further considering the size of the large pages and the size of each session. FIG. 2 is a flow chart of a method for large page allocation provided by another exemplary embodiment. On the basis of fig. 1, as shown in fig. 2, simultaneously with step S11, and before step S121, the method may further include the following steps.
In step S111, the size of the large page in the network device is acquired.
In step S112, the size of each session in the network device is acquired.
In this embodiment, the step of determining the number of large pages required by the system of network devices based on the maximum number of concurrent connections (step S12) may include step S121.
In step S121, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections, the size of the large pages in the network device, and the size of each session in the network device.
In this embodiment, the number of large pages required by the system can be determined, for example, by:

H_num = ⌈(n × Sess) / H_size⌉

where H_num is the number of large pages required by the system of the network device; n is the maximum number of concurrent connections; Sess is the size of each session in the network device; and H_size is the size of a large page in the network device. Rounding up ensures the whole session table fits within the allocated large pages.
The size H_size of a large page in the network device may be predetermined. For example, the size of the large page currently used by Neteye-series network devices is H_size = 2 MB. The size Sess of each session in the network device may be set differently depending on the model. For example, in the 4G and 8G models of Neteye-series network devices, the size of each session is Sess = 560 B; in the 16G and 32G models, the size of each session is Sess = 704 B.
For example, when the maximum number of concurrent connections set in the license is n = 5,000,000, the size of each session is Sess = 560 B, and the size of the large page is H_size = 2 MB, the number of large pages required by the system of the network device is H_num = ⌈(5,000,000 × 560 B) / 2 MB⌉ = 1336.
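The calculation above can be reproduced with integer ceiling division; the sketch assumes H_size = 2 MB means 2 × 1024 × 1024 bytes.

```python
def large_pages_needed(max_connections: int,
                       session_size_bytes: int,
                       large_page_bytes: int) -> int:
    """H_num = ceil(n * Sess / H_size), rounding up so the whole
    session table fits in the allocated large pages."""
    total = max_connections * session_size_bytes
    return (total + large_page_bytes - 1) // large_page_bytes

# 5,000,000 sessions of 560 B each, with 2 MB large pages:
pages = large_pages_needed(5_000_000, 560, 2 * 1024 * 1024)
```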
In the embodiment, on the basis of the maximum number of concurrent connections, the number of large pages required by the system is determined by considering the size of the large pages and the size of each session, so that the allocated large pages better meet the actual requirements, and the waste of resources is reduced while the normal operation of the network equipment is ensured.
In the embodiment shown in fig. 2, step S111, step S112 and step S11 are performed simultaneously. It is understood that in other embodiments, the three steps may be performed in any order as long as they precede step S121.
In yet another embodiment of the present disclosure, the number of large pages required by the system may be determined by further considering the size of the reserved memory. FIG. 3 is a flowchart of a method for large page allocation provided by yet another exemplary embodiment. On the basis of fig. 1, as shown in fig. 3, simultaneously with step S11 and before step S122, the method may further include the following steps.
In step S111, the size of the large page in the network device is acquired.
In step S112, the size of each session in the network device is acquired.
In step S113, the size of the reserved memory in the network device is determined.
In this embodiment, the step of determining the number of large pages required by the system of network devices based on the maximum number of concurrent connections (step S12) may include step S122.
In step S122, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections, the size of the large pages in the network device, the size of each session in the network device, and the size of the reserved memory.
In this embodiment, the number of large pages required by the system can be determined, for example, by:

H_num = ⌈(n × Sess + x) / H_size⌉

where x is the size of the reserved memory in the network device, and n, Sess and H_size are as defined above. The reserved memory includes memory reserved for other resources in the network device, and different reserved memory sizes can be preset for different device models. For example, in the 4G and 8G models of Neteye-series firewall devices, the size of the reserved memory is 300 MB; in the 16G and 32G models, the size of the reserved memory is 500 MB.
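A sketch of this variant follows, under the assumption that the reserved memory x is also carved out of large pages and therefore adds to the large page requirement.

```python
def large_pages_with_reserve(max_connections: int,
                             session_size_bytes: int,
                             reserved_bytes: int,
                             large_page_bytes: int) -> int:
    """H_num = ceil((n * Sess + x) / H_size): the reserve x for other
    resources is added on top of the session-table requirement."""
    total = max_connections * session_size_bytes + reserved_bytes
    return (total + large_page_bytes - 1) // large_page_bytes
```

With n = 5,000,000, Sess = 560 B, x = 300 MB and H_size = 2 MB, the reserve contributes exactly 150 additional 2 MB pages.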
In the embodiment, on the basis of the maximum number of concurrent connections, the number of large pages required by the system is determined by considering the size of the large pages, the size of each session and the size of the reserved memory, so that the allocated large pages better meet the actual requirements, and the resource waste is reduced while the normal operation of the network equipment is ensured.
In the embodiment shown in fig. 3, step S111, step S112, step S113 and step S11 are performed simultaneously. It is understood that in other embodiments, the four steps may be performed in any order as long as they precede step S122.
In yet another embodiment of the present disclosure, the number of large pages required by the system may be determined by further considering that the actual demand due to the memory hole is greater than the calculated demand. FIG. 4 is a flowchart of a method for large page allocation provided by yet another exemplary embodiment. On the basis of fig. 3, as shown in fig. 4, after step S11, and before step S1221, the method may further include step S110.
In step S110, a hole factor is determined according to the maximum number of concurrent connections.
The hole factor represents a coefficient of the difference, caused by memory holes, between the actually required memory and the calculated memory, and can be obtained by experience or experiment. The hole factor may be associated with the maximum number of concurrent connections. For example, a first hole factor may be used when the maximum number of concurrent connections is less than a predetermined threshold, and a second hole factor may be used when the maximum number of concurrent connections is greater than the predetermined threshold, wherein the first hole factor is less than the second hole factor.
In this embodiment, the step of determining the number of large pages required by the system of the network device (step S122) according to the maximum number of concurrent connections, the size of the large page in the network device, the size of each session in the network device, and the size of the reserved memory may include step S1221.
In step S1221, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections, the size of large pages in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor.
In the embodiment shown in fig. 4, step S111, step S112, step S113 and step S110 are performed simultaneously. It is understood that in other embodiments, the four steps may be performed in any order as long as they precede step S1221.
In an embodiment of the present disclosure, step S1221 may determine the number of large pages required by the system by:

H_num = ⌈(n × Sess × (1 + α) + x) / H_size⌉

where α is the hole factor and 0 < α < 1, and n, Sess, x and H_size are as defined above. For example, α may be 0.28 when the maximum number of concurrent connections set in the license is less than 5,000,000, and α may be 0.3 when it is greater than 5,000,000.
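Combining the hole factor with the previous quantities, step S1221 might be sketched as follows; the threshold-based selection of α mirrors the example values above.

```python
import math

def hole_factor_for(max_connections: int, threshold: int = 5_000_000) -> float:
    """Pick the smaller hole factor below the threshold, the larger one above."""
    return 0.28 if max_connections < threshold else 0.3

def large_pages_full(max_connections: int,
                     session_size_bytes: int,
                     reserved_bytes: int,
                     large_page_bytes: int,
                     hole_factor: float) -> int:
    """H_num = ceil((n * Sess * (1 + alpha) + x) / H_size): inflate the
    session-table memory by the hole factor, then add the reserve."""
    session_bytes = max_connections * session_size_bytes * (1 + hole_factor)
    return math.ceil((session_bytes + reserved_bytes) / large_page_bytes)
```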
In this embodiment, on the basis of the maximum number of concurrent connections, the size of a large page, the size of each session and the size of the reserved memory, the number of large pages required by the system is further determined by considering the hole factor, so that the allocated number of large pages better meets the actual requirement, reducing resource waste while ensuring the normal operation of the network device.
The disclosure also provides a device for large page allocation, which is applied to a network device. FIG. 5 is a block diagram of an apparatus for large page allocation provided by an exemplary embodiment. As shown in fig. 5, the apparatus 10 for large page allocation may include a maximum concurrent connection number acquisition module 11, a large page number determination module 12, and an allocation module 13.
The maximum concurrent connection number obtaining module 11 is configured to obtain the maximum number of concurrent connections authorized to be used by the network device when the network device is started.
The large page number determining module 12 is configured to determine the number of large pages required by the system of the network device according to the maximum number of concurrent connections.
The allocation module 13 is configured to allocate the determined number of large pages to the system.
Optionally, the apparatus 10 may further include a large page size obtaining module and a session size obtaining module.
And the large page size obtaining module is used for obtaining the size of the large page in the network equipment.
A session size obtaining module, configured to obtain a size of each session in the network device.
In this embodiment, the large page number determination module 12 may include a first large page number determination sub-module.
The first large page number determining submodule is used for determining the number of large pages required by the system of the network equipment according to the maximum concurrent connection number, the size of the large pages in the network equipment and the size of each session in the network equipment.
Optionally, the apparatus 10 may further include a large page size obtaining module, a session size obtaining module, and a reserved memory determining module.
The large page size obtaining module is used for obtaining the size of the large page in the network equipment.
The session size obtaining module is used for obtaining the size of each session in the network equipment.
And the reserved memory determining module is used for determining the size of the reserved memory in the network equipment.
In this embodiment, the large page number determination module 12 may include a second large page number determination sub-module.
The second large page count determining submodule may be configured to determine the number of large pages required by the system of the network device according to the maximum number of concurrent connections, the size of large pages in the network device, the size of each session in the network device, and the size of the reserved memory.
Optionally, the apparatus 10 may further include a hole factor determination module.
The hole factor determining module may be configured to determine the hole factor according to the maximum number of concurrent connections.
In this embodiment, the second large page count determination submodule may include a third large page count determination submodule.
And the third large page number determining submodule is used for determining the number of large pages required by the system of the network equipment according to the maximum concurrent connection number, the size of large pages in the network equipment, the size of each session in the network equipment, the size of the reserved memory and the hole factor.
Alternatively, the third large page number determination sub-module may determine the number of large pages required by the system of the network device by:

H_num = ⌈(n × Sess × (1 + α) + x) / H_size⌉

where H_num is the number of large pages required by the system of the network device, n is the maximum number of concurrent connections, Sess is the size of each session in the network device, x is the size of the reserved memory in the network device, α is the hole factor, and H_size is the size of a large page in the network device.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
With the above technical solution, the number of large pages is allocated according to the maximum number of concurrent connections that the network device is authorized to use. Because the maximum number of concurrent connections reflects the actual memory usage of the system better than the device model does, allocating large pages according to it can avoid the problem of the network device failing to start due to insufficient large page allocation, while also avoiding, to a certain extent, the resource waste caused by allocating too many large pages.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.