CN106649148B - Method and apparatus for large page allocation - Google Patents

Method and apparatus for large page allocation
Download PDF

Info

Publication number
CN106649148B
CN106649148B · Application CN201610889747.6A
Authority
CN
China
Prior art keywords
size
network device
network equipment
maximum
large page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610889747.6A
Other languages
Chinese (zh)
Other versions
CN106649148A (en)
Inventor
刘芳宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201610889747.6A
Publication of CN106649148A
Application granted
Publication of CN106649148B
Status: Active
Anticipated expiration

Links

Images

Classifications

Landscapes

Abstract

The disclosure provides a method and an apparatus for large page allocation, applied to a network device. The method comprises: when the network device is started, acquiring the maximum number of concurrent connections that the network device is authorized to use; determining the number of large pages required by the system of the network device according to that maximum number of concurrent connections; and allocating the determined number of large pages to the system. In this way, the problem that the network device cannot start up due to insufficient large page allocation is avoided, and the resource waste caused by excessive large page allocation is, to a certain extent, avoided as well.

Description

Method and apparatus for large page allocation
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method and apparatus for large page allocation.
Background
Large pages (HugePages) are large memory pages. In a network device, large pages need to be allocated in advance at system startup. Allocating large pages reduces the number of address-translation cache entries required, increases the cache hit rate, and makes memory address translation more efficient, thereby improving the operating efficiency of the memory.
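As an illustration (not part of the disclosure), on Linux the pre-allocated huge page count and page size are exposed through `/proc/meminfo`; a minimal Python sketch that parses text in that format:

```python
import re

def parse_hugepage_info(meminfo_text):
    """Extract the huge page size (in bytes) and the number of
    pre-allocated huge pages from /proc/meminfo-style text."""
    size_kb = int(re.search(r"Hugepagesize:\s+(\d+)\s+kB", meminfo_text).group(1))
    total = int(re.search(r"HugePages_Total:\s+(\d+)", meminfo_text).group(1))
    return size_kb * 1024, total

# Illustrative sample in the format of /proc/meminfo
sample = """\
HugePages_Total:    1336
HugePages_Free:      100
Hugepagesize:       2048 kB
"""
page_size, total = parse_hugepage_info(sample)
print(page_size, total)  # 2097152 1336
```

In a real deployment one would read the actual `/proc/meminfo`; the sample string here only illustrates the format.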
In a traditional large page allocation method, the model of a network device (for example, Neteye series firewall devices include 4G, 8G, 16G and 32G models) is judged in a boot script, and the allocation of large pages is determined according to the model of the device. Once the model of the network device is determined, the number of allocated large pages is also determined.
However, the number of large pages allocated according to the model may be insufficient or excessive. Insufficient large pages may cause the network device to be unable to start up, and excessive large page allocation may cause resource waste.
Disclosure of Invention
The purpose of the present disclosure is to provide a simple and easy method and apparatus for large page allocation.
In order to achieve the above object, the present disclosure provides a method for large page allocation, applied to a network device. The method comprises: when the network device is started, acquiring the maximum number of concurrent connections that the network device is authorized to use; determining the number of large pages required by the system of the network device according to that maximum number of concurrent connections; and allocating the determined number of large pages to the system.
Optionally, the method further comprises: acquiring the size of a large page in the network equipment; obtaining the size of each session in the network equipment;
the step of determining the number of large pages required by the system of the network device according to the maximum number of concurrent connections comprises: determining a number of large pages required by a system of the network device according to the maximum number of concurrent connections, a size of large pages in the network device, and a size of each session in the network device.
Optionally, the method further comprises: acquiring the size of a large page in the network equipment; obtaining the size of each session in the network equipment; determining the size of a reserved memory in the network equipment;
the step of determining the number of large pages required by the system of the network device according to the maximum number of concurrent connections comprises: determining the number of large pages required by the system of the network device according to the maximum number of concurrent connections, the size of the large pages in the network device, the size of each session in the network device, and the size of the reserved memory.
Optionally, the method further comprises: determining a hole factor according to the maximum number of concurrent connections;
the step of determining the number of large pages required by the system of the network device according to the maximum number of concurrent connections, the size of large pages in the network device, the size of each session in the network device, and the size of the reserved memory includes: determining the number of large pages required by the system of the network device according to the maximum concurrent connection number, the size of the large pages in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor.
Optionally, the step of determining the number of large pages required by the system of the network device according to the maximum number of concurrent connections, the size of large pages in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor is performed by:
H_num = ⌈(n × Sess × (1 + α) + x) / H_size⌉

wherein H_num is the number of large pages required by the system of the network device, n is the maximum number of concurrent connections, Sess is the size of each session in the network device, x is the size of the reserved memory in the network device, α is the hole factor, and H_size is the size of a large page in the network device.
The disclosure also provides an apparatus for large page allocation, applied to a network device. The apparatus comprises: a maximum concurrent connection number acquisition module, configured to acquire the maximum number of concurrent connections authorized to be used by the network device when the network device is started; a large page number determination module, configured to determine, according to the maximum number of concurrent connections, the number of large pages required by the system of the network device; and an allocation module, configured to allocate the determined number of large pages to the system.
Through the above technical solution, the number of large pages is allocated according to the maximum number of concurrent connections that the network device is authorized to use. Because the maximum number of concurrent connections reflects the actual memory usage of the system better than the device model does, allocating large pages according to it avoids the problem that the network device cannot start up due to insufficient large page allocation and, to a certain extent, also avoids the resource waste caused by excessive large page allocation.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a flow diagram of a method for large page allocation provided by an exemplary embodiment;
FIG. 2 is a flow chart of a method for large page allocation provided by another exemplary embodiment;
FIG. 3 is a flowchart of a method for large page allocation provided by yet another exemplary embodiment;
FIG. 4 is a flowchart of a method for large page allocation provided by yet another exemplary embodiment;
FIG. 5 is a block diagram of an apparatus for large page allocation provided by an exemplary embodiment.
Detailed Description
The following is a detailed description of specific embodiments of the present disclosure in connection with the accompanying drawings. It should be understood that the detailed description and specific examples are given by way of illustration and explanation only, and do not limit the present disclosure.
Different models of network devices represent different configuration levels. A user typically selects a network device model based on its configuration parameters. For example, a user selects a firewall device based on parameters such as firewall throughput (packets processed per second), the maximum number of concurrent connections, the number of newly-established connections, VPN performance, and the like.
The conventional large page allocation method determines the allocation of large pages according to the model of the device. That is, each model of network device corresponds to a specific number of allocated large pages. When the network device is started, the number of large pages allocated to the system is determined once the model is detected. The rationale is that a network device of a higher-end model can support more concurrent connections and therefore needs more large-page memory reserved as storage space for the session table. Accordingly, the conventional allocation method presets, for each model of network device, a fixed number of large pages that roughly corresponds to that model's configuration.
However, users sometimes purchase a higher-end model only for parameters other than the maximum number of concurrent connections (for example, VPN performance). In that case, the large page allocation may be excessive in practice, wasting resources. This is because concurrent connections occupy large-page memory, while some other features (for example, VPN, newly-established connections, and IPS) do not. The more large pages are allocated, the less memory remains for other resources. If the user does not need that many concurrent connections, a high-end model need not be allocated its full preset number of large pages, leaving more resources available for the other features.
In view of the above, the inventors propose allocating the number of large pages to the system based on the maximum number of concurrent connections that the network device is authorized to use, that is, the number of concurrent connections the user has paid for. In this way, large pages can be allocated more reasonably, in line with the actual number of concurrent connections when the user operates the network device.
FIG. 1 is a flow diagram of a method for large page allocation provided by an exemplary embodiment. The method is applied to the network equipment. The network devices may include, for example, firewall devices, router devices, and the like. As shown in fig. 1, the method may include the following steps.
In step S11, the maximum number of concurrent connections that the network device is authorized to use is obtained at the time of startup of the network device.
Network devices come in different models according to their memory; for example, Neteye firewall devices include 4G, 8G, 16G, and 32G models. The model reflects the maximum memory available given the hardware of the network device, and thus also the maximum number of concurrent connections the hardware can provide. After purchasing the network device, the user pays for a maximum number of concurrent connections according to his or her own needs, and the manufacturer of the network device authorizes that maximum number for the device accordingly. Generally, the authorized maximum number of concurrent connections is set by the manufacturer in the system's license; that is, the maximum number of concurrent connections set in the license of a network device is the maximum number of concurrent connections the device is authorized to use.
The maximum number of concurrent connections may be a maximum number of concurrent connections set in a license of the network device by a manufacturer of the network device according to a current payment condition of the user. It can be understood that, in a network device, the maximum number of concurrent connections in the license is less than or equal to the maximum number of concurrent connections that the network device can provide according to its hardware condition.
Typically, the user pays for the maximum number of concurrent connections of the network device based on his or her own needs. Because the maximum number of concurrent connections in the license is closely tied to what the user has paid for, it is relatively close to the number of concurrent connections in practical use.
The maximum number of concurrent connections described below refers to the maximum number of concurrent connections that the network device is authorized to use.
In step S12, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections.
In this embodiment, the maximum number of concurrent connections authorized to be used by the network device may be mapped to a number of large pages, similar to how, in the related art, each device model corresponds to a number of large pages. For example, a simple implementation is: preset the number of large pages corresponding to each maximum number of concurrent connections, and, once the maximum number of concurrent connections authorized to be used by the network device is obtained, directly take the corresponding preset number as the number of large pages required by the system.
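This simple lookup-table implementation can be sketched as follows; the table values and names here are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical preset table: authorized max concurrent connections -> large pages.
# These numbers are illustrative, not taken from the patent.
HUGEPAGES_BY_MAX_CONNECTIONS = {
    1_000_000: 300,
    5_000_000: 1400,
    10_000_000: 2800,
}

def hugepages_for(max_connections):
    """Return the preset number of large pages for the smallest
    table entry that covers the authorized connection count."""
    for limit in sorted(HUGEPAGES_BY_MAX_CONNECTIONS):
        if max_connections <= limit:
            return HUGEPAGES_BY_MAX_CONNECTIONS[limit]
    raise ValueError("max connections exceeds supported range")

print(hugepages_for(3_000_000))  # 1400
```

A license for up to 3,000,000 connections falls into the 5,000,000 bucket here, so the preset count for that bucket is returned.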
In step S13, the determined large page number is allocated to the system.
Through the above technical solution, the number of large pages is allocated according to the maximum number of concurrent connections that the network device is authorized to use. Because the maximum number of concurrent connections reflects the actual memory usage of the system better than the device model does, allocating large pages according to it avoids the problem that the network device cannot start up due to insufficient large page allocation and, to a certain extent, also avoids the resource waste caused by excessive large page allocation.
In order to allocate large pages more accurately, the number of large pages required by the system may be determined by further considering the size of a large page and the size of each session. FIG. 2 is a flow chart of a method for large page allocation provided by another exemplary embodiment. On the basis of fig. 1, as shown in fig. 2, the method may further include the following steps, performed in parallel with step S11 and before step S121.
In step S111, the size of the large page in the network device is acquired.
In step S112, the size of each session in the network device is acquired.
In this embodiment, the step of determining the number of large pages required by the system of network devices based on the maximum number of concurrent connections (step S12) may include step S121.
In step S121, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections, the size of the large pages in the network device, and the size of each session in the network device.
In this embodiment, the number of large pages required by the system can be determined, for example, by:
H_num = ⌈(n × Sess) / H_size⌉

wherein H_num is the number of large pages required by the system of the network device; n is the maximum number of concurrent connections; Sess is the size of each session in the network device; and H_size is the size of a large page in the network device.

The size H_size of a large page in the network device may be predetermined. For example, the size of the large page currently used by Neteye-series network devices is H_size = 2M. The size Sess of each session in the network device may be set differently depending on the model. For example, in the 4G and 8G models of Neteye-series network devices, the size of each session is Sess = 560B; in the 16G and 32G models, the size of each session is Sess = 704B.
For example, when the maximum number of concurrent connections set in the license is n = 5,000,000, the size of each session is Sess = 560B, and the size of the large page is H_size = 2M, the number of large pages required by the system of the network device is

H_num = ⌈(5,000,000 × 560B) / 2M⌉ = ⌈1335.14⌉ = 1336
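The calculation above can be reproduced directly; a minimal sketch of the formula H_num = ⌈n × Sess / H_size⌉, using the worked numbers from the text:

```python
import math

def hugepages_needed(n, sess_bytes, hugepage_bytes):
    """Number of large pages: H_num = ceil(n * Sess / H_size)."""
    return math.ceil(n * sess_bytes / hugepage_bytes)

# Worked example: n = 5,000,000 connections, Sess = 560 B, H_size = 2 MB
print(hugepages_needed(5_000_000, 560, 2 * 1024 * 1024))  # 1336
```

5,000,000 × 560 B = 2,800,000,000 B, and dividing by 2,097,152 B per page gives 1335.14, which rounds up to 1336 pages.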
In this embodiment, on the basis of the maximum number of concurrent connections, the number of large pages required by the system is determined by also considering the size of a large page and the size of each session, so that the allocated large pages better meet actual requirements, ensuring normal operation of the network device while reducing resource waste.
In the embodiment shown in fig. 2, step S111, step S112 and step S11 are performed simultaneously. It is understood that in other embodiments, the three steps may be performed in any order as long as they precede step S121.
In yet another embodiment of the present disclosure, the number of large pages required by the system may be determined by further considering the size of the reserved memory. FIG. 3 is a flowchart of a method for large page allocation provided by yet another exemplary embodiment. On the basis of fig. 1, as shown in fig. 3, simultaneously with step S11 and before step S122, the method may further include the following steps.
In step S111, the size of the large page in the network device is acquired.
In step S112, the size of each session in the network device is acquired.
In step S113, the size of the reserved memory in the network device is determined.
In this embodiment, the step of determining the number of large pages required by the system of network devices based on the maximum number of concurrent connections (step S12) may include step S122.
In step S122, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections, the size of the large pages in the network device, the size of each session in the network device, and the size of the reserved memory.
In this embodiment, the number of large pages required by the system can be determined, for example, by:
H_num = ⌈(n × Sess + x) / H_size⌉
wherein x is the size of the reserved memory in the network device. The reserved memory is memory reserved for other resources in the network device, and different sizes may be preset for different device models. For example, in the 4G and 8G models of Neteye-series firewall devices, the size of the reserved memory is 300M; in the 16G and 32G models, it is 500M.
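A sketch of this variant of the formula, combining the earlier example numbers with the 300M reserved memory of the 4G/8G models; the resulting count is just this formula's arithmetic, not a value stated in the patent:

```python
import math

def hugepages_needed(n, sess_bytes, reserved_bytes, hugepage_bytes):
    """Number of large pages: H_num = ceil((n * Sess + x) / H_size)."""
    return math.ceil((n * sess_bytes + reserved_bytes) / hugepage_bytes)

MB = 1024 * 1024
# n = 5,000,000, Sess = 560 B, x = 300 MB reserved, H_size = 2 MB
print(hugepages_needed(5_000_000, 560, 300 * MB, 2 * MB))  # 1486
```

The 300 MB of reserved memory adds 150 more 2 MB pages on top of the 1336 from the session memory alone.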
In this embodiment, on the basis of the maximum number of concurrent connections, the number of large pages required by the system is determined by also considering the size of a large page, the size of each session, and the size of the reserved memory, so that the allocated large pages better meet actual requirements, ensuring normal operation of the network device while reducing resource waste.
In the embodiment shown in fig. 3, step S111, step S112, step S113 and step S11 are performed simultaneously. It is understood that in other embodiments, the four steps may be performed in any order as long as they precede step S122.
In yet another embodiment of the present disclosure, the number of large pages required by the system may be determined by further taking into account that, because of memory holes, the actual memory demand is greater than the calculated demand. FIG. 4 is a flowchart of a method for large page allocation provided by yet another exemplary embodiment. On the basis of fig. 3, as shown in fig. 4, after step S11 and before step S1221, the method may further include step S110.
In step S110, a hole factor is determined according to the maximum number of concurrent connections.
The hole factor is a coefficient representing the difference, caused by memory holes, between the actually required memory and the calculated memory; it can be obtained from experience or experiment. The hole factor may be associated with the maximum number of concurrent connections. For example, a first hole factor may be used when the maximum number of concurrent connections is less than a predetermined threshold, and a second hole factor when it is greater than the threshold, the first hole factor being less than the second.
In this embodiment, the step of determining the number of large pages required by the system of the network device (step S122) according to the maximum number of concurrent connections, the size of the large page in the network device, the size of each session in the network device, and the size of the reserved memory may include step S1221.
In step S1221, the number of large pages required by the system of the network device is determined according to the maximum number of concurrent connections, the size of large pages in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor.
In the embodiment shown in fig. 4, step S111, step S112, step S113 and step S110 are performed simultaneously. It is understood that in other embodiments, the four steps may be performed in any order as long as they precede step S1221.
In an embodiment of the present disclosure, step S1221 may determine the number of large pages required by the system by:
H_num = ⌈(n × Sess × (1 + α) + x) / H_size⌉
wherein α is the hole factor and 0 < α < 1. For example, α may be 0.28 when the maximum number of concurrent connections set in the license is less than 5,000,000, and α may be 0.3 when it is greater than 5,000,000.
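Putting the pieces together as code; note that the exact way the hole factor enters the formula is not legible in the extracted text, so the sketch below assumes the session memory is inflated by (1 + α), which matches the stated meaning of the factor but is an assumption:

```python
import math

def hole_factor(n, threshold=5_000_000):
    """Pick the hole factor by the authorized connection count
    (0.28 below the threshold, 0.3 above, per the text)."""
    return 0.28 if n < threshold else 0.3

def hugepages_needed(n, sess_bytes, reserved_bytes, hugepage_bytes):
    """Assumed form: H_num = ceil((n * Sess * (1 + a) + x) / H_size)."""
    a = hole_factor(n)
    return math.ceil((n * sess_bytes * (1 + a) + reserved_bytes) / hugepage_bytes)

MB = 1024 * 1024
# n = 4,000,000 < threshold, so a = 0.28; Sess = 560 B, x = 300 MB, H_size = 2 MB
print(hugepages_needed(4_000_000, 560, 300 * MB, 2 * MB))  # 1518
```

With 4,000,000 connections the session memory is 2.24 GB; inflating by 1.28 and adding 300 MB reserved memory yields roughly 3.18 GB, i.e. 1518 pages of 2 MB.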
In this embodiment, the number of large pages required by the system is determined from the maximum number of concurrent connections, the size of a large page, the size of each session, the size of the reserved memory, and the hole factor, so that the allocated large pages better meet actual requirements, ensuring normal operation of the network device while reducing resource waste.
The disclosure also provides an apparatus for large page allocation, applied to a network device. FIG. 5 is a block diagram of an apparatus for large page allocation provided by an exemplary embodiment. As shown in fig. 5, the apparatus 10 for large page allocation may include a maximum concurrent connection number acquisition module 11, a large page number determination module 12, and an allocation module 13.

The maximum concurrent connection number acquisition module 11 is configured to acquire the maximum number of concurrent connections authorized to be used by the network device when the network device is started.

The large page number determination module 12 is configured to determine the number of large pages required by the system of the network device according to the maximum number of concurrent connections.

The allocation module 13 is configured to allocate the determined number of large pages to the system.
Optionally, the apparatus 10 may further include a large page size obtaining module and a session size obtaining module.

The large page size obtaining module is configured to obtain the size of a large page in the network device.

The session size obtaining module is configured to obtain the size of each session in the network device.

In this embodiment, the large page number determination module 12 may include a first large page number determination sub-module.

The first large page number determination sub-module is configured to determine the number of large pages required by the system of the network device according to the maximum number of concurrent connections, the size of a large page in the network device, and the size of each session in the network device.
Optionally, the apparatus 10 may further include a large page size obtaining module, a session size obtaining module, and a reserved memory determining module.

The large page size obtaining module is configured to obtain the size of a large page in the network device.

The session size obtaining module is configured to obtain the size of each session in the network device.

The reserved memory determining module is configured to determine the size of the reserved memory in the network device.

In this embodiment, the large page number determination module 12 may include a second large page number determination sub-module.

The second large page number determination sub-module may be configured to determine the number of large pages required by the system of the network device according to the maximum number of concurrent connections, the size of a large page in the network device, the size of each session in the network device, and the size of the reserved memory.
Optionally, the apparatus 10 may further include a hole factor determination module.

The hole factor determination module may be configured to determine the hole factor according to the maximum number of concurrent connections.

In this embodiment, the second large page number determination sub-module may include a third large page number determination sub-module.

The third large page number determination sub-module is configured to determine the number of large pages required by the system of the network device according to the maximum number of concurrent connections, the size of a large page in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor.
Alternatively, the third large page count determination sub-module may determine the number of large pages required by the system of the network device by:
H_num = ⌈(n × Sess × (1 + α) + x) / H_size⌉

wherein H_num is the number of large pages required by the system of the network device, n is the maximum number of concurrent connections, Sess is the size of each session in the network device, x is the size of the reserved memory in the network device, α is the hole factor, and H_size is the size of a large page in the network device.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
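The module breakdown above maps naturally onto a small class; the sketch below uses illustrative names and the same assumed form of the formula as in the method embodiments (the way the hole factor enters is an assumption, as noted there):

```python
import math

class LargePageAllocator:
    """Sketch of the apparatus: each method mirrors one module."""

    def __init__(self, license_max_connections, sess_bytes, reserved_bytes,
                 hugepage_bytes=2 * 1024 * 1024):
        self.n = license_max_connections   # max concurrent connection number acquisition
        self.sess = sess_bytes             # session size obtaining module
        self.reserved = reserved_bytes     # reserved memory determining module
        self.hugepage = hugepage_bytes     # large page size obtaining module

    def hole_factor(self):
        # hole factor determination module (thresholds from the text)
        return 0.28 if self.n < 5_000_000 else 0.3

    def pages_required(self):
        # large page number determination module (assumed formula)
        a = self.hole_factor()
        return math.ceil((self.n * self.sess * (1 + a) + self.reserved) / self.hugepage)

    def allocate(self):
        # allocation module: a real system would hand this count to the
        # kernel at startup; here we simply return it
        return self.pages_required()

MB = 1024 * 1024
alloc = LargePageAllocator(4_000_000, 560, 300 * MB)
print(alloc.allocate())  # 1518
```

Keeping each concern in its own method mirrors the patent's module structure and makes the hole-factor policy easy to swap out.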
Through the above technical solution, the number of large pages is allocated according to the maximum number of concurrent connections that the network device is authorized to use. Because the maximum number of concurrent connections reflects the actual memory usage of the system better than the device model does, allocating large pages according to it avoids the problem that the network device cannot start up due to insufficient large page allocation and, to a certain extent, also avoids the resource waste caused by excessive large page allocation.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (4)

1. A method for large page allocation, applied to a network device, the method comprising:
when the network device is started, acquiring the maximum number of concurrent connections authorized to be used by the network device, wherein the maximum number of concurrent connections authorized to be used is the maximum number of concurrent connections set in a license of the network device by a manufacturer of the network device according to the current payment condition of a user;
acquiring the size of a large page in the network device;
obtaining the size of each session in the network device;
determining the size of a reserved memory in the network device;
determining a hole factor according to the maximum number of the concurrent connections authorized to be used;
determining the number of large pages required by a system of the network device according to the maximum number of concurrent connections authorized to be used, the size of the large pages in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor;
the determined number of large pages is allocated for the system.
2. The method of claim 1, wherein the step of determining the number of large pages required by the system of the network device based on the maximum number of concurrent connections authorized for use, the size of large pages in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor is performed by:
H_num = ⌈(n × Sess × (1 + α) + x) / H_size⌉

wherein H_num is the number of large pages required by the system of the network device, n is the maximum number of the concurrent connections authorized to be used, Sess is the size of each session in the network device, x is the size of the reserved memory in the network device, α is the hole factor, and H_size is the size of a large page in the network device.
3. An apparatus for large page allocation, applied to a network device, the apparatus comprising:
a maximum concurrent connection number obtaining module, configured to obtain a maximum concurrent connection number authorized to be used by the network device when the network device is started, where the maximum concurrent connection number authorized to be used is a maximum concurrent connection number set in a license of the network device by a manufacturer of the network device according to a current payment condition of a user;
a large page size obtaining module, configured to obtain a size of a large page in the network device;
a session size obtaining module, configured to obtain a size of each session in the network device;
a reserved memory determining module, configured to determine the size of a reserved memory in the network device;
a hole factor determining module, configured to determine a hole factor according to the maximum number of concurrent connections authorized to be used;
a large page number determining module, configured to determine, according to the maximum number of concurrent connections authorized to be used, a number of large pages required by the network device system;
an allocation module for allocating the determined number of large pages to the system,
wherein the large page count determination module comprises:
a third large page number determining submodule, configured to determine, according to the maximum number of concurrent connections authorized to be used, the size of a large page in the network device, the size of each session in the network device, the size of the reserved memory, and the hole factor, the number of large pages required by the system of the network device.
4. The apparatus of claim 3, wherein the third large page number determining submodule determines the number of large pages required by the system of the network device by:
Hnum = ⌈(n·Sess + x)·α / Hsize⌉
wherein Hnum is the number of large pages required by the system of the network device, n is the maximum number of concurrent connections authorized to be used, Sess is the size of each session in the network device, x is the size of the reserved memory in the network device, α is the hole factor, and Hsize is the size of a large page in the network device.
CN201610889747.6A (filed 2016-10-11, priority 2016-10-11) · Method and apparatus for large page allocation · Active · Granted as CN106649148B (en)


Publications (2)

Publication Number | Publication Date
CN106649148A (en) | 2017-05-10
CN106649148B (en) | 2020-04-17

Family

ID=58855881


Families Citing this family (1)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN114443280B* | 2021-12-27 | 2025-02-11 | 天翼云科技有限公司 | Memory resource management method, device, computer equipment and medium for cloud firewall

Citations (5)

Publication number | Priority date | Publication date | Assignee | Title
US7379069B2* | 2001-02-15 | 2008-05-27 | Sony Corporation | Checkerboard buffer using two-dimensional buffer pages
CN101763347A* | 2008-12-24 | 2010-06-30 | 中国移动通信集团河北有限公司 | GIS (Geographical Information System) interface platform as well as network GIS management system and management method
CN102053916A* | 2010-12-17 | 2011-05-11 | 天津曙光计算机产业有限公司 | Method for distributing large continuous memory of kernel
CN105893269A* | 2016-03-31 | 2016-08-24 | 武汉虹信技术服务有限责任公司 | Memory management method used in Linux system
CN105988876A* | 2015-03-27 | 2016-10-05 | 杭州迪普科技有限公司 | Memory allocation method and apparatus




Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
