Background
Serverless computing, commonly referred to as FaaS (Function-as-a-Service), is rapidly reshaping cloud computing architecture: developers offload infrastructure management and focus only on their function code. Developers decompose monolithic applications into fine-grained functions and invoke them in isolated sandboxes. Current serverless systems focus primarily on horizontal scaling (scale-out) and use secure containers to dynamically allocate resources. In this arrangement, each container occupies its own dedicated lightweight virtual machine (microVM), following a single-container-per-VM (SCPV) model. Because every microVM must run its own Guest operating system, the aggregate Guest OS memory overhead grows significantly, especially when the number of serverless containers on each node rises into the thousands.
To address this, cloud vendors have introduced microVM templates, such as the lightweight secure container RunD, to minimize memory occupation across Guest instances. The template technique lets the Guest kernel load memory on demand, allowing code segments and read-only data segments to be shared among multiple microVM instances. If a kernel file in the template is never accessed, it occupies no physical memory, which reduces the memory footprint of each instance. RunD uses this technique to significantly reduce the memory footprint of each new microVM created when secure containers scale out.
In addition, application invocation patterns are heavily skewed. The top 18.6% of applications account for over 99.6% of serverless platform invocations, and each function in these popular applications creates multiple microVMs. For example, replaying the function with ID bba3cc in the Azure trace shows that this single function triggers the creation of more than 50 microVMs. If the serverless runtime could let multiple containers of the same function coexist in one virtual machine created from a template, these containers could share the memory footprint of the Guest environment far more efficiently. This execution model, referred to as the multi-container-per-VM (MCPV) model, has significant potential to improve resource efficiency under high-concurrency startup and high-density deployment in serverless environments.
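The amortization argument can be made concrete with a back-of-the-envelope calculation. The numbers below (a per-guest OS footprint of 128 MB and 50 containers for one hot function) are assumptions chosen only for illustration, not measurements from the application:

/* Illustrative arithmetic only; the per-guest footprint and container
 * count are assumed values, not figures from this application. */
#include <stdio.h>

int main(void) {
    const int containers  = 50;    /* replicas of one popular function      */
    const int guest_os_mb = 128;   /* assumed Guest OS footprint per microVM */

    /* SCPV: one microVM (and one Guest OS) per container. */
    int scpv_overhead = containers * guest_os_mb;

    /* MCPV: the containers share one microVM, hence one Guest OS. */
    int mcpv_overhead = guest_os_mb;

    printf("SCPV guest-OS overhead: %d MB\n", scpv_overhead);   /* 6400 MB */
    printf("MCPV guest-OS overhead: %d MB\n", mcpv_overhead);   /*  128 MB */
    return 0;
}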
When implementing the MCPV model to accommodate multiple containers of the same function in one microVM, an intuitive approach is to create a static number of containers within a fixed-size microVM. This static capacity may be determined by the resource specification provided by the developer or by analysis performed by the cloud provider. However, this static MCPV model has a significant disadvantage: the container capacity of a microVM is easily over-provisioned, leaving wasted resource fragments.
These limitations motivate a dynamic MCPV model, in which a microVM adjusts its memory size and container count according to the fluctuating workload. When function requests are sparse, the microVM can shrink to a single container, avoiding the memory fragmentation caused by pre-allocating resources. Conversely, when function requests increase, the microVM can extend its memory and CPU resources, adding containers as needed so that the Guest OS memory overhead is amortized across multiple containers. When the load decreases and container processes exit after a timeout, the microVM can shrink and free memory resources. The dynamic MCPV model thus decouples container resources from the microVM lifecycle, eliminating memory fragmentation and waste.
Content of the application
In view of the above-mentioned drawbacks of the prior art, an object of the present application is to provide a memory hot plug control method and an electronic device for a serverless secure container, which address the above-mentioned shortcomings of microVM memory adjustment.
To achieve the above and other related objects, the present application provides a method for controlling memory hot plug of a serverless secure container, which includes: allocating memory from a movable zone based on CZone when a memory allocation request initiated by a container process is received; and performing vertical container scaling based on a RunD-V call to CZone when a function container allocation request is received.
In one embodiment of the present application, allocating memory from the movable zone based on CZone comprises: configuring CZone as a proxy for the movable zone; when a memory allocation request initiated by a container process is received, proxying the memory allocation request to the CZone; detecting whether the CZone satisfies the memory allocation request; and, when the CZone cannot satisfy the memory allocation request, allocating memory from the movable zone.
In one embodiment of the present application, allocating memory from the movable zone comprises: configuring a CZone ID field in the Linux task structure and setting it according to the CZone associated with the process; and, when a memory allocation request initiated by a container process is received, allocating memory from the movable zone according to the value of the CZone ID field.
In one embodiment of the application, the CZone ID field is inherited by a child process of the container process, which is created through the wrapper system call czone_fork().
In one embodiment of the application, vertical container scaling based on the RunD-V call to CZone comprises: when all containers in the microVM are detected to be busy, RunD-V triggers vertical scaling by calling CZone to hot-plug the memory region required by the container; a child process is created from the initial container, CZone forms the CZone ID field and records the container ID and memory region ID in CZone; the function instance management agent calls cgroup to limit CPU and memory usage and attaches the cgroup to the container process; RunD-V locates the lightweight cgroup attached to the microVM and raises its resource limit according to the container's specification; and the RunD-V runtime notifies the agent in the microVM to add the initial container to the dispatch group, completing the vertical scaling.
In one embodiment of the present application, the method further comprises: RunD-V configuring a CZone flag and its initial state in the Guest of each microVM; allowing CZone to perform vertical container scaling when the microVM detects that the CZone flag is in a preset state; and changing the state of the CZone flag while CZone performs vertical container scaling.
In one embodiment of the application, when the RunD-V runtime detects that the number of rejected scaling requests reaches a preset value, the creation of a new microVM instance is triggered, i.e., RunD-V performs horizontal scaling.
In one embodiment of the application, when a container instance is detected to have been idle for a time threshold, vertical or horizontal resources are reclaimed.
To achieve the above and other related objects, the present application also provides a computer storage medium storing program instructions which, when executed, implement the steps of the memory hot plug control method for a serverless secure container described above.
To achieve the above and other related objects, the present application also provides an electronic device, including a memory for storing a computer program and a processor for running the computer program to implement the steps of the memory hot plug control method for a serverless secure container described above.
As described above, the memory hot plug control method and electronic device for a serverless secure container of the present application have the following beneficial effects:
The application provides an effective memory hot plug technique that minimizes performance degradation during dynamic memory allocation. Multiple replica containers of a function can reduce the extra VM memory overhead through vertical scaling within a single VM. This provides technical support for applying serverless computing to high-density deployment and high-concurrency startup, enables the construction of a commercially viable serverless computing system based on hybrid resource scaling, and offers cloud providers an efficient container scaling service.
Detailed Description
The following specific embodiments are described so that those skilled in the art can readily appreciate the advantages and benefits disclosed herein. The application may also be implemented or applied in other specific embodiments, and various modifications and changes may be made to the details of this description from different viewpoints and applications without departing from its spirit. It should be noted that the following embodiments, and the features within them, may be combined with one another provided there is no conflict.
Efficient memory hot plug for microVMs is crucial, as it enables on-demand allocation of microVM pages. Virtio-balloon and virtio-mem are two widely used open-source paravirtualized memory devices designed to release Guest physical memory. The Guest operating system typically does not touch all of its memory up front but allocates it on demand when used, a behavior known as demand paging. Thus, there are actually two kinds of free memory in the Guest: memory that has never been used by a container and occupies no host physical RAM (non-dirty pages), and memory that has been marked free after being released by a Guest container but, having been touched, still occupies host physical RAM (dirty pages).
However, virtio-balloon and virtio-mem do not account for the demand-paged nature of microVM page allocation and cannot distinguish never-allocated pages from backed ones. During Guest memory offlining, the system may mark online pages that were never used by the Guest as offline, without prioritizing pages that the Guest has previously used. As a result, both techniques fail to prioritize the right pages and may not return enough memory to the host, which is particularly apparent when a microVM is shrunk. Figure 1 illustrates the problem. In fig. 1, the white boxes indicate Guest memory that has never been used, the gray boxes (units 1, 2, 6 and 7) indicate Guest memory currently in use, and the blue boxes (units 3 and 4) indicate memory blocks that were used by a container that has since been reclaimed and is now idle. Units 1, 2, 3, 4, 6 and 7 therefore occupy host memory. When the Guest tries to hot-unplug half of its memory pages and return them to the host, virtio-balloon and virtio-mem may pick the white boxes (5 and 8) as offlining candidates as well, because these blocks are free and, from the Guest's point of view, indistinguishable from the blue boxes that still occupy host memory. When the white boxes are unplugged, the Guest believes a certain amount of memory has been released, but the memory actually reclaimed on the host side does not correspond, so the reclamation seen by the host and the Guest is inconsistent.
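The mismatch can be modeled in a few lines. The following is a toy userspace sketch, not virtio code; the block layout loosely follows the figure-1 example, and the two selection policies are illustrative:

/* Toy model of the reclamation mismatch: an ownership-blind policy may
 * "release" blocks that never occupied host RAM in the first place. */
#include <stdio.h>

enum state { NEVER_TOUCHED, IN_USE, FREED_STILL_BACKED };

int main(void) {
    /* Blocks 1..8: 1,2,6,7 in use; 3,4 freed by a reclaimed container but
     * still backed by host RAM; 5,8 never touched. */
    enum state blk[8] = { IN_USE, IN_USE, FREED_STILL_BACKED, FREED_STILL_BACKED,
                          NEVER_TOUCHED, IN_USE, IN_USE, NEVER_TOUCHED };
    int want = 2, guest_thinks_freed = 0, host_actually_freed = 0;

    /* Ownership-blind choice: blocks 3,4 and 5,8 look equally "free" to the
     * Guest, so the never-touched blocks may be picked for offlining. */
    for (int i = 0; i < 8 && guest_thinks_freed < want; i++)
        if (blk[i] == NEVER_TOUCHED)
            guest_thinks_freed++;            /* frees no host RAM */

    /* Ownership-aware choice: offline the freed-but-backed blocks instead. */
    for (int i = 0; i < 8 && host_actually_freed < want; i++)
        if (blk[i] == FREED_STILL_BACKED)
            host_actually_freed++;

    printf("blind policy : guest reports %d blocks returned, host frees 0\n",
           guest_thinks_freed);
    printf("aware policy : host frees %d blocks\n", host_actually_freed);
    return 0;
}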
In the dynamic MCPV model, offlining memory with virtio therefore leaves a large number of pages that are free in the Guest without actually being returned to the host, so they continue to occupy host memory. This persistent occupancy shrinks the memory the host has available for resource provisioning, lowering memory utilization.
ACPI (Advanced Configuration and Power Interface) is an open industry specification jointly developed by several software and hardware companies. It enables software, hardware, the operating system (OS), the motherboard and peripherals to manage power consumption in a defined way, and lets the operating system control plug-and-play devices directly on behalf of the user, unlike traditional management performed through the BIOS. ACPI memory hot plug is also a basic method for dynamically adding or removing virtual machine memory: by emulating memory device hot plug, it can accurately determine which portion of Guest memory should be brought online or taken offline using the correct memory device ID.
However, the Linux kernel cannot further partition memory by container process and cannot prioritize where page allocations should land. In the dynamic MCPV model, containers are frequently created and destroyed inside a microVM, which produces a hybrid memory layout. For example, assume that three containers A, B and C are activated simultaneously in a microVM, as shown in fig. 2. In the first phase, the three containers and their processes are created and begin consuming memory at time t0. This simultaneous activation causes the containers to perform memory allocation concurrently. Each container is allocated 3 memory blocks to meet its specification, for a total of 9 memory blocks allocated in the Guest. All container processes thus obtain the memory they need, and all memory devices are dirtied and partially occupied by the three container processes. Since the buddy allocator has no information about which container owns which memory device, it satisfies these allocation requests indiscriminately, leaving all memory devices interleaved.
Allocating pages in such a hybrid memory layout inevitably forces large numbers of pages to migrate between memory devices, especially when a memory device is removed, causing serious performance degradation. As shown in fig. 2, when container B is reclaimed and its memory blocks are freed in the second phase, the microVM begins returning memory device B to the host at time t6. At this point, two memory blocks belonging to container A and container C still reside in memory device B. If the microVM is to remove memory device B, memory migration becomes a problem: these two blocks must be migrated into free space on other memory devices, which consumes memory bandwidth and significantly affects the performance of containers A and C.
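The migration cost can be seen in a minimal model. The interleaved layout below is an assumed example of what an ownership-blind buddy allocator might produce; nothing here is kernel code:

/* Illustrative model of the migration forced by removing a memory device
 * that still holds blocks owned by other containers. */
#include <stdio.h>

#define DEVICES 3
#define BLOCKS_PER_DEVICE 3

int main(void) {
    /* owner[d][b]: which container ('A','B','C') owns the block, 0 = free.
     * Interleaved layout produced by an ownership-blind allocator. */
    char owner[DEVICES][BLOCKS_PER_DEVICE] = {
        { 'A', 'B', 'C' },   /* memory device A */
        { 'A', 'B', 'C' },   /* memory device B */
        { 'A', 'B', 'C' },   /* memory device C */
    };

    /* Container B is reclaimed: free its blocks everywhere. */
    for (int d = 0; d < DEVICES; d++)
        for (int b = 0; b < BLOCKS_PER_DEVICE; b++)
            if (owner[d][b] == 'B') owner[d][b] = 0;

    /* Removing memory device B (index 1) forces every still-owned block on
     * it to be migrated to free space on the remaining devices. */
    int migrations = 0;
    for (int b = 0; b < BLOCKS_PER_DEVICE; b++)
        if (owner[1][b] != 0) migrations++;

    printf("blocks to migrate before device B can be offlined: %d\n", migrations);
    /* With a per-container region, device B would only ever hold container
     * B's blocks and could be offlined with zero migrations. */
    return 0;
}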
If ACPI is used to resize memory in the dynamic MCPV model, memory bandwidth may therefore become a major bottleneck. This greatly increases the load on the host and hurts the runtime performance of memory-intensive functions.
To address these shortcomings of microVM memory adjustment, the present application provides a memory hot plug control method and an electronic device for a serverless secure container.
The application proposes an ideal memory hot plug design for secure containers in serverless computing, with the following properties:
1) All memory pages used by a container process are located in a centralized removable memory region, ensuring that all of the container's unallocated pages are also contained in that region. When a hot plug operation is executed, the region captures all of the container's unallocated pages, so the host can conveniently reclaim the memory.
2) Pages in a memory region are exclusively allocated to a single container process, and unallocated pages associated with different container processes are likewise kept apart. This exclusive mapping prevents memory migration during hot plug operations and ensures that a memory region has the same lifecycle as the container assigned to it.
To this end, we propose CZone, a dedicated memory hot plug design that supports the dynamic MCPV model in the Guest operating system. On top of CZone, we design RunD-V, a hybrid secure-container memory scaling system that provides both horizontal and vertical resource scaling.
The principle and implementation of the memory hot plug control method and the electronic device for a serverless secure container of this embodiment are described in detail below, so that those skilled in the art can understand them without creative effort.
The application provides a memory hot plug control method for a serverless secure container. Fig. 3 is a schematic overall flowchart of the method in an embodiment of the application; as shown in fig. 3, the method comprises the following steps:
Step S100: when a memory allocation request initiated by a container process is received, allocate memory from the movable zone based on CZone;
Step S200: when a function container allocation request is received, perform vertical container scaling based on a RunD-V call to CZone.
The application comprises a CZone-based memory structure in the Guest kernel and a CZone-based hybrid secure-container memory scaling system, RunD-V. It aims to design a secure-container memory hot plug technique and system software for serverless computing scenarios.
Step S100 and step S200 of the memory hot plug control method for the serverless secure container of this embodiment are described in detail below.
In step S100, when a memory allocation request initiated by a container process is received, memory is allocated from the movable zone based on CZone.
To achieve efficient memory allocation, Linux organizes memory into several zones. Fig. 4 is a schematic diagram of the structure of CZone in the kernel for the memory hot plug control method of the serverless secure container according to one embodiment of the present application. As shown in fig. 4, these zones are arranged in sequence, and zones with lower indexes are more general-purpose. When allocating memory, the operating system assigns each request a zone index indicating the preferred zone in which the memory should reside. If the current zone cannot satisfy the request, Linux falls back to zones with lower indexes. By dividing memory into different zones and allocating preferentially according to the zone index, Linux can optimize memory use and ensure that key system functions, in particular those involving hardware interaction, can access the most suitable memory. This approach improves system performance and responsiveness in various computing environments.
CZone follows the same principle and extends the design of the Linux Movable Zone. The Movable Zone is a specialized memory zone optimized for memory compaction, defragmentation and migration tasks; it promotes efficient memory management by enabling dynamic relocation and swapping of memory, improving system performance and resource utilization. Pages in the Movable Zone are guaranteed to be movable, unlike the Normal Zone, which also holds kernel allocations. Although the Movable Zone has relatively few uses and its index is higher than that of the Normal Zone, it can still satisfy conventional memory allocation requests from user-space processes.
Fig. 5 is a flowchart of allocating memory from the movable zone based on CZone in the memory hot plug control method for a serverless secure container according to an embodiment of the application. As shown in fig. 5, in an embodiment of the application, allocating memory from the movable zone based on CZone includes:
Step S110: configure CZone as a proxy for the movable zone;
Step S120: when a memory allocation request initiated by a container process is received, proxy the memory allocation request to CZone;
Step S130: detect whether the CZone satisfies the memory allocation request, and allocate memory from the movable zone when the CZone cannot satisfy it.
CZone can be viewed as an adaptation of the Movable Zone with a higher zone index. Multiple CZones are organized into an array, as shown in fig. 4. Like the Movable Zone, CZones are compiled into the kernel, and their memory is allocated and released dynamically. A CZone acts as a proxy for the Movable Zone: assuming a container process is assigned to one CZone, when it requests memory from the Movable Zone, the request is first proxied to that CZone. If the CZone cannot satisfy the request, the system attempts to allocate memory directly from the Movable Zone, without considering any other CZone.
When a user-space program wants to create a new container process, it typically issues a fork() system call to the kernel. fork() replicates the attributes of the calling process, creating an identical copy called a child process. Although the parent and child keep separate memory spaces, they initially share the same memory contents because of the demand-paging mechanism. Thus, to confine the memory of a particular process to a specified CZone, all memory allocation requests of that process must be redirected to the corresponding CZone.
In one embodiment of the present application, allocating memory from the movable zone includes configuring a CZone ID field in the Linux task structure, setting it according to the CZone associated with the process, and, when a memory allocation request initiated by a container process is received, allocating memory from the movable zone according to the value of the CZone ID field.
The CZone ID field is inherited by a child process of the container process, which is created through the wrapper system call czone_fork().
In this embodiment, a new integer field, the CZone ID (CID), is introduced into the Linux task structure. For standard processes unrelated to CZone, this field defaults to zero. For processes restricted to a particular CZone, the field is set to the ID of that CZone. By default, the field is inherited by child processes, ensuring that a process cannot escape the boundary of its target CZone by spawning children. A developer can use a simple wrapper system call, czone_fork(), to create a new process restricted to a particular CZone. When invoked, czone_fork() sets the CZone ID field in the child's task structure to the provided CZone ID, effectively constraining it to the specified CZone.
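The tagging behavior can be sketched in user space. The global variable below is only a stand-in for the task-structure field, and the czone_fork() shown is an ordinary userspace wrapper around fork(), not the kernel system call described above; the sketch only illustrates how the CID would ride along to children:

/* Userspace sketch of CID tagging; field and wrapper are illustrative. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for the per-process CZone ID kept in the task structure;
 * here it simply rides along through fork()'s copy-on-write. */
static int czone_id = 0;

/* Wrapper that creates a child pinned to a given CZone. */
static pid_t czone_fork(int cid) {
    pid_t pid = fork();
    if (pid == 0)
        czone_id = cid;   /* in the kernel this would set the child's CID */
    return pid;
}

int main(void) {
    pid_t pid = czone_fork(3);
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {
        /* Every allocation this process (and its own children) makes would
         * be routed to CZone 3 by the buddy allocator. */
        printf("child %d bound to CZone %d\n", getpid(), czone_id);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent %d remains unbound (CZone %d)\n", getpid(), czone_id);
    return 0;
}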
When a process initiates a memory allocation request, the buddy allocator checks the CZone ID stored in the process's task structure. For normal processes not restricted to a CZone, this field is zero and the buddy allocator follows its standard procedure. For processes restricted to a particular CZone, the CZone ID field contains a non-zero value; in this case, the buddy allocator first routes movable-zone allocation requests to the corresponding CZone. If no pages are available in the target CZone, the buddy allocator skips the other CZones and checks the Movable Zone directly. The buddy allocator thus ensures that processes with a non-zero CZone ID preferentially use pages of their target CZone.
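The routing rule can be summarized in a small userspace sketch. The structures and function names below are illustrative, not the real buddy allocator: a zero CZone ID follows the normal path, a non-zero CZone ID is served from its own CZone first and then falls back directly to the Movable Zone, skipping every other CZone.

/* Userspace sketch of the CZone routing rule; not kernel code. */
#include <stdio.h>

#define NR_CZONES 8

struct zone { const char *name; long free_pages; };

static struct zone movable = { "Movable Zone", 4096 };
static struct zone czones[NR_CZONES];   /* indexed above the Movable Zone */

static int zone_alloc(struct zone *z, long pages) {
    if (z->free_pages >= pages) { z->free_pages -= pages; return 1; }
    return 0;
}

/* czone_id comes from the requesting task's CID field (0 = unbound). */
static struct zone *alloc_movable_pages(int czone_id, long pages) {
    if (czone_id != 0 && zone_alloc(&czones[czone_id - 1], pages))
        return &czones[czone_id - 1];   /* served by the owning CZone        */
    if (zone_alloc(&movable, pages))    /* other CZones are never considered */
        return &movable;
    return NULL;                        /* would continue to lower zones     */
}

int main(void) {
    for (int i = 0; i < NR_CZONES; i++)
        czones[i] = (struct zone){ "CZone", 512 };

    printf("%s\n", alloc_movable_pages(1, 256)->name);  /* fits in CZone 1     */
    printf("%s\n", alloc_movable_pages(1, 400)->name);  /* falls back: Movable */
    printf("%s\n", alloc_movable_pages(0, 100)->name);  /* unbound: Movable    */
    return 0;
}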
In step S200, when a function container allocation request is received, vertical container scaling is performed based on a RunD-V call to CZone.
RunD is currently the best horizontally scaling secure-container runtime in academia. When a CZone-based vertical scaling mechanism is added on top of its horizontal scaling mechanism, a global policy must be designed to get the most out of both scaling modes. One simple approach is a "vertical-first" policy: a single virtual machine is filled before scaling out to the next virtual machine. When fluctuations in function requests do not require frequent container scaling, this approach fully exploits the low memory overhead of vertical scaling and maximizes deployment density.
In one embodiment of the application, vertical container scaling based on the RunD-V call to CZone comprises: when all containers in the microVM are detected to be busy, RunD-V triggers vertical scaling by calling CZone to hot-plug the memory region required by the container; a child process is created from the initial container, CZone forms the CZone ID field and records the container ID and memory region ID in CZone; the function instance management agent calls cgroup to limit CPU and memory usage and attaches the cgroup to the container process; RunD-V locates the lightweight cgroup attached to the microVM and raises its resource limit according to the container's specification; and the RunD-V runtime notifies the agent in the microVM to add the initial container to the dispatch group, completing the vertical scaling.
When multiple container replicas of a function coexist in the same RunD-V sandbox through the CZone mechanism, managing resource contention among the container processes becomes a significant challenge. Besides memory, other resources such as CPU and network also need to be hot plugged and isolated where necessary. Thus, in addition to the resource limits built on the host's lightweight cgroup, Guest cgroup is also used to manage all of the resources of the multiple container processes inside the microVM. The specific steps of vertical scaling of memory and other resources in RunD-V are shown in fig. 6.
First, the RunD-V runtime routes the request to the function instance management agent. When all containers in the microVM are detected as busy, RunD-V triggers vertical scaling by calling CZone to hot-plug the memory region the container requires (step ①). Next, CZone forks the initial container and adds the resulting container ID (CID) and memory region ID (RID) pair to the CID-RID map; the memory region is thereby bound to the container ID and recorded in CZone, while the container remains suspended in the background (step ②). Third, the agent calls Guest cgroup to restrict CPU and memory usage and attaches these limits to the container process (step ③). This separation ensures that resource management inside the Guest can be handled independently, without placing an extra burden on the host cgroup. Meanwhile, RunD-V locates the lightweight cgroup attached to the microVM and raises its resource limit according to the container specification (step ④). Finally, the RunD-V runtime informs the agent in the microVM to add the new container to its dispatch group, completing the vertical scaling (steps ⑤ and ⑥).
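The control flow of these six steps is sketched below. The struct fields, IDs and function names are illustrative stand-ins for the RunD-V runtime, the in-guest agent and the CZone call; they are not the project's real API, only a shape of the sequence just described.

/* Control-flow sketch of the six vertical-scaling steps; names are assumed. */
#include <stdbool.h>
#include <stdio.h>

struct microvm {
    int busy_containers, total_containers;
    int host_cgroup_mem_mb;      /* lightweight cgroup limit on the host */
};

static bool all_containers_busy(const struct microvm *vm) {
    return vm->busy_containers == vm->total_containers;
}

static int scale_up(struct microvm *vm, int container_mem_mb) {
    if (!all_containers_busy(vm))
        return 0;                                   /* reuse an idle container */

    /* (1) Hot-plug the memory region the new container needs via CZone.     */
    int rid = vm->total_containers;                 /* illustrative region ID  */
    /* (2) Fork the initial container and record the (CID, RID) pair in       */
    /*     CZone's CID-RID map; the container waits in the background.        */
    int cid = vm->total_containers + 1;
    printf("CZone map: CID %d -> RID %d\n", cid, rid);
    /* (3) The in-guest agent attaches Guest cgroup CPU/memory limits to the  */
    /*     container process, entirely inside the Guest.                      */
    /* (4) RunD-V raises the microVM's lightweight host cgroup by the         */
    /*     container's specification.                                         */
    vm->host_cgroup_mem_mb += container_mem_mb;
    /* (5)(6) RunD-V tells the in-guest agent to add the container to its     */
    /*        dispatch group; vertical scaling is complete.                   */
    vm->total_containers++;
    return 1;
}

int main(void) {
    struct microvm vm = { .busy_containers = 2, .total_containers = 2,
                          .host_cgroup_mem_mb = 512 };
    if (scale_up(&vm, 256))
        printf("scaled up: %d containers, host cgroup %d MB\n",
               vm.total_containers, vm.host_cgroup_mem_mb);
    return 0;
}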
However, when high-concurrency requests require function containers to be scaled at large scale, using only vertical scaling is not optimal. First, in Linux memory management, mutex locks cause significant degradation of concurrent hot plug within a single virtual machine. Second, the number of vertical scaling units a single virtual machine can support is limited (e.g., 8). When the concurrency demand exceeds this limit, the system must fall back to the higher-overhead horizontal scaling mode. There is therefore a trade-off: vertical scaling provides higher density, but at the expense of some concurrency capability.
Based on the above analysis, RunD-V's scaling principle should follow these criteria:
When the container scaling frequency is low, CZone-based vertical scaling is used primarily, to maximize function deployment density.
When the container scaling frequency is high, the horizontal scaling capability should be used, performing vertical scaling across multiple scaled-out virtual machines.
In one embodiment of the application, when the RunD-V runtime detects that the number of rejected scaling requests reaches a preset value, the creation of a new microVM instance is triggered, i.e., RunD-V performs horizontal scaling.
That is, this embodiment provides a hybrid scaling method that combines the vertical and horizontal scaling mechanisms to achieve high-density deployment and high-concurrency startup. RunD-V maintains a CZone flag in the Guest of each microVM, initially False. A request is allowed to trigger vertical scaling only if the microVM finds the CZone flag to be False. When the microVM scales up through CZone, the flag is set to True. Subsequent concurrent requests check this flag; if it is True, the microVM is currently performing vertical scaling, and concurrent vertical scaling on the same microVM would degrade scaling performance due to the Linux kernel's mutex lock, so further vertical scaling of that virtual machine is abandoned.
When the RunD-V runtime next finds such a scaling request denied, it triggers the creation of a new microVM instance, i.e., RunD-V's horizontal scaling, which expands resources in parallel with the old microVM instance. In this way we overlap horizontal and vertical scaling, balancing high-density deployment against high-concurrency startup capability.
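The hybrid decision can be sketched as a small state machine. The flag, counter and threshold names below are illustrative; the text only specifies a per-guest CZone flag (initially False), rejection of concurrent vertical scaling while it is True, and a fall-back to horizontal scaling once rejected requests reach a preset count. Clearing the flag when a vertical scaling operation finishes is implied but not modeled here.

/* Sketch of the hybrid scaling decision; names and threshold are assumed. */
#include <stdbool.h>
#include <stdio.h>

struct vm_state {
    bool czone_busy;   /* the CZone flag inside the Guest           */
    int  denied;       /* vertical-scaling requests rejected so far */
};

enum action { SCALE_UP, DENY, SCALE_OUT };

static enum action handle_request(struct vm_state *vm, int deny_threshold) {
    if (!vm->czone_busy) {
        vm->czone_busy = true;     /* serialize hot plug (kernel mutex)      */
        return SCALE_UP;           /* vertical scaling inside this microVM   */
    }
    if (++vm->denied >= deny_threshold) {
        vm->denied = 0;
        return SCALE_OUT;          /* create a new microVM in parallel       */
    }
    return DENY;                   /* caller retries or queues               */
}

int main(void) {
    struct vm_state vm = { .czone_busy = false, .denied = 0 };
    const char *names[] = { "SCALE_UP", "DENY", "SCALE_OUT" };
    for (int i = 0; i < 4; i++)    /* four near-simultaneous requests */
        printf("request %d -> %s\n", i, names[handle_request(&vm, 2)]);
    return 0;
}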
In one embodiment of the application, when a container instance is detected to have been idle for a time threshold, vertical or horizontal resources are reclaimed.
Fig. 7 is a schematic diagram of an implementation of the memory hot plug control method for a serverless secure container according to an embodiment of the application. As shown in fig. 7, the logic of the method is implemented as follows (a sketch of the reclamation path follows this list):
1) Request router: a function of the RunD-V function management module, used to receive different function requests and route them to the instance manager of the corresponding function.
2) Function instance manager: a function of the RunD-V function management module, used to manage all container instances created for a given function, including request reuse and the lifecycle of all containers. It can allocate an available container for a function request, and it triggers the resource scaling module to create a new container when no idle available container is found. When the idle time of a managed container instance reaches a threshold, it triggers the resource scaling module to perform vertical or horizontal resource reclamation.
3) Resource scaling manager: a function of the RunD-V resource scaling module. It receives requests from the function management module for horizontal or vertical scaling of the resources requested by a function, correspondingly triggering the creation of a new microVM instance or the creation of a new container based on CZone.
4) Horizontal resource scaling: one of the functions of the RunD-V resource scaling module. It performs horizontal scaling based on the existing serverless resource scaling logic, integrates the CZone kernel capability, and creates a new microVM instance that is available for subsequent vertical scaling.
5) Vertical resource scaling: one of the functions of the RunD-V resource scaling module. It dynamically creates multiple containers within a microVM based on the CZone kernel capability; each container is isolated in its own memory region, and all of them together bear the same Guest OS overhead.
6) Resource reclamation: one of the functions of the RunD-V resource scaling module. According to the reclamation request sent by the function management module, it correspondingly triggers horizontal resource reclamation (microVM deletion) or vertical resource reclamation (deleting a container in the microVM and hot-unplugging its memory device).
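The idle-timeout reclamation path referenced above can be sketched as follows. The function and field names are placeholders, and deleting the microVM only when its last container has gone idle is an assumed policy for illustration; the text only states that vertical reclamation removes a container and hot-unplugs its memory device, while horizontal reclamation deletes a microVM.

/* Sketch of idle-timeout reclamation; policy details are assumptions. */
#include <stdio.h>

struct container { int id; double idle_seconds; };

struct microvm {
    struct container c[8];
    int count;
};

/* Reclaim whatever has been idle past `threshold`. */
static void reclaim(struct microvm *vm, double threshold) {
    for (int i = 0; i < vm->count; ) {
        if (vm->c[i].idle_seconds >= threshold) {
            printf("vertical reclaim: remove container %d, hot-unplug its memory device\n",
                   vm->c[i].id);
            vm->c[i] = vm->c[--vm->count];   /* drop the container */
        } else {
            i++;
        }
    }
    if (vm->count == 0)                      /* assumed policy, see lead-in */
        printf("horizontal reclaim: delete the now-empty microVM\n");
}

int main(void) {
    struct microvm vm = { .c = { {1, 610.0}, {2, 45.0} }, .count = 2 };
    reclaim(&vm, 600.0);                     /* 600 s threshold, an assumed value */
    return 0;
}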
The application, when practiced, comprises:
1) User interaction with the software system layer:
a) Since the function management module and the resource scaling module are transparent to the user, the user does not need to know any hardware information or participate in any memory-management process, so no user involvement is required in this part.
b) The software system layer observes and records the user's access behavior and builds memory regions according to the memory device information and the function container processes.
2) Interaction of the software system layer with the Guest kernel:
a) The resource scaling module can specify the ID of the container to be offloaded from the microVM, directing the CZone-based Guest kernel to hot-unplug the corresponding memory.
The protection scope of the memory hot plug control method for a serverless secure container according to the embodiments of the present application is not limited to the order of execution of the steps listed in the embodiments; all schemes implemented by adding, removing or replacing steps according to the prior art in accordance with the principles of the present application are included in the protection scope of the present application.
The embodiments of the application also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the memory hot plug control method for a serverless secure container provided by any embodiment of the application.
Any combination of one or more storage media may be employed in embodiments of the present application. The storage medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The embodiments of the application also provide an electronic device. Fig. 8 is a schematic structural diagram of an electronic device 100 according to an embodiment of the application. In some embodiments, the electronic device may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like. In addition, the memory hot plug control method for a serverless secure container provided by the application can be applied to databases, servers, and service response systems based on terminal artificial intelligence. The embodiments of the application do not limit the specific application scenario of the method.
As shown in fig. 8, an electronic device 100 provided in an embodiment of the present application includes a memory 101 and a processor 102.
The memory 101 is used for storing a computer program; the memory 101 preferably includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, a USB flash drive, a memory card, or an optical disc.
In particular, memory 101 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory. Electronic device 100 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. Memory 101 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the application.
The processor 102 is connected to the memory 101 and is configured to execute the computer program stored in the memory 101, so that the electronic device 100 performs the memory hot plug control method for a serverless secure container provided in any embodiment of the present application.
Alternatively, the processor 102 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; a digital signal processor (DSP); an application-specific integrated circuit (ASIC); a field-programmable gate array (FPGA) or other programmable logic device; a discrete gate or transistor logic device; or discrete hardware components.
Optionally, the electronic device 100 in this embodiment may further include a display 103. The display 103 is communicatively connected to the memory 101 and the processor 102 and is used to display a GUI for interacting with the memory hot plug control method for the serverless secure container.
In summary, the present application provides an effective memory hot plug technique that minimizes performance degradation during dynamic memory allocation and enables multiple replica containers of a function to reduce extra VM memory overhead through vertical scaling within a single VM, accelerating startup at runtime. This provides technical support for applying serverless computing to high-density deployment and high-concurrency startup, enables the construction of a commercially viable serverless computing system based on hybrid resource scaling, and offers cloud providers an efficient container scaling service. The embodiments therefore effectively overcome various defects of the prior art and have high industrial value.
The above embodiments merely illustrate the principles and effects of the application and are not intended to limit it. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and changes made by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the application.