Disclosure of Invention
In view of the above, the present disclosure provides a method and system for deploying a specified application based on a host operating system, so as to solve the above problems.
To achieve this object, according to a first aspect of the present disclosure, an embodiment of the present disclosure provides a method for deploying a specified application based on a host operating system, including:
constructing a single application space, and loading code related to the specified application into the single application space; constructing an operating system of the single application and associating the operating system with the single application space;
and switching into the operating system of the single application, and executing the code related to the specified application in the operating system to start a corresponding process, wherein the corresponding process is used for completing the functions of the specified application and checking resource access requests from the specified application.
Optionally, the corresponding process includes an application process and a proxy process, the application process is configured to complete a function of the specified application, and the proxy process is configured to provide a resource access control policy and check a resource access request from the application process based on the resource access control policy.
Optionally, the single application space is partitioned into a high privilege level space and a low privilege level space, the application process can only access the low privilege level space, and the proxy process can access both the high privilege level space and the low privilege level space.
Optionally, from the perspective of the application process, the low privilege level space has a start address of 0 and the high privilege level space is not visible.
Optionally, the method further comprises: deploying and starting a shadow process based on the host operating system, wherein the shadow process establishes, based on the resource access request, address mapping data consistent with the view of the proxy process.
Optionally, the application process sends the resource access request to the proxy process, the proxy process sends the resource access request to the shadow process through the single-application operating system and the host operating system, the shadow process checks the resource access request and sends the resource access request to the host operating system, and the host operating system applies for the resource accordingly and sends the resource handle to the shadow process, so that the shadow process records address mapping data.
Optionally, the operating system of the single application is limited to a virtual machine that runs only processes related to the specified application.
Optionally, the application process calls a file system function provided by the host operating system to complete read-write operations on a file.
Optionally, the application process calls a virtual storage access interface provided by the proxy process, and the proxy process completes read-write operation on the file through a virtualization technology.
Optionally, the host operating system is Linux.
According to a second aspect of the present disclosure, an embodiment of the present disclosure provides a system, including:
a host operating system;
a single-application operating system;
a process, executed in the single-application operating system, related to a specified application,
wherein the host operating system, when started, constructs a single application space, loads code related to the specified application into the single application space, constructs the operating system of the single application associated with the single application space, and then switches into the operating system of the single application to execute the code related to the specified application so as to start the process related to the specified application.
Optionally, the operating system of the single application is limited to a virtual machine that only runs the process related to the specified application, and the host operating system is Linux.
According to a third aspect of the present disclosure, an embodiment of the present disclosure provides a server, including a memory and a processor, the memory further storing computer instructions executable by the processor, the computer instructions, when executed, implementing any of the above methods.
According to a fourth aspect of the present disclosure, embodiments of the present disclosure provide a computer readable medium storing computer instructions executable by an electronic device, the computer instructions, when executed, implementing the above-mentioned method.
According to a fifth aspect of the present disclosure, an embodiment of the present disclosure provides a processor, where the processor includes a plurality of processor cores, each processor core is in a kernel state or a user state independently of other processor cores, each processor core executes a host operating system in the kernel state and allocates a single application space for a specified application, any processor core is selected as a processor core executed by the single application space, and a code related to the specified application is stored in the single application space, and the host operating system switches to the user state on the selected processor core and then enters the single application space to execute the code therein, so as to start an operating system of the single application and a process related to the specified application.
The embodiments of the disclosure separate the access policy control for computer resources from the host operating system and realize it in an independent agent process, thereby logically distinguishing access control from resource management. An application agent can be deployed on the host operating system, and a single-application operating system can be established on the host operating system; if a specified application deployed in one single-application operating system is migrated to another single-application operating system, hot migration can be achieved and migration efficiency is higher.
The access policy control of multiple applications for computer resources can be distributed across different agent processes, so that an attack on one agent process does not affect the other agent processes; security is therefore better than with container technology.
The operating system of a single application is generally a lightweight operating system and may be, for example, a lightweight virtual machine; specifically, it may be constructed with Linux KVM but reduced in functionality and limited so that only processes related to the specified application can be executed in it.
Detailed Description
The present disclosure is described below based on examples, but it is not limited to these examples. In the following detailed description, some specific details are set forth. It will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. Well-known methods and procedures have not been described in detail so as not to obscure the present disclosure. The figures are not necessarily drawn to scale.
The processor component mentioned in at least one embodiment below may be a processor with a single processor core, may be a processor including a plurality of processor cores, and may further be a combination of one or more of a central processing unit, a digital processing unit, a special-purpose processor (e.g., various acceleration units for executing neural network models), and the like. The particular class of processor component is not material to the embodiments described herein, and thus in the embodiments described below the processor component may be treated substantially as a whole.
The processing devices referred to hereinafter may be any of a variety of types of computer systems, including but not limited to desktops, servers, notebooks, and workstations, as well as various embedded products, including but not limited to cellular phones, internet protocol devices, digital cameras, personal digital assistants (PDAs), handheld PCs, network computers (NetPCs), set-top boxes, network hubs, and wide area network (WAN) switches. The processing device carries a control system implemented in software and hardware; the control system may be partially integrated in the processing device and partially deployed on the processing device by installation.
For a server, the operating system installed thereon is usually in a multi-user management mode and includes at least two users: a general-authority user and a highest-authority user. An operating system in the multi-user management mode logically divides the addressing space of the memory into a kernel space and at least one user space in software. For example, in the case of a 32-bit Linux operating system, the highest 1 gigabyte (from virtual address 0xC0000000 to 0xFFFFFFFF) is used as the kernel space, and the lower 3 gigabytes (from virtual address 0x00000000 to 0xBFFFFFFF) are used as the user space. The kernel space stores the code and data of the operating system, and the user space stores the code and data of user programs. The operating mode of a processor can be divided into a kernel mode and a user mode. The processor has more rights in kernel mode than in user mode; for example, a processor working in kernel mode may access all data and instructions in both kernel space and user space, and may access peripheral devices, including hard disks and network cards, via device drivers. In user mode, the processor can only access data and instructions in its own user space. While operating in user space, the processor may switch from user space to kernel space, i.e., from the user state to the kernel state, via system calls, exceptions, and interrupts from peripheral devices.
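The 3G/1G split described above can be sketched as follows (an illustrative sketch only, not part of the disclosure; the constant name `PAGE_OFFSET` follows Linux convention):

```python
# Illustrative sketch of the classic 32-bit Linux 3G/1G split described
# above: addresses at or above PAGE_OFFSET belong to kernel space.
PAGE_OFFSET = 0xC0000000  # kernel space: 0xC0000000..0xFFFFFFFF


def address_region(vaddr: int) -> str:
    """Classify a 32-bit virtual address as 'kernel' or 'user'."""
    if not 0 <= vaddr <= 0xFFFFFFFF:
        raise ValueError("not a 32-bit virtual address")
    return "kernel" if vaddr >= PAGE_OFFSET else "user"
```

A user-mode process would only ever address the lower region; any attempt to touch the kernel region from user mode would fault rather than classify.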
Data center
A data center is a globally collaborative network of devices used to transmit, accelerate, display, compute, and store data information over an internet network infrastructure. In future development, data centers will become an asset over which enterprises compete. In a conventional large data center, the network structure is generally the three-layer structure shown in fig. 1, i.e., the hierarchical inter-networking model. This model contains the following three layers:
Access Layer (Access Layer) 103: sometimes referred to as the edge layer, includes access switches 130 and the servers 140 connected to them. Each server 140 is a processing and storage entity of the data center; the processing and storage of large amounts of data are performed by the servers 140. An access switch 130 is a switch used to connect these servers to the data center. One access switch 130 accesses multiple servers 140. The access switches 130 are typically located at the top of the rack, so they are also called Top-of-Rack switches; they physically connect the servers.
Aggregation Layer (Aggregation Layer) 102: sometimes referred to as the distribution layer, includes aggregation switches 120. Each aggregation switch 120 connects multiple access switches while providing other services such as firewalls, intrusion detection, network analysis, and the like.
Core Layer (Core Layer) 101: includes core switches 110. Core switches 110 provide high-speed forwarding of packets to and from the data center and connectivity for multiple aggregation layers. The entire data center network is divided into an L3 routing network and an L2 routing network, and the core switch 110 provides a flexible L3 routing network for the entire data center network.
Typically, the aggregation switch 120 is the demarcation point between the L2 and L3 routing networks, with L2 below and L3 above the aggregation switch 120. Each group of aggregation switches manages a Point of Delivery (POD), within each of which is a separate VLAN network. Server migration within a POD does not require modifying IP addresses or default gateways, because one POD corresponds to one L2 broadcast domain.
A Spanning Tree Protocol (STP) is typically used between the aggregation switches 120 and the access switches 130. STP makes only one aggregation layer switch 120 available for a VLAN network; the other aggregation layer switches 120 are used only in the event of a failure (dashed lines in the figure above). That is, at the aggregation level there is no horizontal scaling, since even if multiple aggregation switches 120 are added, only one is working at any time.
FIG. 2 illustrates the physical connections of the components in the hierarchical data center of FIG. 1. As shown in fig. 2, one core switch 110 connects to multiple aggregation switches 120, one aggregation switch 120 connects to multiple access switches 130, and one access switch 130 accesses multiple servers 140. The server 140 is the actual computing device of the data center.
The server 140 is a processing device that performs computing tasks based on software and hardware cooperation. As shown in fig. 3, the server 140 includes a storage device 201 and a processing system 300. The processing system 300 includes a storage controller 301, an I/O controller 303, a processor component 304, and a storage device 306 coupled via an interconnect unit 302. In some embodiments, the processing system 300 may be considered a chip package, i.e., a system on a chip.
The storage controller 301 is coupled to the external storage device 201, and read and write operations to the storage device 201 are controlled through the storage controller 301. In some embodiments, the storage controller 301 and the storage device 201 are integrated into one physical element. The storage device 306 and the storage device 201 may be implemented based on any of a wide variety of information storage technologies. Generally, the storage device 306 has higher access efficiency than the storage device 201, but the storage device 201 has a larger storage capacity than the storage device 306. In some embodiments, the storage device 306 is, for example, read-only memory (ROM), random access memory (RAM), dynamic RAM (DRAM), double data rate DRAM (DDR DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or the like, and the storage device 201 is, for example, a plurality of ferromagnetic disk drives organized as a redundant array of independent disks (RAID). It should be noted that although the storage devices are shown as separate elements in the figures, they may comprise multiple elements based on the same or different storage technologies.
The processor component 304 may include one or more processor cores, which may include, for example, higher-speed, higher-complexity high-power cores and lower-speed, lower-complexity low-power cores. These processor cores have the same or different instruction sets, by which they decode and execute program instructions contained within the user space 3051 or the kernel space 3052.
The I/O controller 303 is coupled to the I/O device 202. The I/O device 202 may be, for example, a display, a keyboard, a network device, etc., and may be operated via the I/O controller 303. In some embodiments, the I/O controller 303 is integrated with the I/O device 202 as one physical element.
The storage device 201 is generally used to store code and data for various applications. The storage device 306 is used for storing core code and data, such as code and data of an operating system and code and data of a driver, and providing process running space required for running various programs (including an operating system, a driver and an application).
When the server is powered on and started, the process running space of the storage device 306 is divided by the operating system into a kernel space 3052 and one or more user spaces 3051. The user space 3051 is the runtime space of user programs, and the kernel space 3052 is the runtime space of the operating system and system programs.
In the kernel space, a plurality of entries related to the operation of the operating system are stored, such as a translation table 3053, a user authority table 3054, a block buffer 3055, an interrupt table 3056, and a page table 3057. The page table 3057 associates the virtual addresses of pages with the physical addresses of pages. The block buffer 3055 is used to buffer data from the storage device 201 in the form of blocks and can provide the retrieved data in the form of pages. The translation table 3053 associates the virtual address of a page with the identifiers of one or more data blocks in the storage device 201 that contain the contents of the page. The user authority table 3054 is a table relating a plurality of users to their authorities. Meanwhile, the kernel space 3052 provides a running environment for the operating system 340, which is composed of a mode controller 3401, a storage driver 3402, an I/O driver 3403, and the like. The mode controller 3401, the storage driver 3402, and the I/O driver 3403 may have respective independent operating environments in the kernel space 3052. Each runtime environment may store some code to be executed, some input data, as well as some intermediate operational data and some final result data. The operating system 340 may be one of a currently common WINDOWS™ operating system version, a UNIX operating system, a Linux operating system, an Android operating system, and a real-time operating system (RTOS).
The user space 3051 is used to provide execution environments of various application processes, and the execution environment of each application program may store some code to be executed including the application program, some input data, some intermediate operation results, and some final result data.
The processor component 304 can read the code of the various applications from the user space 3051 and execute it instruction by instruction. The processor component 304 may likewise read the code of the operating system 340 from the kernel space 3052 and execute it. Here, the processor state in which code in the kernel space 3052 is executed is referred to as the kernel state, and the processor state in which code in the user space 3051 is executed is referred to as the user state. Since the processor component 304 may include one or more processor cores, when it includes multiple processor cores there are three situations as to what state the processor is in: all processor cores are in the kernel state; all processor cores are in the user state; or some processor cores are in the user state while the others are in the kernel state.
The processor core has more rights in kernel mode than in user mode; for example, a processor core operating in kernel mode may access all data and instructions in both kernel space and user space, and may access peripheral devices, including hard disks and network cards, via device drivers. A processor core executing in user mode can only access data and instructions in its own user space. However, while working in user space, the processor core can switch from user space to kernel space, i.e., from the user state to the kernel state, through system calls, exceptions, and interrupts from peripheral devices. Furthermore, it should be understood that in a multi-user operating system, a processor core in the user state of a given user can only access the resources of that user; for example, after being switched to the user state of USER1, the processor core can only access resources to which USER1 has authority.
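The user-to-kernel transition described above can be modeled as a minimal sketch (illustrative only; the `Core` class and its fields are hypothetical stand-ins, not part of the disclosure):

```python
# A toy model of a processor core switching from the user state to the
# kernel state on a system call, and back to the user state on return.
class Core:
    def __init__(self):
        self.mode = "user"          # the core starts out executing user code

    def syscall(self, handler):
        self.mode = "kernel"        # trap: user state -> kernel state
        try:
            return handler(self)    # kernel-space code runs with full rights
        finally:
            self.mode = "user"      # return from the syscall: back to user state


core = Core()
modes = []
core.syscall(lambda c: modes.append(c.mode))  # observe mode inside the call
modes.append(core.mode)                       # observe mode after returning
```

The point of the sketch is simply that the privileged mode exists only for the duration of the system call; the core never remains in the kernel state once control returns to user code.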
Although not shown, as an alternative deployment, a plurality of virtual machines may be deployed on the operating system. If the virtual machines are also in a multi-user mode, their storage space may likewise be divided into a kernel space and a user space, so that application programs are deployed in the user space inside the virtual machine while the kernel space runs the virtual machine's kernel components. By way of example, KVM (Kernel-based Virtual Machine) is a complete virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). It consists of the loadable kernel module kvm.ko, which provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. Using KVM, multiple virtual machines running unmodified Linux or Windows images can be run. Each virtual machine has dedicated virtual hardware: network cards, disks, graphics adapters, and the like.
FIG. 4 is an exemplary block diagram of a single application environment provided by embodiments of the present disclosure. The single-application environment 200 is located above the software kernel layer shown in the figure. Multiple single-application environments 200 may be built on top of the software kernel layer; each single-application environment 200 is an execution environment built for one specific application. The single-application environment 200 includes a shadow process 201 and a single-application operating system 202. An application process 2021 and a proxy process 2022 execute in the operating system 202 of the single application. The application process 2021 is the execution process corresponding to the specified application. The proxy process 2022 is a process that isolates the application process 2021 from the software kernel layer 122 and provides access policies and access control over the application process 2021's access to system resources, including I/O, storage, and network. Specifically, the proxy process 2022 sets an access policy for the specific application; after receiving an access request from the application process 2021, it performs a permission check according to the access policy, sends requests that pass the check to the software kernel layer 122, and obtains the processing result from the software kernel layer 122.
In some embodiments, the software kernel layer 122 includes a host operating system and virtual machine software (or only a host operating system). The software kernel layer 122 processes the access request of the application process 2021, for example, if the access request is to request to access a certain IO device, the proxy process 2022 sends the request to the software kernel layer 122 after the permission check is passed, the software kernel layer 122 returns the operation handle of the IO device to the proxy process 2022, the proxy process 2022 returns the operation handle to the application process 2021, for example, if the access request is to apply for a memory space, the proxy process 2022 sends the request to the software kernel layer 122 after the permission check is passed, and the software kernel layer 122 provides the access address of the specific memory space to the application process 2021 and the proxy process 2022 through the memory management system.
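The proxy's role as policy holder and gatekeeper can be sketched as follows (a minimal sketch with hypothetical names: `KernelLayer`, `ProxyProcess`, `open_resource`, and the handle format are all illustrative assumptions, not the disclosure's API):

```python
# Illustrative sketch: the proxy process holds the access policy, checks
# each request from the application process, and only forwards permitted
# requests to the software kernel layer, returning the resulting handle.
class KernelLayer:
    def open_resource(self, resource_id):
        return f"handle:{resource_id}"   # stand-in for a real operation handle


class ProxyProcess:
    def __init__(self, policy, kernel):
        self.policy = policy             # set of resources the app may access
        self.kernel = kernel

    def request(self, resource_id):
        if resource_id not in self.policy:   # permission check per the policy
            raise PermissionError(resource_id)
        return self.kernel.open_resource(resource_id)


proxy = ProxyProcess({"io:display"}, KernelLayer())
```

A denied request never reaches the kernel layer at all, which is the isolation property the paragraph above describes.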
As a specific implementation, the application process 2021 may first execute a function call that takes the identifier of the resource to be accessed as a parameter; program execution jumps to the proxy process 2022 through this call (step 1 in the figure). The proxy process 2022 executes a permission checking module, which judges, based on a predetermined access policy, whether the application process has access permission to the resource to be accessed; if so, program execution jumps into the host operating system through a system call provided by the host operating system (step 2 in the figure). The host operating system applies for the resource on behalf of the application process 2021 and returns the handle and rights for the resource access to the proxy process 2022 (step 7 in the figure), and the proxy process 2022 then provides the handle and rights to the application process 2021 (step 8 in the figure). That is, steps 1, 2, 7, and 8 in the figure constitute the closed-loop processing logic of this resource access.
As another specific implementation, the application process 2021 may first execute an agent kernel call that takes the identifier of the resource to be accessed as a parameter; program execution jumps to the agent process 2022 through this call (step 1 in the figure). The agent process 2022 executes a permission checking module, which judges, based on a predetermined access policy, whether the application process has access permission to the resource to be accessed; if so, program execution jumps into the host operating system through a system call provided by the host operating system (step 2 in the figure). The host operating system sends the request to the shadow process 201 (step 3 in the figure), the shadow process 201 processes the access request and resubmits it to the host operating system (step 4), the host operating system applies for the resource and provides the resource address to the shadow process (step 5), the shadow process 201 records the resource usage information and passes it to the host operating system (step 6), the host operating system provides the handle and rights of the resource to the agent process 2022 (step 7), and the agent process 2022 then provides them to the application process 2021 (step 8). That is, steps 1-8 in the figure constitute the closed-loop processing logic of this resource access.
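The eight-step closed loop above can be walked through in a toy model (all function names and the handle/mapping formats are hypothetical stand-ins for the roles in the figure):

```python
# Toy walk-through of the closed loop: application -> proxy -> host ->
# shadow -> host -> shadow -> host -> proxy -> application.
def host_apply(resource):                        # step 5: host applies for the resource
    return f"handle:{resource}"


def shadow_process(resource, mappings):          # steps 3-6
    handle = host_apply(resource)                # step 4: resubmit request to the host
    mappings[resource] = handle                  # step 6: record address mapping data
    return handle                                # passed back via the host (step 7)


def proxy_process(resource, policy, mappings):   # steps 1-2
    if resource not in policy:                   # permission check in the proxy
        raise PermissionError(resource)
    return shadow_process(resource, mappings)    # forwarded via the host OS


def application_process(resource, policy, mappings):
    return proxy_process(resource, policy, mappings)  # step 8: handle returned


mappings = {}
handle = application_process("mem:block0", {"mem:block0"}, mappings)
```

Note that the mapping record ends up held by the shadow side, matching the statement that the shadow process maintains address mapping data consistent with the proxy's view.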
The shadow process is added because, when program execution needs to jump out of the proxy process into the host operating system, the processor has no way to jump directly from the single-application operating system in the user mode to the host operating system in the kernel mode; the switch therefore has to go through the shadow process. Meanwhile, the existence of the shadow process also makes the host operating system regard all resource requests as coming from the shadow process, so the single-application operating system is transparent to the host operating system. Fig. 5 is a schematic block diagram of a process space provided by an embodiment of the present disclosure. A single application space can be viewed as a process space from the perspective of the single-application operating system. The global space can be viewed as a process space from the perspective of the host operating system. The shadow space can be viewed as a process space from the perspective of the shadow process. As shown in the figure, in the view of the host operating system, the process space of the application process 2021 and the process space of the proxy process 2022 are both part of the memory space managed by the host operating system, and the host operating system can perform any operation (including reading and writing) on them; the shadow space is an address space corresponding one-to-one to the single application space, and any operation on the shadow space by the shadow process 201 is applied to the single application space.
In some embodiments, the host operating system is responsible for maintaining address mapping data between the global space and the plurality of single application spaces, for example through a global address table, which may include process names, process IDs, and start and stop addresses of the application processes. Each agent process 2022 maintains its own local address table, and each shadow process also maintains the local address table of its corresponding agent process 2022 (shadow processes and agent processes are in one-to-one correspondence). The local address table of an agent process 2022 may include code start-stop addresses and data start-stop addresses. For example, in the global address table, the start address of the single application space of application A is addr1 and the size is 1M; in the local address table of application A, the data start address of the corresponding proxy process is dataAddr1 with a size of 1M, its code start address is codeAddr1 with a size of 1M, the data start address of the application process is dataAddr2 with a size of 1M, and its code start address is codeAddr2 with a size of 2M.
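The bookkeeping above can be sketched with two table shapes (all field names, addresses, and sizes are illustrative assumptions, not values from the disclosure):

```python
# Sketch of the address-mapping bookkeeping: the host keeps a global
# address table, while each proxy (and its shadow) keeps a local table.
from dataclasses import dataclass

MB = 1 << 20


@dataclass
class GlobalEntry:              # one row of the host's global address table
    process_name: str
    process_id: int
    start: int                  # start address of the single application space
    size: int


@dataclass
class LocalTable:               # per-proxy (and per-shadow) local table
    code_start: int
    code_size: int
    data_start: int
    data_size: int


global_table = [GlobalEntry("applicationA", 1001, 0x40000000, 1 * MB)]
local_table = LocalTable(code_start=0x40000000, code_size=1 * MB,
                         data_start=0x40100000, data_size=1 * MB)
```

Keeping the shadow's copy identical to the proxy's local table is what lets the shadow process reproduce the proxy's view of the single application space.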
In other embodiments, the host operating system has recorded in its global address table address mapping data for all applications, including the start and stop address of data and the start and stop address of code for each application, and each proxy process maintains a copy of the data associated with itself, with each shadow process also maintaining one such copy.
In some embodiments, as shown, the process space of the proxy process 2022 in the single application space is a high privilege level space, and the process space of the application process 2021 is a low privilege level space addressed from address 0. The process space of the proxy process 2022 therefore does not exist from the perspective of the application process 2021, which prevents the application process 2021 from accessing it and thereby achieves isolation between the application process and the proxy process. If a Linux or UNIX system is used, the high privilege level space may correspond to the kernel space and the low privilege level space to the user space, i.e., the process space of the proxy process 2022 belongs to the kernel space and the process space of the application process 2021 belongs to the user space; accordingly, the processor executes the code of the proxy process 2022 in kernel mode and the code of the application process 2021 in user mode.
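The application-side address view can be sketched as follows (illustrative only; the space size and base address are assumed values, and the translation function is a hypothetical stand-in for the real mapping):

```python
# Sketch of the address view: the application process sees the low
# privilege level space starting at address 0; the high privilege level
# space (the proxy's) is simply not addressable from this view.
LOW_SPACE_SIZE = 1 << 20        # assumed size of the low privilege level space


def app_to_space_addr(app_addr, low_base):
    """Translate an application-view address (first address 0) into the
    single application space; anything outside the low space is invisible."""
    if not 0 <= app_addr < LOW_SPACE_SIZE:
        raise MemoryError("outside the low privilege level space")
    return low_base + app_addr
```

Because every reachable application address maps inside the low space, the proxy's high privilege level space cannot even be named from the application side, which is the isolation property described above.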
This embodiment separates the access policy control for computer resources from the software kernel layer and realizes it in an independent agent process, logically distinguishing access control from resource management; compared with virtual machine technology, a single-application operating system is lighter in weight and saves more resources. Meanwhile, because the access policy control of multiple applications for computer resources is distributed across different agent processes, an attack on one agent process does not affect the others, so the security is better than with container technology.
Fig. 6 and 7 are exemplary block diagrams of two IO options applicable to embodiments of the present disclosure. As shown in fig. 6, in one implementation the single-application operating system 202 includes a file system component (in the case of a Linux system, Sandboxed Host Linux FS) to support the application process 2021 invoking system-level read-write operations (syscalls) on a file, which are directly forwarded to the host, thereby completing the read-write operations on the file. In another implementation, as shown in fig. 7, the application process 2021 may be presented with a separate file system, or a storage API that conforms to some standard (Virtual FS/Storage API). In this case, reads and writes by the application process 2021 on the file are completed by the agent process 2022 based on virtualization technology.
These two flexible IO mechanisms correspond to choosing either a host-based file system or a virtual-device-based IO implementation.
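The choice between the two IO paths can be sketched as a single dispatch point in the agent. The class, mode names, and in-memory backing store below are illustrative assumptions, not API from the disclosure; the point is only that host-FS mode forwards the file operation to the host unchanged, while virtual-FS mode is serviced entirely by the agent.

```python
# Illustrative sketch of the two IO options: host-FS forwarding vs. a
# virtual FS serviced by the agent process. Names are hypothetical.
HOST_FS_MODE, VIRTUAL_FS_MODE = "host", "virtual"

class AgentIO:
    def __init__(self, mode: str):
        self.mode = mode
        self.vfs = {}                      # virtual-FS backing store

    def read(self, path: str) -> bytes:
        if self.mode == HOST_FS_MODE:
            with open(path, "rb") as f:    # forwarded to the host file system
                return f.read()
        return self.vfs[path]              # serviced entirely by the agent

    def write(self, path: str, data: bytes) -> None:
        if self.mode == HOST_FS_MODE:
            with open(path, "wb") as f:    # forwarded to the host file system
                f.write(data)
        else:
            self.vfs[path] = data          # never reaches the host

io = AgentIO(VIRTUAL_FS_MODE)
io.write("/app/cfg", b"x=1")
print(io.read("/app/cfg"))
```

In virtual-FS mode the host never sees the application's file operations at all, which is the stronger isolation of the fig. 7 option; the fig. 6 option trades that isolation for direct host-file access.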
FIG. 8 is a flow diagram of the deployment and start of the various processes provided according to an embodiment of the disclosure. As shown in the figure, step S01 constructs a single application space for a specified application and loads the code associated with the application into the single application space. In general, when a Linux operating system starts, the dynamic library file ld.so is loaded and executed first, and during its execution a series of operations such as device driving and memory management are performed, including constructing a single application space for the specified application and loading the code related to the application into the single application space. The application-related code includes the code of the agent process described above.
Step S02 constructs the operating system of the single application and associates it with the single application space. For example, during execution of the dynamic library file ld.so, a container loader is called to construct the single-application operating system, which may be understood as a virtual machine or a lightweight virtual machine configured to operate the single application space, i.e., to read, write, and execute the code in that space. In an alternative embodiment, step S01 stores the code of the operating system of the single application in the single application space, and the container loader loads that code from the single application space to construct the operating system of the single application.
Step S03 switches into the operating system of the single application and starts the agent process by executing the code of the agent process. When the processor switches into the single-application operating system, only the single application space can be read, written, and executed, so the single application space is isolated from the host operating system, and the agent process is started within the single-application operating system.
Step S04 has the agent process execute the code of the application process to start the application process. The application process is used to complete the functions of the specified application. The application process can access external resources only through the agent process, which is responsible for auditing and converting the resource access requests issued by the application process and providing the audited requests to the operating system of the single application and to the host operating system. Note that in this step the code of the application process may be loaded into the single application space by the agent process just before it is executed, or it may have been loaded together with the agent process code when step S01 loads code into the single application space.
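The audit-and-convert step of the agent can be sketched as follows. The policy table, request tuples, and the tagging "conversion" are assumptions made for illustration; the disclosure only requires that the agent check each request against a resource access control policy before passing it on.

```python
# Hedged sketch of the agent process auditing application resource requests
# against a resource access control policy. Policy format is hypothetical.
POLICY = {
    ("open",   "/app/data"):   True,
    ("open",   "/etc/shadow"): False,   # host-sensitive path, denied
    ("socket", "tcp"):         True,
}

def audit(request: tuple) -> tuple:
    """Return the (possibly converted) request if allowed, else raise.

    Unknown requests are denied by default; allowed requests are tagged so
    the host / single-application OS can attribute them to this agent.
    """
    if not POLICY.get(request, False):
        raise PermissionError(f"denied: {request}")
    return ("agent-1",) + request       # illustrative "conversion" step

print(audit(("open", "/app/data")))
```

A deny-by-default policy is the natural choice here: the application process has no path to external resources other than the agent, so anything the policy does not explicitly permit simply never reaches the host.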
Step S05 starts the shadow process. The shadow process is used to audit resource access requests from the application process and to record the host operating system's handling results for those requests. The shadow process typically builds address mapping data identical to that of the single application space.
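How the shadow process stays consistent with the agent's view can be modeled with a small sketch. The event format, replay mechanism, and data structures are illustrative assumptions; the disclosure states only that the shadow process establishes address mapping data consistent with the agent process and records the host's processing results.

```python
# Conceptual model: each mapping the agent establishes is replayed to the
# host-side shadow process, which mirrors it and logs the host's result.
class AgentMappings:
    def __init__(self):
        self.maps = {}                     # virt page -> host page
    def map_page(self, virt: int, host: int):
        self.maps[virt] = host
        return ("map", virt, host)         # event forwarded to the shadow

class ShadowProcess:
    def __init__(self):
        self.maps = {}                     # mirror of the agent's view
        self.log = []                      # host results per request
    def replay(self, event, host_result="ok"):
        _, virt, host = event
        self.maps[virt] = host
        self.log.append((event, host_result))

agent, shadow = AgentMappings(), ShadowProcess()
shadow.replay(agent.map_page(0x1000, 0x8000))
assert shadow.maps == agent.maps           # the two views stay consistent
```

Because the agent and shadow processes cannot interact directly (see below, they live in different operating systems), such replay events would in practice be carried by the host operating system and/or the single-application operating system.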
Based on this embodiment, when the host operating system is initialized, a single application space is constructed and the code of the agent process is loaded into it; an operating system of a single application (similar to a lightweight computer system) is then constructed and entered to execute the agent process code in the single application space, completing the start of the agent process.
Note, however, that the agent process is started in the operating system of the single application while the shadow process is started in the host operating system; the two processes cannot interact directly, and any interaction between them must be mediated by the host operating system and/or the operating system of the single application.
Based on this embodiment, access policy control for computer resources is separated from the host operating system and implemented in an independent agent process, so that access control and resource management are logically distinguished. The agent can be deployed on the host operating system and a single-application operating system established on it; if an application deployed in one single-application operating system is migrated to another single-application operating system, hot migration can be achieved and migration efficiency is higher.
The operating system of a single application is generally lightweight, such as a lightweight virtual machine; more specifically, it may be a virtual machine that, although built with Linux's KVM, is reduced in functionality and limited to executing only processes related to the specified application.
In some embodiments, the method provided by the present embodiments may be performed by a multi-core processor (e.g., the processor component 304 of fig. 3), which may include multiple processor cores that are functionally equivalent or at least use the same instruction set architecture. Each processor core may be in kernel mode or user mode independently of the other processor cores. Each processor core executes the host operating system in kernel mode; the host operating system instances executing on the cores cooperate to allocate a single application space for each specified application to be deployed, select any processor core, using an internal scheduler, as the core on which that single application space will execute, and store the code related to the specified application in the single application space. The host operating system executing on the processor cores uses its internal scheduler to decide when it is appropriate to switch the selected processor core to user mode and then enter the single application space to execute the code therein, thereby starting the operating system of the single application and the processes related to the specified application. In this way, multiple processor cores can each deploy and execute multiple applications, improving the processing performance of the system.
As a specific example, some of the multiple processor cores may be selected as master processor cores specifically responsible for scheduling the allocation of the slave processor cores. A master processor core may execute the host operating system in kernel mode; the host operating system, when executed, allocates a single application space and a slave processor core for each of the multiple applications to be deployed, stores the code related to the corresponding application in the corresponding single application space, and drives the corresponding slave processor core to access that single application space in user mode to execute the code therein.
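The master core's allocation step can be sketched as a simple assignment plan. The round-robin policy, naming, and data shapes below are assumptions for illustration; the disclosure does not prescribe a particular scheduling policy, only that the master core assigns each application a single application space and a slave core.

```python
# Simplified sketch of the master-core scheduling described above: each
# application to be deployed gets its own single application space and a
# slave processor core. Round-robin policy is an illustrative assumption.
from itertools import cycle

def assign(apps, slave_cores):
    """Round-robin each application onto a slave core with its own space."""
    cores = cycle(slave_cores)
    plan = {}
    for app in apps:
        plan[app] = {"space": f"space-{app}", "core": next(cores)}
    return plan

plan = assign(["editor", "browser", "db"], [1, 2])
print(plan)   # three spaces, two slave cores reused round-robin
```

Note that each application still gets its own single application space even when slave cores are shared, so the isolation property is preserved regardless of the core-allocation policy.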
In some embodiments, a single application space is partitioned into a high privilege level space and a low privilege level space; the agent process can access both spaces, while the application process can only access the low privilege level space.
In some embodiments, the single-application operating system employs a multi-user management mode and therefore includes at least two types of users: an ordinary-privilege user and a highest-privilege user. The agent process is started by the highest-privilege user and the application process by the ordinary-privilege user, so that the agent process can access all of the space while the application process can only access the low privilege level space. If the operating system of the single application is implemented based on the Linux operating system, the high privilege level space can be regarded as the kernel space in the Linux operating system and the low privilege level space as the user space.
In some embodiments, the operating system of the single application re-addresses the high and low privilege level spaces so that, from the perspective of the application process, the first address of the low privilege level space is 0 and the high privilege level space is not visible, preventing the application process from accessing the high privilege level space.
In correspondence with the above embodiments, the present disclosure also provides a computer storage medium, such as a non-volatile memory or a storage device such as a magnetic disk, which is usually coupled with the processor and integrated inside the processing apparatus; when the processing apparatus operates, the processor can read and execute computer instructions from the storage device. The computer instructions stored in such a computer storage medium, when executed by a processor, are capable of performing the following operations: constructing a single application space and loading code related to the specified application into the single application space; constructing an operating system of a single application and associating it with the single application space; and switching into the operating system of the single application and executing the code related to the specified application therein, wherein the specified application comprises an application process and an agent process that complete the functions of the specified application.
As will be appreciated by one skilled in the art, the present disclosure may be embodied as systems, methods and computer program products. Accordingly, the present disclosure may be embodied in the form of entirely hardware, entirely software (including firmware, resident software, micro-code), or in the form of a combination of software and hardware. Furthermore, in some embodiments, the present disclosure may also be embodied in the form of a computer program product in one or more computer-readable media having computer-readable program code embodied therein.
Any combination of one or more computer-readable media may be employed. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium is, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical memory, a magnetic memory, or any suitable combination of the foregoing. In this context, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., and any suitable combination of the foregoing.
Computer program code for carrying out embodiments of the present disclosure may be written in one or more programming languages or combinations thereof. The programming languages include object-oriented programming languages such as Java and C++, and may also include conventional procedural programming languages such as C. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure; various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.