OPERATING SYSTEM ON A COMPUTING SYSTEM
TECHNICAL FIELD OF THE INVENTION
This invention relates, in general, to operating systems on a computing system and, in particular, to enhanced performance in operating systems and methods for managing computer hardware and software resources and providing common services for computer programs.
BACKGROUND OF THE INVENTION
With respect to operating systems, a hypervisor allows multiple operating systems to run on a host computer at the same time by providing each operating system with a set of virtual resources. These virtual resources provide each operating system with a portion of the actual resources of the computer. Using a hypervisor, the distribution of computer resources within a single computer makes the computer appear to function as if it were two or more independent computers. Utilizing a hypervisor to allow multiple operating systems to run on a host computer at the same time, however, does have drawbacks. The administrative overhead required to operate the hypervisor reduces the overall computer resources available for running operating systems and their applications. As a result of high administrative overhead and other issues, there is a need for improved operating systems having hypervisors on computing systems.
SUMMARY OF THE INVENTION
It would be advantageous to achieve systems and methods for providing operating systems on computing systems that would improve upon existing limitations in functionality. It would be desirable to enable an operating system architecture-based solution leveraging hardware that would provide enhanced hypervision services in a wide variety of hardware systems and applications. To better address one or more of these concerns, an operating system for a computing system and method for use of the same are disclosed.
In one embodiment of the operating system on a computing system, a hypervisor is provided having a hypervised workspace and a native interface to control an underlying portion of the operating system including a system space and a hardware space. A bus accepts a call from the hypervised workspace and dispatches an event for processing. A system space arbiter is interposed between the hypervised workspace and the system space and, similarly, a hardware space arbiter is interposed between the system space and the hardware space. Each of the native interface, the system space arbiter, and the hardware space arbiter is configured to intercept the dispatched event for authentication and context check. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the features and advantages of the present invention, reference is now made to the detailed description of the invention along with the accompanying figures in which corresponding numerals in the different figures refer to corresponding parts and in which:
Figure 1 is a conceptual model that characterizes and standardizes communication functions within a computing system including one embodiment of an operating system according to the teachings presented herein;
Figure 2 is a further conceptual model that characterizes and standardizes communication functions within the computing system depicted in figure 1 in additional detail;
Figure 3 is a conceptual model that characterizes and standardizes communication functions within the pre-boot layer of the computing system depicted in figure 1 in additional detail;
Figure 4 is a schematic diagram of one embodiment of decryption keys utilized by the operating system within the computing system;
Figure 5 is a schematic diagram of one embodiment of key requests utilized by the operating system within the computing system;
Figure 6 is a schematic diagram of one embodiment of key shard utilization employed by the operating system within the computing system; and
Figure 7 is a conceptual model that characterizes and standardizes particular communication functions within the pre-boot layer of the computing system depicted in figure 1 in additional detail.
DETAILED DESCRIPTION OF THE INVENTION
While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts, which can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention. Referring initially to figure 1 and figure 2, therein is depicted one embodiment of an operating system that is conceptually illustrated and generally designated 10. The operating system 10 resides on a computing system 12 which includes a hardware layer 14, a hardware space layer 16, a system space layer 18, a virtual space layer 20, and instances of other operating systems 22, 24. The operating system 10 may be a desktop or mobile operating system that allows users to run as much software from existing platforms as possible. The operating system 10 provides the unique ability to run multiple hardware-accelerated operating systems simultaneously, such as operating systems 22, 24. A pre-boot layer 26 is below the hardware space layer 14 and a sandbox layer 28 is above the virtual space layer 20. As shown, in one embodiment, a kernel 30 having a virtual space 31 has a built-in hypervisor feature called a hypervised workspace 52 (HVWS 52) that allows users to install other common operating systems alongside the operating system 10, without dual-booting, in order to run software from other platforms, such as productivity suites, games, or browsers, for example.
Another important aspect of the operating system 10 is its awareness of devices in the area and the inclusion of wireless mesh networking with distributed consensus in its design. Machines interconnect with other machines in a distributed, peer-to-peer fashion that allows for the creation of domains, and sharing of computing resources. This can happen over the Internet, on the other side of the world, over the mesh, or on a LAN.
The operating system 10 may create virtual LANs that multiple HVWSs 52 may share. This allows users to create domains with shared resource pools and have the other machines on the network agree upon the state of the domain and pools. Virtual LAN adapters are also used in mesh networking, and the distributed protocol, which means that the machines will reach consensus about shared and encrypted resources without the need for inter-office VPN bridges.
With respect to the pre-boot layer 26, this portion of the computing system 12 lies outside the operating system 10, yet it is extremely important to a workstation utilizing the operating system 10. As long as the bootloader remains intact and cryptographically secure, it will be difficult for a modified bootloader to steal the user’s password. The pre-boot code is responsible for the initial invocation of the kernel, and for decryption of the system partition, which gives the operating system protection against outside attack. The operating system will verify the integrity of the bootloader, that it matches a known-good loader’s SHA256 checksum, for example. Each additional boot token may store a copy of the checksum for the purpose of verification from an external perspective as shown in figure 3, which depicts one operational configuration and embodiment of the pre-boot layer 26.
Returning to figure 1 and figure 2, in one embodiment, a loader sits directly after the Master Boot Record (MBR) and is invoked by the BIOS or UEFI of the user’s mainboard. Ideally both the extensible firmware interfaces and traditional BIOS boot code should be implemented. For maximum compatibility with desktops, laptops, and phones, additional loader code may be required on a vendor-by-vendor basis. This is the first real program that is executed by the system.
A configuration store (Config) within the pre-boot layer 26 is a human-readable database that maintains records of the system for the purpose of integrity. At the time of install, the database is populated with the machine’s original or factory configuration, including the unique identifiers (serial numbers) of the drives, motherboard, BIOS/EFI version info, and any removable devices. Checksums of the boot block and any EFI bootable binaries stored in /boot/EFI are recorded for integrity, along with, in general, an account of the system as it was first installed. At the time of boot, part of the bootloader’s responsibility is to warn the user when the configuration has changed, coloring anything that is changed in RED.
Snapshots are updated in the last_boot section of the config, which houses the same info from the previous boot. Bootloader configs can be persisted on removable tokens for extra security.
The contents of the config store validate the following, as illustrated by the sketch following this list:
• What drives are plugged into each SATA port or other interface on the board, and that their positions have not changed.
• Other hardware changes since install and last boot.
• The status of the boot image, and that it is not corrupted.
• A list of valid boot tokens.
• The status of the previous boot, when that occurred, and whether or not it was a success.
• Existing horizontal boot options. (Is there a new OS or EFI image?)
• What HID (Human Input Devices) are present at boot. (For example, there may be a generic driver for a keyboard, but for some reason there are multiple keyboards plugged in that the user did not know about.)
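By way of illustration, the following minimal Python sketch shows how such validation might be performed; the field names (drives, boot_image_sha256, hid_devices) and the validate_config function are hypothetical stand-ins for the config store layout, which this description does not fix in detail.

import hashlib
import json

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, e.g. the boot block or an EFI binary."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_config(config_path, detected):
    """Compare the install-time snapshot against what the bootloader detects at this boot.

    detected is a dict such as {"drives": {...}, "boot_image_sha256": "...", "hid_devices": [...]}.
    Returns a list of human-readable warnings; anything listed would be colored RED.
    """
    with open(config_path) as f:
        config = json.load(f)
    warnings = []
    # Drive positions: serial numbers must still map to the same SATA port or interface.
    for port, serial in config.get("drives", {}).items():
        if detected["drives"].get(port) != serial:
            warnings.append(f"drive on port {port} changed (expected serial {serial})")
    # Boot image integrity: checksum must match the known-good value.
    if detected["boot_image_sha256"] != config.get("boot_image_sha256"):
        warnings.append("boot image checksum mismatch; image may be corrupted or replaced")
    # Unexpected human input devices at boot, such as a second keyboard.
    for hid in set(detected.get("hid_devices", [])) - set(config.get("hid_devices", [])):
        warnings.append(f"unexpected input device present: {hid}")
    return warnings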
As its name suggests, the config store also houses configuration parameters and flags that are to be passed to the running kernel; the booting kernel derives its configuration from the signed boot image. Up-the-chain signing can be used to verify the integrity of the boot process provided that the base itself is signed. The user possesses a set of keys that they can use to sign their own software for the booting layer, and each module is listed in the bootloader against the config so the user can see whether or not it has loaded.
A detection process accounts for hardware and software on the system and records the details in the config. Besides additional operating systems being defined in the boot.conf, most operating systems can be detected by looking for BootX, GRUB, LILO, and the Windows Loader on existing internal and external storage devices. The point is that users have the option of installing the operating system alongside another operating system if they so choose. The bootloader uses automatic detection to allow for the booting of other operating systems, or the user can make manual bootloader edits in the configuration hive. Additionally, on EFI machines implementing the UEFI standard, the user has the option of dropping into the EFI Shell, raising visibility of EFI boot programs and protocols, as well as managing existing EFI programs and partitions. Any boot/EFI partitions and any compiled EFI binaries that could reasonably be booted by the system should be listed there.
With respect to authentication, in one implementation, a unique feature of the operating system 10 is the ability to use a mobile device, or other IoT device, as a secondary factor in unlocking encrypted drives. This allows users to encrypt their drives and unlock them with their phones, along with an optional secondary factor (2FA) password or flash token. This technique provides data integrity, in that the bits on the encrypted portion of the disk cannot be altered in a meaningful way without knowledge of the encryption keys. It also provides data privacy, that is, if an adversary were to remove the user’s hard drive and analyze the data with an external hard drive dock, they would need the decryption keys in order to recover the user’s sensitive files.
The authentication component of the bootloader uses drivers from the boot image, which is a collection of Boot Mods that can optionally be exported by the operating system drivers and were installed with prior authorization from a boot mode arbiter 27. This layer represents the highest privilege tier (ring 0) on the system. The collection of drivers might include disk drivers and even network drivers for the purpose of wireless secondary factor authentication, where the token device uses asymmetric encryption to sign a message containing part of the keys to the booting device or to request a TOTP token (a password that changes with time) from the user’s device in an automated fashion. Unless a network is specifically required at the boot layer, network-aware drivers are forbidden and/or disabled by default. However, it should be noted that the EFI standard itself supports networking, and it may be relevant to embed parts of this process into an EFI program on EFI systems with the following flow:
Boot Space -> Hardware Space -> System Space -> Virtual Space -> Work Space
Both of these aspects, data integrity and privacy, work to enhance the overall security of the operating system 10 installation. This yields an operating system that may be impervious to many physical attacks designed to steal data from the hard drive. If the bootloader is replaced, then the signatures will not match after the drive is unlocked, nor will the signature match others in the domain. If the memory modules are removed in an ‘evil maid’-style attack, then the keys should be encrypted in memory and scattered across the physical RAM sticks. This means that the time that the attacker has to recover the keys is reduced, as they need to collect key fragments successfully from multiple RAM sticks at random offsets as depicted by encryption keys 100 in figure 4. Machines utilizing the operating system 10 will come to consensus about domain objects and keys by pairing off into manageably-sized pockets, where each pocket might elect a leader to participate in inter-pocket consensus elections for deciding the validity of information at the global level. Many distributed consensus algorithms rely on a significant amount of cross-communication between nodes, and so pockets and domains help to break the problem down into sizable pieces so that excessive network traffic is avoided.
Returning to figure 1 and figure 2, with respect to an unlocking mechanism, the bootloader allows for the decryption of the encrypted hard drives in multiple different fashions. The user has the option to remember a password, use an external boot token, or pair a wireless device in order to unlock their hard drive. These options might include the use of multiple layered or interlaced encryption ciphers, including the Blowfish block cipher, AES, and Anubis, in order to encrypt partitions. In order to decrypt these partitions, the user can select from multiple methods, including typing a plain password, in order to start the machine.
If the user chooses to use password authentication to unlock their hard drive, the password will be salted, and digested by the SHA algorithm, which will produce a 256-bit encryption key in a manner similar to the PBKDF2 derivation algorithm. The encryption key is used to unlock the primary disk, as well as other encrypted drives that happen to use the same key. The salt is derived from the current system timestamp, the nanotime, which comes from the motherboard CMOS clock. The value may also come from cryptographic hardware, such as a TPM chip, the Trusted Platform Module, which serves to store cryptographic keys and certificates and performs certain cryptographic operations, and is found on many newer machines that happen to have this hardware. If the hardware is present, the option to store parts of encryption keys in the TPM should also be present.
Specifically, the certificates for the signed bootloader can also be stored here or registered with UEFI as user keys. The user should have the option to clear these keys from within the bootloader. The salt can be stored in the header of the encrypted disk, as it is one piece of information required to decrypt the primary partition that is not secret; the purpose of the salt is to prevent the usage of rainbow tables in cracking the encryption keys, as unique tables would have to be generated for each salted password separately. The user also has the option to use their smartphone to unlock encrypted hard drives. This means that the user’s phone contains part of the block cipher encryption keys required to unlock the disk. The phone must implement the same protocol as the operating system, and might also be running the operating system 10, but this is not required if the user chooses to use an authenticator app that emits network frames. The authenticator app, or ‘platform application’, has a public/private keypair that is known and trusted to the operating system’s bootloader, and is seeded with a cryptographically-secure random number generator (RNG).
While encrypting a partition, the user has the option to specify the key fingerprints of devices that will be used to unlock the hard drive, via a pairing process, and any key shards, or temporal seeds are loaded onto the device by means of NFC, IR, QR codes, WiFi, Bluetooth, networking in general, manual typing, or some other method. This pairing process involves comparing codes on both devices, and the token device saving an encrypted copy of the hard drive key, after receiving it over WiFi or Bluetooth.
When the user wishes to unlock their hard drive, they simply bring the token device within range of the booting device, and a dialog will pop up on the token device’s screen, asking for an optional code, before the drive key is sent back to the booting machine. The booting device asks for the token device by its identifier, and the token device responds with a cryptographically signed message, boot_ok, and boot_key_shard, back to the booting device as shown by key flow 110 in figure 5.
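A minimal sketch of the token-side reply is shown below, under the assumption that pairing established an Ed25519 signing key on the token device; the message fields boot_ok and boot_key_shard follow the flow above, while the key handling, serialization, and the cryptography library calls are illustrative choices rather than the protocol’s actual definition.

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative key material; real keys would be created at pairing time and kept
# in the token device's secure storage rather than generated on the fly.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()   # shared with the bootloader during pairing

def build_boot_response(requested_device_id, key_shard):
    """Sign a boot_ok / boot_key_shard message for the booting machine."""
    payload = json.dumps({
        "device_id": requested_device_id,
        "boot_ok": True,
        "boot_key_shard": key_shard.hex(),
    }).encode()
    return payload, device_key.sign(payload)

# The booting machine verifies the signature with the paired public key before
# combining the shard with its other key material.
payload, signature = build_boot_response("token-01", b"\x13" * 32)
public_key.verify(signature, payload)   # raises InvalidSignature if tampered with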
Returning to figure 1 and figure 2, the operating system 10 includes a bootloader utilizing multiple techniques to protect user data. One technique is known as on-the-fly decryption, which means that the loader will be decrypting hard disk sectors block-by-block (as needed), instead of all at once. This allows the operating system to be mounted and run as if there were no encryption at the physical layer. On-the-fly encryption and decryption is common in the art, but the operating system approaches the problem with extra support for more ciphers and entropy seeding algorithms. BBS (the Blum Blum Shub quadratic residue generator), Anubis, Blowfish, and Whirlpool hashing are among these specifically. During initial encryption, from inside the operating system itself, or the installation program, the user’s password is turned into a hash. The password is converted into a salted hash, with a nanosecond timestamp and hardware id combination:
Sha256(salt, password, iterations, nanotime, length):
79ff5365c8d92f503bb34eab1de9bf7fe505d1c091b4d8c2ac9e6cfd89bf94eb
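A minimal sketch of such a derivation, using Python’s standard hashlib and the PBKDF2 construction mentioned above, is shown below; the iteration count, the way the nanosecond timestamp is folded into the salt, and the example password are assumptions made only for illustration.

import hashlib
import time

def derive_key(password, salt, iterations=100_000, length=32):
    """Derive a 256-bit disk key from a salted password (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=length)

nanotime = time.time_ns()                                   # stand-in for the CMOS-derived nanotime
salt = hashlib.sha256(nanotime.to_bytes(16, "big")).digest()[:16]
key = derive_key("correct horse battery staple", salt)
print(key.hex())                                            # 64 hex characters, i.e. 256 bits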
The block cipher that the drive is encrypted to should be peer-reviewed and audited in order to protect consumer privacy. In addition, the drive encryption source code should be made publicly available (to instill confidence in a peer-reviewed crypto system). This is because crypto is best peer-reviewed, and not home brewed. Some notable block ciphers include AES, Twofish, Serpent, and Blowfish (in both CBC and XTS modes). Notable hashing algorithms include SHA-256 and Blowfish and would be suitable for this purpose.
After the system disk has been unlocked by the bootloader, the keys in memory must be protected in order to thwart cold-boot attacks. During a cold-boot attack, an attacker might sever a target computer’s power, extract its memory modules, and recover the encryption keys with an external card reader. Typically, this is done at a stabilized temperature. The problem is that the bits on a RAM module are not erased immediately after power is severed, and there may be no good way to erase sensitive memory without a BIOS-level feature that wipes the modules while the system is powered off. This could be accomplished with a CMOS battery backup. Not all platforms are the same.
The bits on the memory sticks themselves degrade more slowly when exposed to freezing temperatures, and ordinarily hold the disk encryption keys. Thwarting these attacks will mean making the keys unrecoverable to the adversary.
The approach of the operating system 10 may involve encrypting keys in memory and scattering them across multiple physical RAM modules to reduce the chances of a successful key recovery. At each boot, the bootloader program selects a new random offset for the keys, which are stored in a memory page inside the kernel’s memory that is also at a random location. This makes it so the attacker first has to locate the page containing the offset information, and then read from each of the offset points to get the complete keys; the increased complexity makes these attacks more difficult and gives those bits some time to degrade.
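The scattering idea can be illustrated with the short sketch below; the fragment size, the simulated RAM buffer, and the returned offset table are assumptions used only to show the mechanics, since in the real design the table itself would live in a randomly located kernel page.

import os
import secrets

RAM_SIZE = 1 << 20        # simulated physical memory (1 MiB for the sketch)
FRAGMENT_SIZE = 8         # bytes per key fragment

def scatter_key(key, ram):
    """Split the key into fragments and write each one at a fresh random offset."""
    offsets = []
    for i in range(0, len(key), FRAGMENT_SIZE):
        fragment = key[i:i + FRAGMENT_SIZE]
        offset = secrets.randbelow(RAM_SIZE - FRAGMENT_SIZE)
        ram[offset:offset + len(fragment)] = fragment
        offsets.append((offset, len(fragment)))
    return offsets

def gather_key(offsets, ram):
    """Reassemble the key; an attacker needs every fragment plus the offset table."""
    return b"".join(bytes(ram[off:off + size]) for off, size in offsets)

ram = bytearray(os.urandom(RAM_SIZE))    # decoy-filled memory
key = os.urandom(32)
table = scatter_key(key, ram)
assert gather_key(table, ram) == key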
The kernel 30 and encrypted bootloader will thwart many types of memory extraction techniques by storing encryption keys in randomized memory locations, with memory decoy pointers, and additional plausible pointers that further impede key recovery. Besides memory randomization, the operating system uses a technique known as TRESOR, which stands for "TRESOR Runs Encryption Securely Outside RAM", for secure kernel-protected key storage in the x64 CPU registers themselves.
The components required to perform in-CPU register storage of the encryption keys can be loaded into the kernel 30 from within boot space. This is the most privileged part of the system where applications are allowed to flash hardware and make changes to drivers. Since the operating system users should not log into hardware space or system space directly, developers can perform privileged operations in hardware space through a hardware space arbiter 41. However, accessing boot space requires that hardware space driver modules export boot mods that are copied into a boot image at the time that the kernel is installed, upgraded, or updated. This requires permissions from the boot mode arbiter 27 for ring 0 access to the system.
The hardware space arbiter 41 may pop up a single or multi-paged dialog detailing the permissions required, and reasons for each permission in summary. The user then has the option to grant or deny these requests, which will determine if the hardware space commands are allowable. The boot space arbiter requires that the user grant permission for their kernel and/or boot image to be modified.
Since these confirmation dialogs arise in a protected part of the operating system, other applications will not be able to draw over or interfere with the privilege elevation process. As users grant privileges to applications, specific privilege groups may be automatically granted in the future, or the users might decide to revoke access, which would prevent the application from modifying the base of the operating system. The user must break out of the sandbox in order to read from the registers, as shown by key shard implementation 120 in figure 6.
Returning to figure 1 and figure 2, the hardware space layer 14 of the kernel 30 is where the system takes shape. It is also sandboxed from the rest of the kernel 30 and is accessible with the permission of the system owner. Much of the core of the operating system is found in hardware space layer 14. The architecture of the kernel divides system permissions into three major permission categories: hardware permissions, system permissions, and virtual permissions.
With respect to hardware permissions, hardware space represents the lowest components of the kernel 30, and includes the entire base of the operating system, and hardware drivers. Hardware space is required for the system to perform and provides a low-level interface for each piece of hardware connected to the machine. With respect to system permissions, the ‘system space’ or system abstraction layer sits on top of the hardware, and provides a virtual interface for each hardware component. This will simplify the understanding of each hardware component, and create a secure base for the hypervisor, which does not care about drivers so much as the abstraction of virtual resources to individual virtual machines. For example, there might be two SATA drives attached to the system, and the hardware layer understands one device as an SSD block device, and the other as a USB Mass Storage Device attached to the bus. The hardware layer would identify the USB device separately from the SATA device, yet the system layer will see each device as a type of disk, and offer it to the virtual layers of the operating system that will share the disk as a resource.
With respect to virtual permissions, it is important to mention how these layers of the operating system fit together. Virtual-space is where virtual machines are nested, and it draws in resources from the system space. In general, virtual-space is where hypervised workspaces are created, sandboxed, and concurrently executed. This means that apps running in separate workspaces cannot interfere with one another. In order to communicate, they must do so through the operating system API.
As shown in figure 1, figure 2, and figure 7, with respect to hardware drivers 34, drivers for the operating system 10 are depicted spanning system and hardware space layer 14. Drivers are an important part of any operating system, especially those designed to run on a plethora of existing hardware in the IoT ecosystem. Another important aspect of any driver is its ability to interact with the hardware layer on a very low level. The operating system 10 ensures that this capability is available to users with privileged access to the system who can confirm that this is what was intended. The system may even ask the user for their password or secondary factor (which includes other devices) to help verify that the administrator of the system is present, and not some imposter.
With respect to hardware modules 36, when developers write programs for the operating system 10, they have the discretion to specify permissions on an as-needed or initial basis. In the as-needed case, the app would ask for permissions when the user does something that requires them, and a callback event would fire when the user hits grant or deny. This allows the developer to code for both cases. Other apps might request permissions up-front in order to function, or because the developer would rather ask the user up front. When modules are installed, there is an auditable list of native application program interface calls that the module is authorized to make, which is something that advanced users can look at to see what the module can do, even if they do not have the source code. These permissions would be specified as part of the application’s metadata and would automatically be added under certain circumstances.
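The as-needed flow described above might look like the following sketch; PermissionArbiter, its request method, and the callback signature are hypothetical names chosen to illustrate the grant/deny callback, not an actual interface of the operating system.

class PermissionArbiter:
    """Illustrative arbiter that consults the user and fires a grant/deny callback."""

    def __init__(self, granted=None):
        self.granted = granted or set()

    def request(self, permission, on_result):
        # In the real system this raises a protected dialog that cannot be drawn over;
        # here the decision simply comes from a pre-seeded set of grants.
        on_result(permission in self.granted)

def open_camera():
    print("camera opened")

def show_denied_message():
    print("camera permission denied; feature disabled")

arbiter = PermissionArbiter(granted={"hardware.camera"})

# As-needed request: fired only when the user actually invokes the camera feature.
arbiter.request(
    "hardware.camera",
    on_result=lambda ok: open_camera() if ok else show_denied_message(),
)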
If an application contains a driver, the user will need to grant the permission by re-authenticating and answering a low-level system dialog that cannot be drawn over or manipulated by any other program. Many drivers will contain hardware modules that define how the operating system should interact with devices. Many system modules 38 will abstract the functionality of these devices for use at the hypervisor level.
In figure 2 and figure 7, several hardware modules 40 are shown. It is also shown that a hardware module does not require a system module if an abstraction already exists. If the chipset hardware module is for a USB interface, then the operating system 10 already has an understanding of what a USB port is at the system level, and there is no reason to bundle a system module. The hardware module simply specifies that the device is a USB interface, and the USB sysmod provides a default abstraction, and expects certain methods and capabilities to be available.
Also shown is a hardware module for a video card that includes a sysmod. The operating system 10 already has an understanding of the concept of a ‘gpu’; however, the vendor chose to create an additional sysmod to go above and beyond the ‘gpu’ abstraction. This allows the vendor to offer additional functionality, such as augmented reality rendering, or an independent processing platform like Nvidia’s CUDA. Sysmods are important, because they make these resources available to the hypervisor, and thus, to the sandboxed parts of the operating system.
Users may have chosen to install a 3rd party OS inside a hypervised workspace 52 in order to play their favorite games, in which case access to advanced resources like CUDA would need to be made available. The sysmod will mean that the hypervised operating system will be able to make full use of the graphics card, and additional CUDA-based math. Users should not need to dual-boot in order to play games on the 3rd party OS, and would instead run the entire OS, and all of their software, inside of the hypervised workspace 52. The user could do the same with many kinds of operating systems such as Windows, Linux, Unix, or MacOS. The true benefit of hypervised workspaces 52 is that each virtual machine instance can have hardware acceleration that is good enough for video editing or playing games.
Each guest operating system may require its own drivers, such as a graphics card driver. This is not a problem because the operating system and its hypervisor now understand the hardware, and the guest OS will detect the exact model of graphics card, such as “Nvidia GeForce GTX 560 Ti”. The benefit of exact device detection means that the hypervised instances can use the stock Nvidia drivers without any modifications, and truly believe that they are being run on native hardware.
In the operating system 10, the solution is to allow vendors to define modules for the hypervisor (in system space) that specify which components and features should be exposed to each guest (workspace). It might be the case that some guests don’t have proper driver support for proprietary hardware and would prefer a “Standard Graphics Adapter” instead. Users should be able to choose which driver sub-system is used for individual guests, so that the best virtual hardware that the guest understands is always available. The operating system 10 seeks to create a modular driver framework that allows developers to write drivers that are compatible with virtualization, and can be exposed directly to, or abstracted to, an HVWS to offer the best functionality and performance for a given platform.
Even though components in hardware space are running from within the most privileged part of the system besides the booting layer, programs will take on the privileges of the caller. One cannot import these components from user space and escalate privileges into hardware space, nor can they break outside of the caller’s sandbox without a grant from the correct arbiter. Likewise, when modules are imported, they are invoked with the permissions of the importer. Hardware space includes a system base which is broken up into sub-components with several permissions groups. These sub-permissions groups or ‘risk factors’ further separate what modules in hardware space can and cannot do, even though all of the code is run in the most privileged space on the system. The goal is to provide further granularity by risk factor. For example, if a developer was writing a WiFi card driver that interacts with WiFi frames coming from other machines, the developer could mark that there is a risk of the module becoming compromised, and Nebula would isolate the module from others in hardware space. This goes hand-in-hand with the original request from the arbiter to install the module, which identified each method that the module is allowed to call. Without a re-authorization, the module will only be allowed to call those methods that it was installed with. This helps to protect the user from injected or foreign functionality, because the attacker will be limited to those methods. System administrators might go back and view details about each module and remove modules that go outside of their own constraints or policy.
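The install-time method list and its enforcement can be sketched as follows; the ModuleSandbox wrapper, the FakeNapi surface, and the method names are hypothetical illustrations of the constraint that a module may only call what it was installed with.

class MethodPolicyError(PermissionError):
    pass

class ModuleSandbox:
    """Wraps a hardware space module so it can only call the NAPI methods it was installed with."""

    def __init__(self, module_name, allowed_methods, napi):
        self.module_name = module_name
        self.allowed_methods = allowed_methods   # auditable list fixed at install time
        self._napi = napi

    def call(self, method, *args, **kwargs):
        if method not in self.allowed_methods:
            # Injected or foreign functionality is limited to the installed method set.
            raise MethodPolicyError(f"{self.module_name} is not authorized to call {method}")
        return getattr(self._napi, method)(*args, **kwargs)

class FakeNapi:
    """Pretend NAPI surface for the sketch."""
    def net_send_frame(self, frame):
        return len(frame)
    def disk_write(self, lba, data):
        return True

wifi_driver = ModuleSandbox("vendor.wifi", {"net_send_frame"}, FakeNapi())
wifi_driver.call("net_send_frame", b"\x00" * 64)   # allowed at install time
# wifi_driver.call("disk_write", 0, b"...")        # would raise MethodPolicyError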
Even though the drivers and code are written in a high-level language with low-level features available, there is still significant risk of logical bugs, malicious firmware updates, unsanitized data, and the like. The benefit of a high-level language is that in-memory attacks become more difficult, and buffers do not overflow in predictable ways, and yet, bugs are always possible. Mitigating buffer overflows system-wide results in a much more secure platform, but will require a huge development undertaking.
If a module tries to access any other module that wasn’t explicitly defined by the developer’s own signed code, then access would be denied. A common problem in security is that attackers will often hijack programs and use them to run their own malicious code. Setting this requirement will mean that the attacker is limited to calling API functions that can only be accessed under ordinary circumstances.
The hardware space layer 14 includes the core system and drivers. The compiler and standard library are used to build the entire kernel that will form the base for the rest of the system. The structure of the kernel API is designed to be as easy to use as possible. Any hardware modules can be written in the Pythonic language, which is simple to follow, and compiled for performance. The core of the operating system 10 needs to include support for the underlying hardware itself. Before many of the hardware drivers can be loaded, the kernel needs to have a basic understanding of each hardware component on the machine’s main board.
At the lowest level of hardware space, there is code to handle allocating resources for programs and interactions with the system’s BIOS or EFI interface, and for creating file handles (file-like objects) that represent devices, sockets, files, virtual adapters, and otherwise. This is similar to how the Python language handles open files and sockets, except in the operating system, other hardware components are also represented by file-like objects at the low level. Unlike Linux/Unix, these file handles are not ‘mounted’ in the filesystem; for example, on a Linux machine, you might have a hard disk drive “/dev/sda1” that shows up as a file in the file-system browser or directory listing. In the operating system 10, hardware devices are available from the kernel APIs, used by the system, and fed into the hypervisor with the system-level device abstractions. One would import the correct module and call a listing method to get a list of hardware devices in the category.
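For example, enumerating disks might follow the pattern below; the module and method names (DiskModule, list_devices, DeviceHandle) are hypothetical stand-ins, since the description does not name the actual kernel API, and the hard-coded devices exist only to make the sketch self-contained.

from dataclasses import dataclass

@dataclass
class DeviceHandle:
    """File-like object representing a hardware device exposed by the kernel API."""
    identifier: str
    capacity_bytes: int

    def read(self, offset, length):
        return b"\x00" * length     # placeholder block read

class DiskModule:
    """Stand-in for the kernel module one would import to list devices in a category."""
    def list_devices(self):
        # In the real kernel this list would come from bus enumeration, not constants.
        return [DeviceHandle("ssd0", 512 * 10**9), DeviceHandle("usb0", 64 * 10**9)]

disk = DiskModule()
for handle in disk.list_devices():
    print(handle.identifier, handle.capacity_bytes)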
The ‘base’ module in hardware space allows the system to interact with the motherboard’s BIOS (Basic I/O System) or EFI (Extensible Firmware Interface). This might include flashing hardware components, including upgrading the system BIOS to a newer version from an update file supplied by the vendor. This is considered to be one of the greatest, if not the greatest, privileges of the operating system, as the permission to flash hardware firmware allows for changes to the layers beneath the operating system 10. The operating system 10 cannot do as good a job protecting the user if their underlying hardware is damaged, or infected by a malicious firmware update. Typically, many hardware components have their own internal operating systems, which further add to the complexity of securing a computer system.
In addition to flashing, there are also many system related functions that the kernel must handle. This allows programs to subscribe to low-level functions such as reset or power button presses. The system should be able to interact with certain low-level hardware that might also have a BIOS-level control interface. These components allow the operating system to enumerate hardware, detect the screen, and allocate the initial memory for the kernel. There are different kinds of hardware out there, some proprietary, such as Apple hardware, that require specific boot code before they will allow the booting of an operating system. If the operating system 10 intends to run on Apple hardware, and depending on licensing concerns, the boot code might need to be implemented in order for the operating system to boot MacOS in an HVWS. Likewise, if the user has an extensible firmware interface instead of a BIOS, then the boot process differs, and the system must register itself with the EFI in order for the operating system to boot. The easiest way to do this is simply having a FAT partition with a bootable EFI image at “/boot/EFI” as per the UEFI specification.
The operating system 10 must also know about what devices are attached to the buses of the motherboard. Various buses exist on a modern motherboard for interfacing with PCI devices, memory, drives, and the CPU. The system needs to know about all of these. In addition to this, some devices attached to these buses are actually controllers, such as a USB Controller. USB is actually handled separately in the USB module, but the system needs to know about these primary buses first.
On most motherboards, the CPU is connected to the Front Side Bus (FSB), which is connected to the North Bridge. The North and South Bridges communicate over an I/O Controller Hub, which is another kind of bus. The system memory, PCI-E (PCI Express), and AGP video are all connected to the North Bridge. Finally, the South Bridge is connected to the normal PCI card slots, IDE, SATA, USB, Ethernet, audio encoding chips, and CMOS. On the South Bridge, there is also a bus for flashing and/or interacting with the BIOS, called the LPC Bus. The LPC bus is connected to serial and parallel ports, floppy controllers, and PS/2 input devices like keyboards or mice. To make matters a bit trickier, USB keyboards and mice are also handled separately by the USB subsystem. LPC stands for “Low Pin Count”.
The operating system stack also requires the ability to interface with hard disk drives and solid-state drives. Although raw data may have come off from a SATA controller that was enumerated by a different part of the operating system, the basic understanding of disks and file systems comes from hardware space. In system space there are abstractions for virtual drives, and in virtual-space, there are virtual controllers for each kind of device. Once drives are detected, the operating system keeps representational objects and metadata in memory for programs to access with the native application program interface.
In order to mount the filesystems on disks, the operating system needs to have its own implementation of EXT4, and should support as many common filesystems as possible (e.g., UFS, ZFS, FAT16, FAT32, and NTFS). The root mount point, ‘/’ on other systems, contains the operating system files, and will likely be installed on the system partition. Once the root mount point has been mounted, configuration values can be pulled from the disk. The hardware space level configuration store has configuration values for each hardmod that exports its own set of preferences. The operating system 10 has its own unified configuration hive that saves configs for hardmods, sysmods, and apps. In each case, specific permissions are required to read from and write to configs for each type of entry. For example, there might be a hardware tier of configs that corresponds to hardware space and can only be accessed with the hardware user’s permissions. Realistically, in one implementation, these calls would occur through the native application program interface, and the system would prompt the user for re-authentication before execution. The calls themselves can also be made from inside the virtualization sandbox, so it is important for the system to audit them.
The developer should be able to export configuration values that users can change with a GUI, TUI, or CLI (through the Settings Manager). The system will translate these into human-readable configuration files that are stored in ‘/conf’. Other programs can write their configuration files in ‘/conf’, but the configuration system has APIs for registering and unregistering entries that will appear in the menus for the user. This means developers can write drivers and apps for guest operating systems that unify the operating system experience, and utilize native calls to run programs that need access outside of the virtualization sandbox. This is similar to ‘/etc’ on GNU Linux, or the registry on Microsoft Windows. When you look into it, these are really just a collection of formatted JSON, YAML or otherwise human-readable files stored on the disk. The system includes an index that describes the names of each program, and the configuration files that belong to each.
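A minimal sketch of registering an entry in such a hive is shown below; the directory used in place of ‘/conf’, the index file, and the register_entry helper are assumptions made purely for illustration.

import json
from pathlib import Path

CONF_ROOT = Path("/tmp/conf-demo")        # stand-in for the real '/conf' hive

def register_entry(program, settings):
    """Write a human-readable config file and record it in the hive index."""
    CONF_ROOT.mkdir(parents=True, exist_ok=True)
    conf_path = CONF_ROOT / f"{program}.json"
    conf_path.write_text(json.dumps(settings, indent=2))

    index_path = CONF_ROOT / "index.json"
    index = json.loads(index_path.read_text()) if index_path.exists() else {}
    index[program] = conf_path.name        # maps each program to its configuration files
    index_path.write_text(json.dumps(index, indent=2))
    return conf_path

register_entry("example.audio.sysmod", {"sample_rate": 48000, "exclusive_mode": False})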
In one embodiment, the operating system uses features from the Pythonic interpreter along with system environment variables and a user profile to build shell sessions. This means that users can run commands at the CLI or textual user session from inside the sandbox. These commands are executed when the correct permissions are met. Some low-level permissions are very scrupulously scoped to only be available from certain parts of the system. Developers can write program scripts that are fed into the interpreter, or fed into the compiler to produce binaries.
Before the user can access this shell interface, the input subsystem needs to be available. The user may have a keyboard or mouse connected to the system that must be enumerated, before the shell interface would be usable. Certain headless systems may not even have video chipsets, but could still run the operating system. For these embedded platforms, there’s a serial or UART interface available that would allow developers to write software for these devices as well.
In this case, the boot process and CLI are made available, except there is no graphical subsystem. Thus, the total installed operating system is very slim. Part of the input module depends on the serial module and would allow for this to occur. Other than that, the system supports remote VNC and shell sessions, and remote input through VNC must also be supported by the input subsystem.
When user authentication is required, the user is expected to demonstrate that they have system-level permissions. This can be done by typing the system password or providing a biometric, and might be supplemented by requiring a USB flash drive or wireless device. The system uses a random salt to seed the SHA-256 hash of the user’s password, or another saved fingerprint. The ‘auth’ subsystem also provides a keyring that is protected in memory. The keyring provides a place for the user to store named public/private key pairs for various applications that use cryptography.
A program could generate an RSA keypair, and store that keypair inside a password-protected keyring on the disk that only gets loaded into memory when needed. The application can specify that the keyring password must be different for a particular application. Developers can specify that an application’s storage should be unlockable with the user’s account password, or with a separate supplied credential, if the user wants to make the encryption different for some of their apps but not others. The user might want to password protect apps themselves, which will encrypt the application sandbox to a key stored in the user keyring.
The security module is the part of the system that ensures kernel security and integrity by monitoring system calls, looking for calls, or native application program interface calls, that are in violation of a strict set of policies. Many features in the operating system require cryptography. While programs may have their own cryptographic implementations, the operating system 10 provides many common algorithms ranging from hashing algorithms like MD5 (legacy), SHA, and Blowfish, to common symmetric and asymmetric encryption ciphers (AES, RSA, DSA, ElGamal, DH, and other schemes). It also provides access to key derivation schemes like the Password-Based Key Derivation Function (PBKDF1 and PBKDF2). The crypt library’s main purpose is to provide known-good implementations of common cryptographic algorithms. These implementations should be peer-reviewed and export an easy-to-use interface for developers.
The native crypto library provides both Secure Random Number Generation (SRNG) techniques, such as quadratic residue generation that can be used for cryptography, and PRNG (Pseudorandom Number Generation) techniques that are used when the user just wants a random number. Part of this process involves entropy generation, accumulating the microstate of the hardware, continually running it through a mathematical trap-door or hashing function, and then using it in a progressive modulus division where the output is fed back into the next iteration. Finally, virtualization support is critical to the graphical components of the operating system functioning correctly. In short, the virtualization module is used to abstract the virtualization capabilities of the CPU and depends on the ‘cpu’ modules having already been started. Extended processor features like Intel’s VT-x and AMD-V are abstracted into a single system where blocks of instructions can be scheduled onto the processor to be run as part of the hypervisor without making different calls for VT-x or AMD-V.
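The quadratic residue generation described above can be sketched as follows; the toy primes, the use of operating-system entropy in place of hardware microstate, and the helper names are assumptions for illustration only, since real parameters would be very large and secret.

import hashlib
import math
import os

# Blum Blum Shub: x_{n+1} = x_n^2 mod M, where M = p * q and p, q are primes
# congruent to 3 mod 4.  Toy primes below; real ones would be hundreds of digits long.
P, Q = 11, 23
M = P * Q

def entropy_seed():
    """Accumulate entropy (here just OS randomness) through a hash until a valid seed appears."""
    while True:
        pool = hashlib.sha256(os.urandom(32)).digest()
        x = int.from_bytes(pool, "big") % M
        if x > 1 and math.gcd(x, M) == 1:
            return x

def bbs_bits(n_bits, x=None):
    """Yield n_bits pseudorandom bits from the quadratic residue generator."""
    x = entropy_seed() if x is None else x
    for _ in range(n_bits):
        x = (x * x) % M          # modulus step; the output is fed back into the next iteration
        yield x & 1              # emit the least significant bit of each state

print("".join(str(bit) for bit in bbs_bits(32)))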
The hypervisor uses a process known as ‘context switching’ to quickly change between virtual machines. It would appear that multiple workspaces are running at the same time, when in actuality, the hypervisor is just switching between them very quickly, taking into account execution priority as it does. Multi-threaded processes do run blocks concurrently, which means that the hypervisor can schedule additional work to be executed simultaneously.
The ‘video’ and ‘sound’ modules also provide enhanced hardware acceleration capabilities. Vendors write hardmods and sysmods for their video and audio hardware, which are then detected by the ‘video’ and ‘sound’ modules. The hardmods provide a hardware space driver for audio and video hardware, and the sysmods provide system space services that supply an abstraction of the driver hardmod for the hypervisor. Certain drivers might permit the resource to be shared between workspaces, and others might require the hardware to be directly exposed to the hypervisor. This means that these devices can be accessed through native application program interface calls, or natively with a guest driver, and are abstracted this way in order to provide the most functionality for dedicated graphics and audio hardware. It depends upon the hardware and drivers.
The radio module implements a common understanding of radio protocols like 802.11 (WiFi) and 802.15 (Bluetooth), whereas the ‘net’ and ‘socket’ modules allow for network communication altogether. However, the radio module provides a WiFi and Bluetooth interface to the layers above, which is particularly useful to developers and network engineers, as they have the ability to debug the network stack. Certain parts of the API are used to create TCP or UDP sockets, including privileges required in order to open raw sockets, which would provide the ability to collect network traffic from other programs on the system. The opened socket itself is an object that is usable from inside the application sandbox.
Native application program interface commands are scheduled onto the hypervisor, broken down into threads, and executed on the underlying hardware, where the hardware and system space modules run the privileged or non-privileged operations. Technically, this breaks outside of the virtualization sandbox, but it allows guests to run with full hardware acceleration from inside the virtualized environment. The system’s arbiters are responsible for deciding whether or not to allow NAPI calls.
Depending on domain policy and user privileges, the arbiter will deny, delay, or execute native calls. Apps can do whatever they want inside the sandbox, but as soon as they request native resources, or make commands that work outside of the sandbox, an arbiter will verify the transaction with the user. The security module audits the event as a grant or denial. This makes the data easily available to protection software, as an antimalware solution could take the events and search for malicious behaviors. Users are encouraged to create domains and create shared domain resources. The mesh protocol provides a way for nodes to share computing resources with one another after pairing. The operating system can create domain resources where the resource is actually split amongst multiple machines.
Machines can pair when two devices have their screens on, and one user does the flip gesture, which is to hold the device facing the other device and flick the wrist. The two people verify that the codes on the screen are the same, and the other person hits okay. Under the hood, this is a cryptographic key exchange that occurs where an asymmetric algorithm is used to establish a temporal session key that the two devices will use to share data. Public-key certificates are stored in the authentication store after a successful pair. If the other user allows for pairing over the Internet, cryptographic sessions can be performed over the Internet, and not just the Mesh. The difference is that traffic can traverse the Internet as well as the mesh network to get to its destination. There is no expectation that machines have to participate in mesh traffic forwarding, but it is enabled automatically for explicitly defined domains, such that the system administrators can choose a portion of resources to contribute to the domain, and domain files and resources, will be sharded across those machines. The users may need to manually verify their key fingerprints or use a trusted 3rd party to maintain copies of each user’s public keys in relation to individual device fingerprints.
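Under the assumption of an X25519 exchange followed by hashing of the shared secret, the cryptographic core of the pairing might resemble the sketch below; the library calls, the code-derivation step, and the six-character verification code are illustrative choices, not the protocol’s actual definition.

import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Each device generates an ephemeral keypair for the pairing session.
a_priv = X25519PrivateKey.generate()
b_priv = X25519PrivateKey.generate()

# Public keys are exchanged during the flip gesture; both sides compute the same secret.
a_secret = a_priv.exchange(b_priv.public_key())
b_secret = b_priv.exchange(a_priv.public_key())
assert a_secret == b_secret

# The temporal session key used to share data is derived from the shared secret.
session_key = hashlib.sha256(a_secret).digest()

# Both screens show a short code the two users compare before hitting okay.
pub_bytes = b_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
code = hashlib.sha256(a_secret + pub_bytes).hexdigest()[:6].upper()
print("verification code:", code)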
System space is a part of the kernel 30 where the hardware-to-virtualization abstractions occur, and where system services live. A system service is a process that starts on its own schedule and runs in the background as a specific non-hardware user. Services are often invoked upon boot, although this is not required. System space has many sysmods (system modules) that export virtual hardware for the hypervisor. Hardware can be assigned or shared between workspaces in the Virtual Configuration Manager.
System space also has antivirus and firewall services that scan for malicious files and control the rules in the lower parts of the stack in hardware space. Changing rulesets in the default system firewall, via NAPI calls or from another sysmod, will result in the antivirus process triggering a hardware space confirmation dialog for altering the firewall rules. However, users can choose to allow the addition of trusted services. Trusted services are allowed to access certain hardware space whenever they need, but the user must re-authenticate at the time of installation, and a trusted signing authority has to have verified the integrity of the module.
Users can install unsigned modules but, in one implementation, must wait for a 15 second countdown timer on-screen warning them that the installation of the unsigned modules could completely destroy their system. Once the module has been tested by a trusted authority and signed, the system will treat it as safe without this additional warning.
The service configuration remembers the SHA256 checksums for each service’s binary and will not run the service if the checksum changes. That being said, the service installation process will uninstall previous services with the same name, and each time the service binary is updated by the vendor, the checksum changes to match the new binary. If the service updates into an unsigned binary, then re-authentication would be required; otherwise that service can update itself automatically. The requirements are that trusted services must be signed by the vendor, installed after the user confirms the hardware space dialog from the arbiter, and can be removed or configured not to start at boot, at any time.
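A minimal sketch of this checksum gate follows; the service-config layout, file paths, and function names are illustrative, while the refuse-to-run behaviour mirrors the description above.

import hashlib
from pathlib import Path

def sha256_file(path):
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_checksum(service_conf):
    """Called at install or update time to remember the vendor binary's checksum."""
    service_conf["sha256"] = sha256_file(service_conf["binary"])

def may_start_service(service_conf):
    """Refuse to start the service if its binary no longer matches the recorded checksum."""
    if sha256_file(service_conf["binary"]) != service_conf["sha256"]:
        print(f"refusing to start {service_conf['name']}: checksum mismatch")
        return False
    return True

conf = {"name": "example-firewall", "binary": "/usr/bin/true"}   # illustrative paths only
record_checksum(conf)
assert may_start_service(conf)    # passes until the binary is altered or replaced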
The default abstractions for antivirus and firewall are extended by developers who wish to make their own antivirus products. Developers can choose to extend the antivirus and firewall APIs to build their own anti-malware software, or other software that requires constant hardware permissions to run. This means that users can choose to install 3rd party antivirus solutions and other privileged software, but they must have permissions to do so.
Most of the application program interface lives in system space and has many native methods for developers to use in their programs. Before events are executed, the events travel from the caller (a Hypervised Workspace VM) to the Virtual-space native application program interface (NAPI) bus. Workspaces dispatch NAPI events onto the bus, which can be subscribed to by one or more system space modules (sysmods). A NAPI subsystem 42 itself will subscribe to these events and handle their execution in separate threads (a configurable number). If load-balancing is enabled, some events may not take precedence over others. There may also be an execution delay imposed, which means the system will wait to act on events for a set period of time. This gives the system administrator the ability to intercept system calls as they are happening, even though the calls are really just delayed by a preconfigured timeout period. A program might try to open a TCP socket to communicate on the web, but with an execution delay imposed, certain events will be executed after the delay for debugging purposes.
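The bus behaviour described here, with worker threads, subscriber sysmods, and an optional execution delay, is sketched below; the class name NapiBus and the event names are illustrative rather than the operating system’s actual interfaces.

import queue
import threading
import time

class NapiBus:
    """Toy NAPI event bus: workspaces dispatch events, sysmods subscribe, workers execute."""

    def __init__(self, workers=4, execution_delay=0.0):
        self.execution_delay = execution_delay    # optional interception/debugging window
        self.subscribers = {}                     # event name -> list of handlers
        self.events = queue.Queue()
        for _ in range(workers):                  # a configurable number of threads
            threading.Thread(target=self._worker, daemon=True).start()

    def subscribe(self, event_name, handler):
        self.subscribers.setdefault(event_name, []).append(handler)

    def dispatch(self, event_name, payload):
        self.events.put((event_name, payload))

    def _worker(self):
        while True:
            name, payload = self.events.get()
            time.sleep(self.execution_delay)      # imposed delay before the call is acted upon
            for handler in self.subscribers.get(name, []):
                handler(payload)
            self.events.task_done()

bus = NapiBus(workers=2, execution_delay=0.1)
bus.subscribe("net.socket.open", lambda p: print("opening TCP socket to", p["host"]))
bus.dispatch("net.socket.open", {"host": "example.com", "port": 443})
bus.events.join()    # wait until the delayed event has been executed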
Some system calls result in the need for a direct communication with the caller. For example, if the user opens a file, a file handle is returned by the API method, and must be asynchronously accessed as a hypervised resource. Hypervised resources are like virtual hardware resources, except that they account for specific regions of virtual memory where the execution takes place. This means that Hypervised Workspaces gain the full performance of the underlying hardware, because after dispatching native execution events onto the NAPI Bus, the calling program can work with the created objects and resources in memory from inside the hypervisor.
System modules 44 lie at the system space layer 16, and are very similar to hardware modules in the hardware space layer 14. System space modules 44 exist to provide a bridge between hardmod drivers and the hypervisor. These modules define virtual hardware resources that can be assigned to one or more workspaces. This way the vendor also has control of how their hardware interacts with the hypervisor, which provides the best performance and experience to the user.
From the perspective of the hypervised workspace, the virtual hardware resources appear as physical hardware. In actuality, the hardmods and sysmods provide an abstraction that creates this illusion. Sysmods export virtual device objects that let the hypervisor identify new types of hardware that users can assign to each virtual machine. They also handle interaction with the driver for each device in terms of the hypervisor. This means that installed drivers open up native end-points for hardware, which lets the hypervisor gain full hardware acceleration and handle context switching appropriately.
The audio subsystem provides an abstraction for hardware audio devices on the system and includes an interface for playing audio snippets and streaming audio. It includes a more general understanding of audio hardware, and exports virtual hardware that can be attached to specific workspaces in order to provide audio support. Depending on the user’s hardware and driver configuration, the sysmod will either export its functionality through the generic audio interface, which will show up as a ‘Generic Audio Adapter’, or will export the exact hardware ID and functionality, meaning that it would show up as the vendor had intended.
The ‘serial’ module is used for mounting serial devices onto Hypervised Workspaces. Some workspace users may have older serial devices that would be connected to the COM ports, and there are some advanced users who might be connecting over serial to a switch in order to configure it for their job. The main task for any sysmod is to export a virtual hardware device to the hypervisor. This could happen for many purposes, but in this case, the serial ports on the physical hardware need to be virtualized.
The shared module gives users the ability to share folders with the virtual machine. Other virtualization software has this feature, and the approach taken here is meant to be as simple as possible while also being more reliable. That being said, the ‘shared’ sysmod still provides a virtual hardware resource to Hypervised Workspaces, except that the virtual hardware registers as an imaginary storage device. It is attached to the workspace as a named filesystem with a custom unique identifier.
With respect to the disk modules, each hypervised workspace can have one or more hard drives. These drives could be virtual IDE and SATA disks that are saved in a flat-file drive format, or entire physical disks that are mounted inside specific workspaces. In the former case, the user has the option to specify which interface or controller to use for the disk. The user might choose IDE, SATA, or SCSI in order to optimize compatibility with the guest operating system. When a workspace is powered on, the virtual drive is exposed to the virtual BIOS, and the hypervised workspace boots. In the latter case, the hypervisor will request hardware access to an entire disk, which results in an entire physical disk being exposed to the hypervisor for use in workspaces. It also allows attaching hard disk drives and solid-state drives with a software write-block imposed. This makes hypervised workspaces useful for forensic investigations and protects the integrity of disks that should not be altered by the software. The underlying operating system should also be programmed not to automatically mount or run any software on hard disks. This way there is no fear that the drive will be accidentally mounted and the evidence altered, which could result in potentially viable evidence being discarded. Users also have the option to encrypt virtual disks at the virtualization level. This means that the flat-file or physically mounted drives remain encrypted, and that reading the data requires the AES-256 encryption keys. If the user has just typed the password, it is encrypted and held in memory by the system keyring and is unavailable to other programs. Authentication methods include typing a passphrase, entering a biometric through the system’s biometric authentication provider, or demonstrating device presence. This essentially brings operating system-style boot encryption, with wireless device support, to virtual machine technology. In the case of device presence, the user might have to have their phone nearby, but may also be required to type their password before the workspace will boot, restore from hibernation, or resume state.
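The following Python sketch is offered only as a non-limiting illustration of a possible disk-module configuration record covering the options described above (flat-file versus whole physical disk, controller choice, software write-block, and AES-256 keys referenced through a keyring handle); the names VirtualDisk, Controller, and open_disk are hypothetical.

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Controller(Enum):
    IDE = "ide"
    SATA = "sata"
    SCSI = "scsi"


@dataclass
class VirtualDisk:
    source: str                      # flat-file path or a physical device node
    physical: bool = False           # True when an entire physical disk is mounted
    controller: Controller = Controller.SATA
    write_blocked: bool = False      # software write-block for forensic use
    encrypted: bool = False          # AES-256 at the virtualization level
    key_id: Optional[str] = None     # handle into the system keyring, never the key


def open_disk(disk: VirtualDisk, keyring: dict) -> str:
    """Validate a disk before exposing it to the virtual BIOS."""
    if disk.encrypted and disk.key_id not in keyring:
        raise PermissionError("re-authentication required to unlock this disk")
    mode = "read-only" if disk.write_blocked else "read-write"
    return f"exposing {disk.source} ({disk.controller.value}, {mode})"


# A forensic image attached write-blocked, unlocked via a keyring entry.
print(open_disk(VirtualDisk("/dev/sdb", physical=True, write_blocked=True,
                            encrypted=True, key_id="case-42"),
                keyring={"case-42": b"..."}))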
The CDROM module is used to mount physical CD-ROM drives inside hypervised workspaces. Virtual machines can wholly consume the hardware resource or create and share a single virtual resource. A user might share a DVD-RW drive with the guest operating system in order to write backups to a DVD-ROM. Another user might mount a disk image such as an ISO 9660 or ISO 13346 (UDF) encoded image inside the virtual drive. Besides ISO and UDF support, the module should also support raw disk images. The module should support the reverse operation of producing ISO files from CD and DVD ROMs. This is especially useful because older physical media like CD-ROM and DVD-ROM are lower capacity and are being phased out of existence due to Internet streaming. Users who wish to convert non-DRM-protected disks, or data backups, should be able to do so through the GUI.
The ‘gpu’ module provides a default abstraction and implementation of standard video hardware. When no specific drivers are available for the video card, the system can still render in a limited video mode and feed it to the hypervisor as a “Generic Video Adapter” with 16MB or 32MB of memory. Proprietary drivers can extend this module in order to better expose the features of video hardware to the hypervisor. This means that vendors can ship drivers for guest operating systems similar to those they ship for physical hosts. The ‘gpu’ system module, and modules that inherit from it, must implement a specific set of methods and export a virtual video hardware device that the hypervisor can attach to workspaces. The hardware signature of the device does not need to match the physical hardware but is the same as the underlying video card by default.
The USB module gives users the ability to attach specific USB devices to their workspaces. The default operating system HVWS (the default GUI) has all of the system’s devices attached to it, except for the devices that the system did not have modules for. This is because a default installation of The operating system would only have one guest operating system running, the main GUI. The user has the option to create additional workspaces and assign USB devices to them. The user can also auto-assign new devices based on a ruleset filter.
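As a non-limiting illustration, a ruleset filter for auto-assigning newly plugged USB devices might take the following form in Python; the field names vendor_id, product_id, and device_class, and the workspace names, are assumptions rather than part of the disclosed module.

from dataclasses import dataclass
from typing import Optional


@dataclass
class UsbDevice:
    vendor_id: int
    product_id: int
    device_class: str        # e.g. "storage", "hid", "camera"


@dataclass
class UsbRule:
    workspace: str
    vendor_id: Optional[int] = None
    product_id: Optional[int] = None
    device_class: Optional[str] = None

    def matches(self, dev: UsbDevice) -> bool:
        return ((self.vendor_id is None or self.vendor_id == dev.vendor_id) and
                (self.product_id is None or self.product_id == dev.product_id) and
                (self.device_class is None or self.device_class == dev.device_class))


def assign(dev: UsbDevice, rules: list[UsbRule], default: str = "default-hvws") -> str:
    """Return the workspace a newly plugged device should attach to."""
    for rule in rules:           # first matching rule wins
        if rule.matches(dev):
            return rule.workspace
    return default               # otherwise the default GUI workspace keeps it


rules = [UsbRule(workspace="lab-vm", device_class="storage")]
print(assign(UsbDevice(0x1234, 0x5678, "storage"), rules))   # -> "lab-vm"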
The ‘state’ module provides access to a wealth of sensor, ACPI, BIOS, and EFI states that many devices have. For example, the system might be installed on a device that has an accelerometer or other sensors. The device might have temperature, ambient light, human presence, and other kinds of sensors that would be useful to apps inside the sandbox. Some machines expose certain power states from the BIOS level to the operating system. If the user presses the power button on their laptop, guest operating systems should be able to respond to this event too. To give a better example, the device might be battery operated, and a user working in a workspace running a third-party operating system would not see that the battery had been drained unless the power states had been exposed to the virtual machine as well. The module provides a kind of catch-all for the integrated hardware that one would find on many mobile devices. Other than that, users would need to install drivers for atypical hardware.
The Network Interface Card (NIC) module provides an abstraction for both wired and wireless networking hardware. As in other virtualization systems, it lets administrators assign network cards to specific workspaces. Here again, this can occur in two major modes.
The first is the virtual level. When the user attaches a NIC to a virtual machine in ‘bridged’ mode, the card is shared between the host and the workspace (the guest) like a bus. The machine is exposed alongside the guest on the network, and it is as if the ethernet cable connected both the host and guest at the same time. These cards appear as generic interfaces of several types and use highly compatible drivers that most operating systems already have. Unless a custom sysmod is written, as well as a custom driver for the guest, the hardware will show up as a generic Ethernet or WiFi adapter, which is often all that is needed.
The second mode is physical enumeration. Just as the hypervisor can consume entire physical hard drives, it can do the same with other hardware such as a network card. The difference is that the card is solely owned by the workspace and is exposed at a low level, requiring the hardware vendor’s drivers to be installed on the guest OS. This means that all of the features of the card will be available to the user.
An advanced option in either mode is to specify the VLAN number that traffic exiting the hypervisor should be tagged with. Even though two machines might be on the same Ethernet segment, the VLAN numbers provide a logical separation that is similar to subnetworks. Hosts will ignore traffic from VLANs other than the one that the host is currently on. Users can use the Virtual Configuration Manager to create advanced network configurations and a virtual network for guest virtual machines. This takes the power of VLANs to the next level.
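One possible, purely illustrative shape for such a network attachment record, including the optional VLAN tag, is sketched below in Python; NicAttachment and NicMode are hypothetical names.

from dataclasses import dataclass
from enum import Enum
from typing import Optional


class NicMode(Enum):
    BRIDGED = "bridged"        # shared, appears as a generic adapter in the guest
    PHYSICAL = "physical"      # whole card owned by one workspace, vendor driver needed


@dataclass
class NicAttachment:
    workspace: str
    interface: str             # e.g. "eth0" or "wlan0"
    mode: NicMode = NicMode.BRIDGED
    vlan: Optional[int] = None # set when traffic leaving the hypervisor is tagged

    def validate(self) -> None:
        if self.vlan is not None and not 1 <= self.vlan <= 4094:
            raise ValueError("VLAN IDs must be between 1 and 4094")


nic = NicAttachment(workspace="dev", interface="eth0", vlan=30)
nic.validate()
print(f"{nic.interface} -> {nic.workspace} ({nic.mode.value}, VLAN {nic.vlan})")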
The RAM module allows the user to assign sizable chunks of memory to each workspace. If the user has 8GB of memory in total, that user might allocate a set number of megabytes to each machine. When this happens, the RAM module allocates the memory and handles reads and writes from the hypervisor. Sane defaults and memory limits prevent overuse of system memory that would result in crashes. It is possible to specify the maximum amount of memory for each virtual machine, but execution will pause if the workspace VM comes within a certain threshold of running out of memory. The same concept applies to full hard disks. If a required resource runs out, or reaches a threshold, execution pauses until the user frees disk space or memory.
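A minimal Python sketch of this allocate-and-pause behavior follows, assuming a hypothetical RamModule with per-workspace allocations, a host reserve, and a pause threshold; none of these names appear in the actual module.

class RamModule:
    def __init__(self, total_mb: int, reserve_mb: int = 1024):
        self.total_mb = total_mb
        self.reserve_mb = reserve_mb            # kept back for the host itself
        self.allocations: dict[str, int] = {}   # workspace -> MB granted

    def allocate(self, workspace: str, mb: int) -> None:
        in_use = sum(self.allocations.values())
        if in_use + mb > self.total_mb - self.reserve_mb:
            raise MemoryError("allocation would exceed the safe memory limit")
        self.allocations[workspace] = mb

    def should_pause(self, workspace: str, used_mb: int, threshold: float = 0.95) -> bool:
        """Pause the workspace VM when it is within the configured threshold."""
        limit = self.allocations[workspace]
        return used_mb >= limit * threshold


ram = RamModule(total_mb=8192)
ram.allocate("dev", 2048)
print(ram.should_pause("dev", used_mb=2000))    # True: execution would pause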
From the kernel perspective, any program that tries to write to sections of memory zoned to a virtualized workspace (to the hypervisor) will be denied. Sections of memory are zoned by the kernel to model its sandboxed memory. Nothing in system space should write to memory zoned for hardware space or a virtualization sandbox. Even between running programs, if a program tries to read outside of its allocated memory ranges, it will be terminated, which triggers a security event.
The system also needs modules that handle keyboard and mouse input. Although there is already a USB interface that would work for many pointing and typing devices, users should be able to select which devices map to specific workspaces. The goal of this module is to be seamless. When the user hits Ctrl-Alt-Right/Left to switch between workspaces, the default keyboard and mouse should be mapped to the workspace that is in the foreground. If the workspace is windowed, then input should go to it when the window has focus, that is, when the user clicks inside of it.
There are many kinds of HIDs, and they may also present a danger. The user should always have access to a list of every HID on the system and receive notifications when a HID is plugged in or unplugged. Certain recent attacks have involved repurposed phones that register as “Human Input Devices” but actually collect keystrokes under the hood. This is a privacy violation and not the correct path forward. However, support still must exist if the user wishes to use their phone as a keyboard or mouse.
The virtual space layer 18 is where the hypervisor sits. It’s also the outer container for the operating system sandbox. Everything that is hypervised is said to be running in the sandbox, even though workspaces are technically separate from each other. A Hypervised Workspace is a type of virtual machine that can have physical hardware attached to it through the hypervisor. In some cases, the hypervisor might have to coordinate with the workspace, updating the CPU state or register values, in order to provide physical access to the hardware. Recall that the CPU and GPU communicate with each other over the Front Side Bus.
With respect to the virtual space layer 18, the NAPI bus 46 is a component of a hypervisor 48 that runs asynchronously, accepts NAPI commands from multiple virtual machines at once, and executes them in a load-balanced or user-specific ordering. As virtual machines with virtual hardware 50 run in workspaces (sandbox instances), the hypervisor 48 is constantly context switching to meet the demands of each instance. The hypervisor 48 schedules blocks of instructions onto the CPU itself, acting as a program that quickly switches between virtual machine contexts in order to give the illusion that each virtual machine is running concurrently. Sysmods and hardmods are subscribed to these events, allowing driver vendors to write custom drivers for both The operating system and guest operating systems. When a workspace needs to run applications with native performance, it can schedule execution time with NAPI, which returns a promise of a future resource being available.
A hypervised workspace 52 is a type of virtual machine that has a bus for native execution and a driver system that allows physical hardware to be exposed to it. When a workspace boots, the system starts up into a virtual BIOS with the virtual hardware that forms the basis of the virtual machine, as represented by the virtual hardware 50. The hypervised workspace 52 or GUI itself is a special type of hypervised workspace 52 that will either use CPU instruction virtualization features like VT-x and AMD-V, or run on the processor natively if processor virtualization technology is not available.
The virtual hardware 50 resources are exposed by system modules and supported by hardware module drivers. Sysmods export hardware that can be used in the hypervisor. In order for the hardware to be valid, the module must export a named piece of hardware that has a vendor code, product code, serial number, and type. The hypervisor 48 breaks the named devices down by type and allocates the hardware into the next free virtual card slot or bus for each workspace that requires the hardware.
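As a non-limiting sketch only, the following Python fragment illustrates how a sysmod might export a named piece of virtual hardware carrying a vendor code, product code, serial number, and type, and how the hypervisor 48 could allocate it into the next free virtual slot for a workspace; the class names and field names are assumptions for illustration.

from dataclasses import dataclass
from collections import defaultdict


@dataclass(frozen=True)
class VirtualDevice:
    name: str
    vendor_code: str
    product_code: str
    serial_number: str
    device_type: str            # "audio", "nic", "gpu", "disk", ...


class Hypervisor:
    def __init__(self):
        # workspace -> device_type -> list of devices in slot order
        self._slots = defaultdict(lambda: defaultdict(list))

    def attach(self, workspace: str, dev: VirtualDevice) -> int:
        """Allocate the device into the next free slot of its type."""
        slots = self._slots[workspace][dev.device_type]
        slots.append(dev)
        return len(slots) - 1    # slot index the workspace will see


hv = Hypervisor()
audio = VirtualDevice("Generic Audio Adapter", "0x1AF4", "0x0101", "SN-001", "audio")
print(hv.attach("default-hvws", audio))   # -> 0 (first audio slot)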
As previously discussed, there are two major modes for hardware enumeration: physical hardware enumeration and virtual hardware enumeration. Hardware that has been physically enumerated is solely dedicated to a single Hypervised Workspace and requires the vendor’s drivers to be installed inside the workspace. Virtually enumerated hardware is hardware that is virtualized, that is, it can be scheduled and shared between multiple workspaces. Whether the device appears as a generic device and uses a driver that was bundled with the guest operating system, a ‘stock driver’, or instead requires the vendor’s driver to be installed in the guest, is up to the developer of the sysmod and hardmod drivers for The operating system.
With respect to the sandbox layer 28, a workspace 54 sits inside the virtualization sandbox. Multiple workspaces can be running simultaneously, and users can switch between them with the default hotkeys. Users can create additional workspaces, and edit which devices are connected to the workspace with the virtual configuration manager.
An application framework 56 is a part of the workspace or workspaces 54. It is, in effect, a large API that developers can use to build their own applications. It also defines all of the GUI components for each app, which allows developers to build hybrid applications that have both web and native components to get the best user experience and performance. The application framework 56 contains many components related to user experience, diagnostics, rendering, and debugging functionality, while the native API provides a performant solution for interacting with hardware. Workspaces 54 that are unaware of NAPI can continue running software in a plain virtualization mode, but guest drivers and software that understand NAPI give The operating system users a way to interact with The operating system from other operating systems.
One important part of building an app is ensuring that it does not have a lot of bugs. As with other compilers, the native Pythonic compiler understands the difference between DEBUG and RELEASE modes and will include a debugging symbol table when apps are built with debug mode turned on. The debugging table gives developers the ability to jump to the exact line number of the offending part of the program, and the system debugger tracks changes in memory in order to report the values of individual variables.
Besides all of the features that one would typically see, such as performance graphs, debug levels, and the values of variables, memory locations, and CPU features, the user would also be able to intercept NAPI calls (native system calls) in order to discover the source of a problem.
In one use case, the developer might add a delay to NAPI, which means that any system calls made are purposely delayed before execution. This gives the developer time to see what is occurring in slow motion. In an ordinary circumstance, the NAPI calls would have been executed in a fraction of a second, but with NAPI delay turned on, the developer can use the graphical user interface to watch commands as they are being executed, and the results of each call as results are returned. This is possible because the application framework is nested inside the virtual machine container, and from the perspective of the VM, events are still being executed in sequential order (causal time). However, from the perspective of the developer, these calls are scheduled, displayed on the screen, and executed after the delay timer reaches zero. To make this better, there are columns for each thread that the application starts, which further allows the developer to visualize NAPI calls that would ordinarily be executed in parallel. The developer can locate the exact thread that made the offending NAPI call(s), set a breakpoint, and make edits to the program at that precise spot.
Applications 58 are supposed to be very easy to build, which is why common web languages were chosen as the default languages for building GUIs in The operating system. Anyone who understands HTML and CSS should be able to start building a GUI for their app, and anyone who understands Python should be able to pick up the native app language.
Developers can write apps that use the application framework and native APIs without worrying about changing their compilation toolchain, because the native and non-native API calls are both made in the same compiled Pythonic language. The application framework is divided into native and non-native calls to make it apparent which calls are happening outside of the sandbox and which calls are being made inside the sandbox. The separation allows developers to choose the right API for each task and get the most benefit from the two APIs combined.
As for privileged calls, most of these calls are native because most privileges involve making requests outside the application sandbox. For example, the developer might open a socket with the application framework, and then open a RAW socket with the native API. The socket opened inside the sandbox can only interact with information coming in and out of the sandbox; the native socket is privileged and can intercept traffic coming out of the underlying machine. This kind of functionality would trigger re-authentication before the system would allow it.
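The contrast can be illustrated with the following non-limiting Python sketch, in which Sandbox, NativeApi, and re_authenticate are stand-in stubs defined locally so the example is self-contained; they are not the actual application framework or native API.

class Sandbox:
    def open_socket(self, host: str, port: int) -> str:
        # confined to traffic entering/leaving the sandbox
        return f"sandbox socket to {host}:{port}"


class NativeApi:
    def __init__(self, authenticated: bool = False):
        self.authenticated = authenticated

    def re_authenticate(self) -> None:
        self.authenticated = True            # stands in for the real dialog

    def open_raw_socket(self, interface: str) -> str:
        if not self.authenticated:
            raise PermissionError("RAW sockets require re-authentication")
        return f"raw socket bound to {interface}"   # privileged, below the sandbox


framework, napi = Sandbox(), NativeApi()
print(framework.open_socket("example.com", 443))
napi.re_authenticate()                        # arbiter prompts the user first
print(napi.open_raw_socket("eth0"))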
Another example is that the application framework allows apps to create files within their sandbox, and this is usually enough for most developers. However, calls can be made to the native framework that would allow for low-level disk access and drive listings beneath the hypervisor. This means that developers have the option to control the entire underlying operating system from inside the virtual machine sandbox itself.
In terms of security, the integrity of the underlying operating system relies on the authentication subsystem. As hypervised workspaces make native calls, or certain privileged calls, the system’s arbiters handle events going across the NAPI Bus, and allow or deny individual calls, and the Security Manager keeps logs of patterns of system calls in order to detect malicious patterns. That way, even if the user does allow a malicious program, and mistakenly re-authenticates, there is a second line of defense within the API itself that will find malicious patterns of system calls. Calls and patterns of calls can be persisted in the event chain for extra security, and to provide a complete execution trail for each workstation utilizing the operating system 10.
The operating system 10 may have a web engine 60 and a certificate store 62. The certificate store 62 is where the operating system keeps PKI certificates related to TLS/SSL (websites) and network-level signatures. This provides a way for the operating system to verify the connection authenticity of remote hosts. One aspect of the operating system is its ability for hosts to distribute resources among member machines, and for machines to elect a leader in a peer-to-peer topology as part of a distributed system. One feature involves asking the rest of the domain whether a certificate is a domain certificate, and how many machines agree that the certificate is legitimate. Since The operating system domains are not geographically locked, unless required for legal reasons, hosts can be anywhere in the world, which provides the domain with additional perspectives about any given server’s cryptographic signatures.
To give an example, one operating system host has an outdated certificate store and is wrongly forming TLS connections with a remote server. After joining the domain, the machine will ask the other machines in the domain about the validity of the same TLS/SSL certificate and will find that its valid status had been revoked by a trusted authority. This is one great benefit of joining a domain: machines collaborate in order to take preventive security countermeasures and react to security issues. This also means that the security of an individual machine is assessed not just from the perspective of that machine, but from the machines around it.
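A non-limiting sketch of such a domain consensus check follows; the certificate_trusted function, the peer-query callable, and the quorum value are illustrative assumptions rather than part of the disclosed protocol.

from typing import Callable, Iterable


def certificate_trusted(fingerprint: str,
                        peers: Iterable[str],
                        ask_peer: Callable[[str, str], bool],
                        quorum: float = 0.5) -> bool:
    """Return True when more than `quorum` of reachable peers vouch for the cert."""
    votes = [ask_peer(peer, fingerprint) for peer in peers]
    return bool(votes) and sum(votes) / len(votes) > quorum


# Stub peer answers standing in for real domain queries.
answers = {"host-a": False, "host-b": False, "host-c": True}
ok = certificate_trusted("ab:cd:ef", answers,
                         ask_peer=lambda peer, fp: answers[peer])
print(ok)   # False: most of the domain reports the certificate was revoked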
A package manager 64 allows users to install applications found in either a store-like interface or by hand. In either case, the user is shown a summary of what is to be installed and removed before each transaction. If the app contains an unsigned service, the user must answer a re-authentication dialog. The package manager 64 keeps track of each application’s permission requirements and has an API for breaking installed applications down by permission category. The API has methods that make it easy to spot malware and aid the user in identifying apps that have more privileges or access than needed.
Each application has JSON metadata associated with it that is returned as a Pythonic dictionary to compiled or interpreted programs accessing the API. Users can write scripts to install programs or build compiled software tools for working with package metadata. The packages and metadata are signed by the package distributor and repository. This forms a public-key infrastructure between developers and the repositories pushing their software.
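By way of illustration only, the following Python sketch shows package metadata parsed from JSON into a dictionary and grouped by permission category in the manner described above; the metadata fields and permission names are assumptions.

import json

raw = """{
  "name": "clock",
  "version": "1.2.0",
  "signature": "base64...",
  "permissions": ["sandbox.files", "napi.audio"],
  "services": []
}"""

meta: dict = json.loads(raw)        # returned as a Pythonic dictionary

def by_permission_category(packages: list[dict]) -> dict[str, list[str]]:
    """Group installed apps by permission so over-privileged ones stand out."""
    index: dict[str, list[str]] = {}
    for pkg in packages:
        for perm in pkg.get("permissions", []):
            index.setdefault(perm, []).append(pkg["name"])
    return index

print(by_permission_category([meta]))
# {'sandbox.files': ['clock'], 'napi.audio': ['clock']}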
System administrators can subscribe to additional repositories, which makes software from those repositories installable in both the CLI and GUI and adds the repository’s public-key certificates to the system certificate store. Domain policy might forbid adding additional repositories or lock the machine into using a single domain repository.
A native framework 66 is a major component of the application framework 56. It is separate because the native framework 66 involves transactions that work outside of the sandboxed area and is used to run the operating system from inside the sandbox. In terms of permissions required to interact with NAPI, the operating system demands that each application or ‘app’ demonstrate user, system, or hardware privileges, depending on the situation. In each case, users may perform many transactions for opening files, TCP sockets, and querying information from services. However, making edits to the hypervisor, controlling the running status of other workspaces, opening RAW sockets, putting network interface cards in monitor mode, and controlling the running status of system services requires system-level privileges. Altering anything in hardware space requires prior authorization. Users are not meant to log in as the Hardware User, but it is possible if the user answers a low-level re-authentication dialog and demonstrates hardware ownership. Typically, this would not happen unless the system could not boot into a graphical session (for debugging). Apps can be granted continual NAPI access through services. A service is a system space concept: a continually running background process that may or may not start with the computer. In order to make privileged NAPI calls, the system must have the user re-authenticate, and their user account must be marked with system or hardware privileges. In either case, the user might start the app as a normal user and then answer a re-authentication dialog from the system/hardware arbiters before being granted access to a greater privilege tier.
Users can create sockets and mutexes and modify owned files. Users can also modify permissions on owned resources, mount external or removable media, view running processes and open sockets, view firewall rules and domain policies, and collaborate with domain users. In general, users can perform most common tasks on the system. There is a healthy amount of having users answer dialogs for security reasons, but this is a must.
A system space tier allows users to alter firewall rules, access more parts of the filesystem related to system space, take ownership of files owned by basic user accounts, open raw sockets with the ability to monitor network traffic, add or remove trusted certificates, and install system modules and extensions to the hypervisor. Hardware users can flash hardware, update the BIOS, flash GPU firmware, install drivers and hardware modules, view and clear logs about devices that have ever been plugged into the system, access every file on the system or on unencrypted removable media, change the encryption keys of the primary volume, change the boot menu to include other operating systems or replace it, swap the default kernel, and add cryptographic modules or alternate sources of entropy.
From an attacker’s perspective, this means that there are three tiers of persistence in the operating system 10:
User Sandbox
System Space
Hardware Space
In one embodiment, the goal is to follow the least-privilege philosophy while providing as many features to normal users as possible. Most users will find everything they need with a standard user account, including the ability to create their own hypervised workspaces 52 based on resources that have been allocated to them by a system administrator. System administrators can manage user accounts and take ownership of normal users’ workspaces and files. Administrators can control most aspects of the system that would not break the installation or allow for permanent persistence of malware. Ordinary users are allowed to install apps bundled with services, provided those services are signed. Unsigned services require authorization by a system user or administrator, re-authentication, and a timed countdown before the install can be granted. This is to warn the person that the program will try to start at boot and to prevent a bot from clicking ‘okay’ on the dialog. Furthermore, NAPI commands, and the execution of the querying program, are blocked until the user grants or denies the question. This prevents malicious developers from using low-level dialogs to distract the user from being able to stop another NAPI transaction. Imagine if a dialog popped up and, meanwhile, the program was sending your files to another computer, all while you were deciding whether to grant access in the first place.
With respect to the applications 58, ‘Apps’ are programs that run in the operating system sandbox by default and may escalate privileges into system or hardware space with the permission of a system or hardware user and one or more security demonstrations. Stock apps include applications that every operating system should have, like a calculator, file browser, terminal, clock, image viewer, media player, audio recorder, flashlight app, AM/FM radio, screenshot tool, and so on. These applications are all bundled with the operating system and take up very few resources. Stock apps follow the same rules as normal applications, except that they cannot be uninstalled without system privileges. Stock apps are not meant to be removed by ordinary users, as they are core to The operating system experience, or critical. For example, if the user were to uninstall the default software keyboard, they might be unable to type on a touchscreen device and would be required to use a physical keyboard. It might not sound critical, but phone users may not be able to attach a keyboard.
These apps are all core to the operating system experience and are displayed alongside other applications that the user has installed. If the user has system permissions, they should be able to remove these applications and swap them out for others. However, the requirement of administrator permissions is justified, because users might remove their terminal or file browser, which would present a problem. Users can spin up new hypervised workspaces 52 in order to get these apps back, as they are found inside the sandbox. If the user were to break the graphical components of the operating system with system-level permissions, then those components would need to be reinstalled from the hardware space command interpreter.
Hypervised workspaces 52 are a concept similar to a virtual machine, except with a native bus and interface for running low-level operating system operations. A system space arbiter 53 and the hardware space arbiter 41 intercept NAPI events as they flow across the bus in order to ask the user for re-authentication or to check that the context is correct for the commands to be run. That is, more generally, each of the native interface as represented by the native framework 66, the system space arbiter 53, the hardware space arbiter 41, and the boot mode arbiter 27, as appropriate, is configured to intercept the dispatched event for authentication and context check. Workspaces 54 represent sandboxed components of the operating system with the graphical user interface, application framework, and software keyboard.
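A non-limiting Python sketch of an arbiter intercepting a dispatched NAPI event for authentication and context check follows; the Arbiter and NapiCall names, the tier labels, and the re-authentication callback are illustrative assumptions rather than the actual arbiter implementation.

from dataclasses import dataclass


@dataclass
class NapiCall:
    name: str
    tier: str           # "user", "system", or "hardware"
    workspace: str


class Arbiter:
    def __init__(self, space: str, reauthenticate):
        self.space = space                  # "system" or "hardware"
        self.reauthenticate = reauthenticate

    def intercept(self, call: NapiCall, context_ok: bool) -> bool:
        if call.tier != self.space:
            return True                     # not this arbiter's concern
        if not context_ok:
            raise RuntimeError(f"{self.space} arbiter rejected {call.name}: bad context")
        if not self.reauthenticate(call):
            raise PermissionError(f"{call.name} denied: re-authentication failed")
        return True


sys_arbiter = Arbiter("system", reauthenticate=lambda call: True)  # stub dialog
print(sys_arbiter.intercept(NapiCall("firewall.edit", "system", "dev"), context_ok=True))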
Hypervised workspaces 52 are different from virtual machines because they include a native interface for controlling the entire underlying operating system. NAPI allows users to authenticate inside the sandbox in order to control the operating system underneath. The NAPI commands are broken up by tier in order to provide the most granularity and the most permission options for users. Normal user accounts without system or hardware tier privilege endorsements cannot run any NAPI commands that would break the system or pose a privacy risk to other users. For example, the user could install tasks that run on a schedule, but not system services, as those start at boot and affect other users of the system.
The hypervisor 48 has a bus called the NAPI Bus 46, which runs on another thread and accepts NAPI system calls from workspaces that dispatch NAPI events onto the bus. The events are processed asynchronously by the lower layers of the operating system, and a result is returned to the requesting program after the API call has been executed. These calls are translated by the system and hardware space modules into low-level functionality and logged by the Security Manager hardmod. When a synchronous resource is requested, the result of the call might be memory allocated or a file descriptor that the calling workspace can use to perform operations outside of the sandbox. Whether these native operations are privileged or not, the resource is allocated and exposed to the hypervisor and workspace. The calling program, running nested inside the workspace, can work with the resource through the native framework 66 at speed. This means that certain hardware devices and software that cannot be virtualized can be wholly consumed by a workspace. Hardware or software components that support scheduling, or the sharing of resources between workspaces, can be exposed to the hypervisor and attached to multiple workspaces simultaneously.
The order of execution or performance of the methods and data flows illustrated and described herein is not essential, unless otherwise specified. That is, elements of the methods and data flows may be performed in any order, unless otherwise specified, and the methods may include more or fewer elements than those disclosed herein. For example, it is contemplated that executing or performing a particular element before, contemporaneously with, or after another element are all possible sequences of execution. While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is, therefore, intended that the appended claims encompass any such modifications or embodiments.