US11494220B2 - Scalable techniques for data transfer between virtual machines - Google Patents


Info

Publication number
US11494220B2
Authority
US
United States
Prior art keywords
virtual machine
virtual
data
shared
virtual memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/810,400
Other versions
US20200201668A1 (en)
Inventor
Ben-Zion Friedman
Eliezer Tamir
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp
Priority to US16/810,400
Publication of US20200201668A1
Application granted
Publication of US11494220B2
Status: Active
Anticipated expiration

Abstract

Scalable techniques for data transfer between virtual machines (VMs) are described. In an example embodiment, an apparatus may include circuitry and memory storing instructions for execution by the circuitry to assign each one of a plurality of shared virtual memory spaces to a respective one of a plurality of virtual machines, wherein a first shared virtual memory space of the plurality of shared virtual memory spaces is assigned to a first virtual machine of the plurality of virtual machines, write, by the first virtual machine to the first shared virtual memory space, data to be provided to a second virtual machine of the plurality of virtual machines, and read, by the second virtual machine, the data in the first shared virtual memory space.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of, claims the benefit of and priority to previously filed U.S. patent application Ser. No. 14/998,361 filed Dec. 24, 2015, entitled “Scalable techniques for data transfer between virtual machines”, which is hereby incorporated by reference in its entirety.
This application relates to International Patent Application entitled “SCALABLE TECHNIQUES FOR DATA TRANSFER BETWEEN VIRTUAL MACHINES,” International Patent Application Serial Number PCT/US16/63692, filed Nov. 23, 2016. The contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD
Embodiments herein generally relate to virtual machine management, memory allocation, input/output (I/O), and networking.
BACKGROUND
In a variety of contexts, it may be desirable that a host be configured to support the transfer of data between virtual machines (VMs) running on that host. For example, providing inter-VM data transfer support may enable the implementation of a security appliance VM that inspects changes to filesystem data and interposes itself between a client VM and one or more storage resources, such as local direct-attached storage, network-attached storage (NAS), and/or storage area network (SAN) storage resources. Such a security appliance VM might be configured, for example, to prevent malware from being loaded from storage and/or to prevent the client VM from storing known malicious content to the filesystem.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an embodiment of a first operating environment.
FIG. 2 illustrates an embodiment of a second operating environment.
FIG. 3 illustrates an embodiment of a third operating environment.
FIG. 4 illustrates an embodiment of an assignment scheme.
FIG. 5 illustrates an embodiment of an apparatus.
FIG. 6 illustrates an embodiment of a logic flow.
FIG. 7 illustrates an embodiment of a storage medium.
FIG. 8 illustrates an embodiment of a computing architecture.
FIG. 9 illustrates an embodiment of a communications architecture.
DETAILED DESCRIPTION
Various embodiments may be generally directed to scalable techniques for data transfer between virtual machines (VMs). In an example embodiment, an apparatus may comprise circuitry, a virtual machine management component for execution by the circuitry to define a plurality of public virtual memory spaces and assign each one of the plurality of public virtual memory spaces to a respective one of a plurality of VMs including a first VM and a second VM, and a virtual machine execution component for execution by the circuitry to execute a first virtual machine process corresponding to the first VM and a second virtual machine process corresponding to the second VM, the first virtual machine process to identify data to be provided to the second VM by the first VM and provide the data to the second VM by writing to a public virtual memory space assigned to the first VM. Other embodiments are described and claimed.
Various embodiments may comprise one or more elements. An element may comprise any structure arranged to perform certain operations. Each element may be implemented as hardware, software, or any combination thereof, as desired for a given set of design parameters or performance constraints. Although an embodiment may be described with a limited number of elements in a certain topology by way of example, the embodiment may include more or fewer elements in alternate topologies as desired for a given implementation. It is worthy to note that any reference to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrases "in one embodiment," "in some embodiments," and "in various embodiments" in various places in the specification are not necessarily all referring to the same embodiment.
FIG. 1 illustrates an example of anoperating environment100 that may be representative of various embodiments. Inoperating environment100,circuitry102 may run a plural number N of virtual machines108-1 to108-N. In some embodiments, each of virtual machines108-1 to108-N may comprise a separate respective operating system (OS) running oncircuitry102. In various embodiments,circuitry102 may comprise circuitry of a processor or logic device. In some embodiments,circuitry102 may be communicatively coupled withmemory104, which may generally comprise machine-readable or computer-readable storage media capable of storing data. In various embodiments,circuitry102 may be communicatively coupled with some or all ofmemory104 via a bus110. In some embodiments, some or all ofmemory104 may be included on a same integrated circuit ascircuitry102. In various embodiments, some or all ofmemory104 may be disposed on an integrated circuit or other medium, for example a hard disk drive, that is external to the integrated circuit ofcircuitry102. The embodiments are not limited in this context.
In some embodiments, a host 106 may generally be responsible for creating and managing virtual machines that are implemented using circuitry 102. In various embodiments, host 106 may comprise a host OS, and each of virtual machines 108-1 to 108-N may comprise a respective guest OS running inside that host OS. In some embodiments, host 106 may comprise a hypervisor. In various embodiments, host 106 may generally be responsible for allocating memory resources for use by virtual machines 108-1 to 108-N. In some embodiments, host 106 may allocate memory resources in accordance with a virtual memory scheme. In various embodiments, according to such a virtual memory scheme, host 106 may associate a set of virtual memory resources 112 with a set of physical memory resources 114 comprised in memory 104. In some embodiments, host 106 may map virtual memory addresses that correspond to virtual memory resources 112 to physical memory addresses that correspond to physical memory resources 114. In various embodiments, host 106 may maintain memory mapping information 116 that identifies the mappings that it has defined between particular virtual memory resources and particular physical memory resources. In some embodiments, host 106 may implement a paged virtual memory scheme, according to which it may allocate virtual memory resources 112 in units of virtual memory pages. In various such embodiments, memory mapping information 116 may be comprised in a page table that identifies mappings between pages of virtual memory and particular physical memory resources 114. The embodiments are not limited in this context.
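The paged mapping described above can be illustrated with a small sketch. The page size, names, and mapping values are assumptions for illustration only; the patent does not prescribe any particular page-table implementation:

```python
# Minimal sketch of a paged virtual memory mapping in the spirit of memory
# mapping information 116: virtual page numbers map to physical frame numbers.
# The 4 KiB page size and all names here are illustrative assumptions.
PAGE_SIZE = 4096

# Page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address: int) -> int:
    """Translate a virtual address to a physical address via the page table."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[vpn]  # raises KeyError if the page is unmapped
    return frame * PAGE_SIZE + offset

# An address in virtual page 1 resolves into physical frame 3.
physical = translate(1 * PAGE_SIZE + 100)
```

A host enforcing per-VM virtual memory spaces would simply give each VM a page table covering only its own assigned pages, so an unmapped access fails.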
In some embodiments, host 106 may allocate respective virtual memory spaces 118-1 to 118-N to virtual machines 108-1 to 108-N. In various embodiments, each of virtual memory spaces 118-1 to 118-N may comprise a respective subset of the virtual memory resources 112 that host 106 may associate with physical memory resources 114. In some embodiments in which host 106 implements a paged virtual memory scheme, each of virtual memory spaces 118-1 to 118-N may comprise a respective set of one or more pages of virtual memory. In various embodiments, for each of virtual memory spaces 118-1 to 118-N, memory mapping information 116 may identify a respective physical memory space 120-1 to 120-N. In some embodiments, each of physical memory spaces 120-1 to 120-N may comprise a set of physical memory resources that correspond to the set of virtual memory resources comprised in the virtual memory space that maps to that physical memory space. The embodiments are not limited in this context.
In various embodiments, host 106 may generate and/or maintain memory allocation information 122. In some embodiments, memory allocation information 122 may generally comprise information that host 106 may use to track the various virtual memory spaces that it may define and/or to track the various virtual machines to which it may assign such virtual memory spaces. In some embodiments, memory allocation information 122 may include information indicating the respective particular sets of virtual memory resources 112 comprised in each of virtual memory spaces 118-1 to 118-N. In various embodiments, memory allocation information 122 may include information indicating the respective virtual machines 108-1 to 108-N to which each of virtual memory spaces 118-1 to 118-N has been assigned. In some embodiments, each of virtual machines 108-1 to 108-N may only be permitted to access virtual memory resources comprised within its respective assigned virtual memory space as specified by memory allocation information 122. The embodiments are not limited in this context.
It is worthy of note that in various embodiments, circuitry 102 may include circuitry of multiple devices. For example, in some embodiments, circuitry 102 may comprise circuitry of multiple processors or logic devices. In various embodiments, a given virtual machine may run on more than one such processor or logic device at once. In some embodiments in which circuitry 102 is implemented using circuitry of multiple devices, those multiple devices may be substantially collocated. For example, in various embodiments, circuitry 102 may comprise circuitry of multiple processors of a same server. In other embodiments, circuitry 102 may comprise circuitry of respective processors/logic devices of multiple different servers. In some such embodiments, virtual machines running on the respective processors/logic devices of the various servers may be networked using network connectivity between those servers. The embodiments are not limited in this context.
FIG. 2 illustrates an example of an operating environment 200 that may be representative of various embodiments. In operating environment 200, virtual memory spaces 218-1 and 218-2 may be defined that comprise respective sets of virtual memory resources 212. In some embodiments, the set of virtual memory resources 212 comprised in virtual memory space 218-1 may map to a set of physical memory resources 214 comprised in a physical memory space 220-1. In various embodiments, the set of virtual memory resources 212 comprised in virtual memory space 218-2 may map to a set of physical memory resources 214 comprised in a physical memory space 220-2. In some embodiments, virtual memory space 218-1 may be assigned to a virtual machine 208-1, and virtual memory space 218-2 may be assigned to a virtual machine 208-2. In various embodiments, virtual machine 208-1 may be permitted to access virtual memory resources comprised in virtual memory space 218-1 but not virtual memory resources comprised in virtual memory space 218-2, and virtual machine 208-2 may be permitted to access virtual memory resources comprised in virtual memory space 218-2 but not virtual memory resources comprised in virtual memory space 218-1.
In some embodiments, virtual machine 208-1 may elect to write data 224 to memory. In various embodiments, virtual machine 208-1 may write data 224 to virtual memory locations comprised in virtual memory space 218-1, and as a result, data 224 may be stored in physical memory resources comprised within physical memory space 220-1. In some embodiments, it may be desirable that virtual machine 208-2 be provided with data 224. However, in various embodiments, virtual machine 208-2 may not be permitted to access virtual memory resources comprised in virtual memory space 218-1, and thus may be unable to retrieve data 224 from physical memory space 220-1. In some such embodiments, virtual machine 208-1 may not be permitted to access virtual memory resources comprised in virtual memory space 218-2, and thus may be unable to store data 224 within physical memory resources of the physical memory space 220-2 that is accessible to virtual machine 208-2 via virtual memory space 218-2. The embodiments are not limited to this example.
FIG. 3 illustrates an example of an operating environment 300 that may be representative of various embodiments. More particularly, operating environment 300 may be representative of the implementation of a mailbox-based scheme for supporting data transfer between virtual machines. In operating environment 300, respective sets of virtual memory resources may be designated for use as mailboxes 326-1 and 326-2. In some embodiments, mailbox 326-1 may comprise a virtual memory space that is specifically designated for use by virtual machine 208-1 to provide data to virtual machine 208-2. In various embodiments, mailbox 326-2 may comprise a virtual memory space that is specifically designated for use by virtual machine 208-2 to provide data to virtual machine 208-1. In some embodiments, only virtual machine 208-1 may be permitted to write to mailbox 326-1, and only virtual machine 208-2 may be permitted to read any data that virtual machine 208-1 may write to mailbox 326-1. In various embodiments, only virtual machine 208-2 may be permitted to write to mailbox 326-2, and only virtual machine 208-1 may be permitted to read any data that virtual machine 208-2 may write to mailbox 326-2. The embodiments are not limited in this context.
In some embodiments, in order to provide data 224 to virtual machine 208-2, virtual machine 208-1 may write data 224 to virtual memory resources comprised in mailbox 326-1. In various embodiments, the virtual memory resources of mailbox 326-1 may map to physical memory resources comprised in a physical memory space 320. In some embodiments, when virtual machine 208-1 writes data 224 to virtual memory resources comprised in mailbox 326-1, data 224 may be stored in physical memory resources comprised within physical memory space 320. In various embodiments, the virtual memory resources of mailbox 326-2 may map to physical memory resources comprised in a physical memory space other than physical memory space 220-1, physical memory space 220-2, or physical memory space 320. The embodiments are not limited in this context.
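The mailbox scheme above, one dedicated channel per ordered (writer, reader) pair, can be sketched as follows. The VM names and helper functions are assumptions for illustration:

```python
# Sketch of the mailbox scheme of FIG. 3: one dedicated mailbox per ordered
# (writer, reader) pair of VMs. Each mailbox is writable only by its
# designated writer and readable only by its designated reader.
mailboxes: dict[tuple[str, str], bytes] = {}

def write_mailbox(writer: str, reader: str, data: bytes) -> None:
    # Only the designated writer of this (writer, reader) mailbox stores into it.
    mailboxes[(writer, reader)] = data

def read_mailbox(writer: str, reader: str) -> bytes:
    # Only the designated reader retrieves from it.
    return mailboxes[(writer, reader)]

# vm1 provides data to vm2 via its dedicated mailbox (cf. mailbox 326-1);
# the reverse direction would use a separate mailbox (cf. mailbox 326-2).
write_mailbox("vm1", "vm2", b"data 224")
received = read_mailbox("vm1", "vm2")
```

Note that every directed pair needs its own entry, which is exactly what makes the scheme grow quadratically, as the following sections quantify.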
FIG. 4 illustrates an example of an assignment scheme 400. Assignment scheme 400 may be representative of a generalization of the mailbox-based scheme discussed above in reference to operating environment 300 of FIG. 3. According to assignment scheme 400, a pool of mailboxes is defined, each of which may correspond to a different respective set of virtual memory resources. The pool of mailboxes includes a respective dedicated mailbox for each possible combination of data transferor and data transferee with respect to a pool of N virtual machines VM #1 to VM #N. Each virtual machine is assigned a set of N−1 mailboxes, to each of which it may write data to be provided to a respective one of the N−1 other virtual machines in the pool. Each virtual machine is able to read data from each of another set of N−1 mailboxes, each of which may be written to by a respective one of the N−1 other virtual machines in the pool in order to provide data to that virtual machine.
Each row of mailboxes in FIG. 4 comprises the mailboxes to which a given virtual machine is able to write. For example, the first row comprises the N−1 mailboxes to which VM #1 is able to write, the second row comprises the N−1 mailboxes to which VM #2 is able to write, and so forth. Each column of mailboxes in FIG. 4 comprises the mailboxes from which a given virtual machine is able to read. For example, the first column comprises the N−1 mailboxes from which VM #1 is able to read, the second column comprises the N−1 mailboxes from which VM #2 is able to read, and so forth. The pool of mailboxes in FIG. 4 is numbered in ascending order, and from left to right in row-wise fashion. For example, the first row comprises mailboxes 1 to N−1, the second row comprises mailboxes N to 2*(N−1), and so forth. The last mailbox in the pool, which is highlighted as element 402, is mailbox N*(N−1). Thus, a total of N*(N−1) mailboxes are required to implement assignment scheme 400 for a pool of N virtual machines. As such, according to assignment scheme 400, the number of required mailboxes increases as the square of the number of virtual machines in the pool.
In some embodiments, each mailbox in FIG. 4 may correspond to a respective virtual memory buffer of size M. In various embodiments, the total amount of virtual memory space M_TOT that is required to house the various mailboxes of the mailbox pool may be equal to M*N*(N−1), and thus M_TOT may increase in proportion to the square of the number of virtual machines N. In various embodiments, there may be a minimum permitted value of the buffer size M. For example, in some embodiments, the minimum permitted buffer size may be 4 kilobytes. In some embodiments, for larger values of N, the value of M_TOT may exceed the amount of virtual memory space that can be allocated to the mailbox pool without negatively impacting performance. In various embodiments, assignment scheme 400 may thus not be feasibly scalable for implementation in conjunction with larger virtual machine pools.
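The quadratic growth of M_TOT = M*N*(N−1) is easy to make concrete. The pool size N = 100 below is an illustrative assumption, paired with the 4-kilobyte minimum buffer size mentioned above:

```python
# Mailbox count and total memory for assignment scheme 400:
# one mailbox per ordered pair of distinct VMs, each of buffer size M.
def mailbox_count(n: int) -> int:
    return n * (n - 1)

def total_mailbox_memory(n: int, m: int) -> int:
    # M_TOT = M * N * (N - 1), quadratic in the number of VMs N.
    return m * mailbox_count(n)

# With 100 VMs and 4 KiB buffers, the pool already needs 9,900 mailboxes
# and about 40 MB of virtual memory just for mailboxes.
count = mailbox_count(100)                # 9900
m_tot = total_mailbox_memory(100, 4096)   # 40550400 bytes
```

Doubling the pool to 200 VMs roughly quadruples M_TOT, which is the scalability problem the outbox scheme below addresses.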
FIG. 5 illustrates an example of an apparatus 500 that may implement one or more scalable techniques for data transfer between virtual machines in some embodiments. According to various such techniques, a pool of N “outboxes” may be defined for a pool of N virtual machines, and each of the N outboxes may be assigned to a respective one of the N virtual machines. As shown in FIG. 5, apparatus 500 comprises multiple elements including circuitry 502, memory 504, and storage 544. The embodiments, however, are not limited to the type, number, or arrangement of elements shown in this figure.
In some embodiments, apparatus 500 may comprise circuitry 502. Circuitry 502 may be arranged to execute one or more software or firmware implemented modules or components, which may include a virtual machine management component 506 and a virtual machine execution component 507. In various embodiments, circuitry 502 may comprise circuitry of a processor or logic device, such as a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, an x86 instruction set compatible processor, a processor implementing a combination of instruction sets, a multi-core processor such as a dual-core processor or dual-core mobile processor, or any other microprocessor or central processing unit (CPU). In some embodiments, circuitry 502 may comprise circuitry of a dedicated processor, such as a controller, a microcontroller, an embedded processor, a chip multiprocessor (CMP), a co-processor, a digital signal processor (DSP), a network processor, a media processor, an input/output (I/O) processor, a media access control (MAC) processor, a radio baseband processor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device (PLD), and so forth. In various embodiments, circuitry 502 may be implemented using any of various commercially available processors, including, without limitation, AMD® Athlon®, Duron® and Opteron® processors; ARM® application, embedded and secure processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; Intel® Atom®, Celeron®, Core (2) Duo®, Core i3, Core i5, Core i7, Itanium®, Pentium®, Xeon®, Xeon Phi® and XScale® processors; and similar processors. The embodiments are not limited in this context.
In various embodiments, apparatus 500 may comprise or be arranged to communicatively couple with memory 504. Memory 504 may be implemented using any machine-readable or computer-readable media capable of storing data, including both volatile and non-volatile memory. For example, memory 504 may include read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, or any other type of media suitable for storing information. It is worthy of note that some portion or all of memory 504 may be included on the same integrated circuit as circuitry 502, or alternatively some portion or all of memory 504 may be disposed on an integrated circuit or other medium, for example a hard disk drive, that is external to the integrated circuit of circuitry 502. Although memory 504 is comprised within apparatus 500 in FIG. 5, memory 504 may be external to apparatus 500 in some embodiments. The embodiments are not limited in this context.
In various embodiments, apparatus 500 may comprise storage 544. Storage 544 may be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In embodiments, storage 544 may include technology to increase the storage performance and enhanced protection for valuable digital media when multiple hard drives are included, for example. Further examples of storage 544 may include a hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of DVD devices, a tape device, a cassette device, or the like. The embodiments are not limited in this context.
In some embodiments, virtual machine execution component 507 may be executed by circuitry 502 to run one or more virtual machines. In various embodiments, virtual machine execution component 507 may be executed by circuitry 502 to instantiate and execute a respective virtual machine process for each such virtual machine. In the example of FIG. 5, virtual machine execution component 507 may execute a virtual machine process 508-1 that corresponds to a first virtual machine and a virtual machine process 508-2 that corresponds to a second virtual machine. In some embodiments, virtual machine process 508-1 may correspond to virtual machine 208-1 of FIGS. 2 and 3, and virtual machine process 508-2 may correspond to virtual machine 208-2 of FIGS. 2 and 3. The embodiments are not limited in this context.
In various embodiments, virtual machine management component 506 may generally be responsible for allocating memory resources for use by the virtual machine processes that may be instantiated and executed by virtual machine execution component 507. In some embodiments, virtual machine management component 506 may allocate memory resources in accordance with a virtual memory scheme. In various embodiments, according to such a virtual memory scheme, virtual machine management component 506 may associate a set of virtual memory resources 512 with a set of physical memory resources 514 comprised in memory 504. In some embodiments, virtual machine management component 506 may map virtual memory addresses that correspond to virtual memory resources 512 to physical memory addresses that correspond to physical memory resources 514. In various embodiments, virtual machine management component 506 may maintain memory mapping information 516 that identifies the mappings that it has defined between particular virtual memory resources and particular physical memory resources. In some embodiments, virtual machine management component 506 may implement a paged virtual memory scheme, according to which it may allocate virtual memory resources 512 in units of virtual memory pages. In various such embodiments, memory mapping information 516 may be comprised in a page table that identifies mappings between pages of virtual memory and particular physical memory resources 514. The embodiments are not limited in this context.
In some embodiments, virtual machine management component 506 may define a plurality of private virtual memory spaces, and may assign each one of the plurality of private virtual memory spaces to a respective one of a plurality of virtual machines. In various embodiments, each private virtual memory space may be accessible only to the virtual machine to which it is assigned. In some embodiments, each such private virtual memory space may comprise a respective subset of the virtual memory resources 512 that virtual machine management component 506 may associate with physical memory resources 514. In various embodiments in which virtual machine management component 506 implements a paged virtual memory scheme, each private virtual memory space may comprise a respective set of one or more pages of virtual memory. In some embodiments, for each private virtual memory space, memory mapping information 516 may identify a respective physical memory space. In various embodiments, each such physical memory space may comprise a set of physical memory resources that correspond to the set of virtual memory resources comprised in the virtual memory space that maps to that physical memory space. The embodiments are not limited in this context.
In some embodiments, virtual machine management component 506 may define a private virtual memory space 518 and assign it to the virtual machine corresponding to virtual machine process 508-1. In various embodiments, private virtual memory space 518 may only be accessible to virtual machine process 508-1. In some embodiments, the virtual memory resources comprised in private virtual memory space 518 may map to physical memory resources comprised in a physical memory space 520. In various embodiments, memory mapping information 516 may include information indicating that private virtual memory space 518 corresponds to physical memory space 520. The embodiments are not limited in this context.
In some embodiments, virtual machine management component 506 may define a plurality of public virtual memory spaces, and may assign each one of the plurality of public virtual memory spaces to a respective one of a plurality of virtual machines. In various embodiments, each such public virtual memory space may comprise a virtual memory space for use by the virtual machine to which it is assigned as an “outbox” in which to store data to be provided to one or more other virtual machines. In some embodiments, each such public virtual memory space may be writable by the virtual machine to which it is assigned, and may be readable by each other one of the plurality of virtual machines. In various embodiments, each such public virtual memory space may comprise a respective subset of the virtual memory resources 512 that virtual machine management component 506 may associate with physical memory resources 514. In some embodiments in which virtual machine management component 506 implements a paged virtual memory scheme, each public virtual memory space may comprise a respective set of one or more pages of virtual memory. In various embodiments, for each public virtual memory space, memory mapping information 516 may identify a respective physical memory space. In some embodiments, each such physical memory space may comprise a set of physical memory resources that correspond to the set of virtual memory resources comprised in the virtual memory space that maps to that physical memory space. The embodiments are not limited in this context.
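The outbox scheme just described can be sketched alongside the mailbox count it replaces. The VM names and helper functions are illustrative assumptions:

```python
# Sketch of the outbox scheme: each VM gets a single public virtual memory
# space ("outbox") writable only by its owner and readable by every other VM.
# A pool of N VMs therefore needs N outboxes instead of N * (N - 1) mailboxes.
def outbox_count(n: int) -> int:
    return n                 # linear in the pool size

def mailbox_count(n: int) -> int:
    return n * (n - 1)       # quadratic, per assignment scheme 400

outboxes = {vm: b"" for vm in ("vm1", "vm2", "vm3")}

def write_outbox(owner: str, data: bytes) -> None:
    outboxes[owner] = data   # only the owner writes its outbox

def read_outbox(owner: str, reader: str) -> bytes:
    assert reader != owner   # any *other* VM may read it
    return outboxes[owner]

write_outbox("vm1", b"data 524")
seen_by_vm2 = read_outbox("vm1", "vm2")
savings = mailbox_count(100) - outbox_count(100)   # 9800 fewer spaces at N=100
```

Because every other VM can read an owner's outbox, data meant for one specific recipient must be protected, which is why the encryption described next is needed.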
In various embodiments, virtual machine management component 506 may define a public virtual memory space 526 and assign it to the virtual machine corresponding to virtual machine process 508-1. In some embodiments, public virtual memory space 526 may comprise a virtual memory space for use by the virtual machine corresponding to virtual machine process 508-1 as an outbox in which to store data to be provided to one or more other virtual machines. In various embodiments, public virtual memory space 526 may be writable by virtual machine process 508-1 and may be readable by virtual machine process 508-2. In some embodiments, the virtual memory resources comprised in public virtual memory space 526 may map to physical memory resources comprised in a physical memory space 528. In various embodiments, memory mapping information 516 may include information indicating that public virtual memory space 526 corresponds to physical memory space 528. The embodiments are not limited in this context.
In some embodiments, virtual machine management component 506 may generate and/or maintain memory allocation information 522. In some embodiments, memory allocation information 522 may generally comprise information that virtual machine management component 506 may use to track the various private and public virtual memory spaces that it may define and/or to track the various virtual machines to which it may assign such virtual memory spaces. In various embodiments, memory allocation information 522 may include information indicating the respective particular sets of virtual memory resources 512 comprised in private virtual memory space 518 and public virtual memory space 526. In some embodiments, memory allocation information 522 may include information indicating that private virtual memory space 518 and public virtual memory space 526 have been assigned to the virtual machine corresponding to virtual machine process 508-1. The embodiments are not limited in this context.
In various embodiments, virtual machine process 508-1 may identify data 524 that is to be provided to the virtual machine corresponding to virtual machine process 508-2 by the virtual machine corresponding to virtual machine process 508-1. In some embodiments, virtual machine process 508-1 may retrieve data 524 from physical memory space 520. In various embodiments, virtual machine process 508-1 may provide data 524 to the virtual machine corresponding to virtual machine process 508-2 by writing to public virtual memory space 526.
In some embodiments, virtual machine processes 508-1 and 508-2 may correspond to two virtual machines among a pool of a larger number of virtual machines. In such embodiments, public virtual memory space 526 may be readable both by virtual machine process 508-2 and by virtual machine processes corresponding to other virtual machines in the pool. In various embodiments, in order to preserve the security of data 524, virtual machine process 508-1 may encrypt data 524 before writing to public virtual memory space 526. In some embodiments, virtual machine process 508-1 may encrypt data 524 using an encryption key 530 in order to obtain encrypted data 532, and may write encrypted data 532 to public virtual memory space 526.
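The encrypt-then-share pattern above can be sketched as follows. A `multiprocessing.shared_memory` segment stands in for the public virtual memory space, and a toy SHA-256-based XOR keystream stands in for a real cipher such as AES; the cipher is for illustration only and must not be used for actual security.

```python
# Sketch of the encrypt-then-share pattern: the first VM encrypts data
# 524 and writes the ciphertext into its public (shared) region; the
# second VM attaches to the same region, reads, and decrypts.
import hashlib
from multiprocessing import shared_memory

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for a symmetric cipher: derive a keystream by hashing
    # the key with a counter. XOR is its own inverse, so the same
    # function both encrypts and decrypts. Not secure; illustration only.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = b"shared symmetric key"  # encryption key known to both VMs
plaintext = b"data 524 for the second virtual machine"

# "First VM": encrypt, then write into the public (shared) region.
outbox = shared_memory.SharedMemory(create=True, size=4096)
ciphertext = keystream_xor(key, plaintext)
outbox.buf[: len(ciphertext)] = ciphertext

# "Second VM": attach to the same region by name, read, and decrypt.
inbox = shared_memory.SharedMemory(name=outbox.name)
received = bytes(inbox.buf[: len(ciphertext)])
recovered = keystream_xor(key, received)
print(recovered)  # b'data 524 for the second virtual machine'

inbox.close()
outbox.close()
outbox.unlink()
```

Because the shared region is readable by any process that can attach to it, only ciphertext ever touches the outbox, which is the point of the encryption step described above.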
In various embodiments, encryption key 530 may comprise a symmetric encryption key. In some such embodiments, encryption key 530 may comprise an Advanced Encryption Standard (AES) symmetric encryption key. In various embodiments, encryption key 530 may comprise a dedicated encryption key for use in encryption and decryption of data being provided to the virtual machine corresponding to virtual machine process 508-2 by the virtual machine corresponding to virtual machine process 508-1. In some embodiments, encryption key 530 may comprise an asymmetric encryption key. In various embodiments, encryption key 530 may comprise a public key of a private/public key pair. In some such embodiments, encryption key 530 may comprise a dedicated key for use in encryption of data being provided to the virtual machine corresponding to virtual machine process 508-2. In various embodiments, encryption key 530 may comprise a public key selected by the virtual machine corresponding to virtual machine process 508-2. In some such embodiments, virtual machine management component 506 may publish encryption key 530 on behalf of the virtual machine corresponding to virtual machine process 508-2. The embodiments are not limited in this context.
In various embodiments, virtual machine process 508-2 may retrieve encrypted data 532 from public virtual memory space 526 and decrypt encrypted data 532 using an encryption key 536. In some embodiments, virtual machine management component 506 may generate a shared data notification 534 to notify the virtual machine corresponding to virtual machine process 508-2 that public virtual memory space 526 contains encrypted data 532 to be provided to that virtual machine. In various such embodiments, virtual machine process 508-2 may retrieve and decrypt encrypted data 532 in response to shared data notification 534. In some embodiments, shared data notification 534 may identify one or more virtual memory pages comprising encrypted data 532. In various embodiments, shared data notification 534 may identify the virtual machine corresponding to virtual machine process 508-1 as the source of encrypted data 532. In some embodiments, shared data notification 534 may identify the virtual machine corresponding to virtual machine process 508-2 as the intended recipient of encrypted data 532. The embodiments are not limited in this context.
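The fields the paragraph above attributes to shared data notification 534 might be captured in a structure like the following. The class and field names are hypothetical, as is the recipient-side filtering helper.

```python
# Sketch of the contents of a shared data notification 534: the virtual
# memory pages holding the encrypted data, the source VM, and the
# intended recipient VM. All names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SharedDataNotification:
    source_vm: str      # VM that wrote the encrypted data (508-1's VM)
    recipient_vm: str   # VM the data is intended for (508-2's VM)
    pages: tuple = field(default_factory=tuple)  # pages holding the data

note = SharedDataNotification(
    source_vm="vm-1",
    recipient_vm="vm-2",
    pages=(0x5000, 0x6000),
)

def should_fetch(notification, my_vm_id):
    # A VM retrieves and decrypts only data addressed to it.
    return notification.recipient_vm == my_vm_id

print(should_fetch(note, "vm-2"))  # True
print(should_fetch(note, "vm-3"))  # False
```

Carrying the source identifier lets the recipient select the correct decryption key when keys are dedicated per source/recipient pair, as described in the surrounding paragraphs.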
In various embodiments, encryption key 536 may comprise a symmetric encryption key. In some such embodiments, encryption key 536 may comprise an AES symmetric encryption key. In various embodiments, encryption key 536 may comprise a dedicated encryption key for use in encryption and decryption of data being provided to the virtual machine corresponding to virtual machine process 508-2 by the virtual machine corresponding to virtual machine process 508-1. In some embodiments, encryption key 536 may comprise a same symmetric encryption key as encryption key 530. In various embodiments, encryption key 536 may comprise an asymmetric encryption key. In some embodiments, encryption key 536 may comprise a private key of a private/public key pair. In various such embodiments, encryption key 536 may comprise a private key of a private/public key pair with respect to which encryption key 530 comprises the public key. In some embodiments, encryption key 536 may comprise a dedicated key for use in decryption of encrypted data being provided to the virtual machine corresponding to virtual machine process 508-2. The embodiments are not limited in this context.
It is worthy of note that in some embodiments in which encryption keys 530 and 536 comprise a same symmetric encryption key, asymmetric encryption may be used in conjunction with providing that symmetric encryption key to virtual machine process 508-2. For example, in various embodiments, virtual machine process 508-1 may randomly select a symmetric encryption key as encryption key 530 and may encrypt encryption key 530 using a public key of a private/public key pair to obtain an encrypted symmetric encryption key. In such embodiments, virtual machine process 508-2 may decrypt the encrypted symmetric encryption key using the private key of the private/public key pair, and may identify the symmetric encryption key as encryption key 536. The embodiments are not limited to this example.
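The key-wrapping step above can be illustrated with textbook RSA over tiny primes: the sender encrypts a randomly chosen symmetric key under the recipient's public key, and the recipient recovers it with the matching private key. The parameters are toy values for illustration only; a real system would use a vetted library with proper padding (e.g. RSA-OAEP).

```python
# Sketch of providing a symmetric key via asymmetric encryption.
# Toy RSA: n = 61 * 53 = 3233, public exponent e, private exponent d
# (17 * 2753 = 46801, which is 1 mod 3120, so d inverts e).
import secrets

n, e, d = 3233, 17, 2753  # recipient's toy RSA key pair

# Sender ("first VM"): pick a random symmetric key (encoded as an
# integer below n) and wrap it with the recipient's public key.
symmetric_key = secrets.randbelow(n - 2) + 2
wrapped_key = pow(symmetric_key, e, n)

# Recipient ("second VM"): unwrap with the private key; the result
# becomes encryption key 536 for decrypting the shared data.
recovered_key = pow(wrapped_key, d, n)

print(recovered_key == symmetric_key)  # True
```

This hybrid arrangement gets the performance of symmetric encryption for the bulk data while using the asymmetric pair only once per key exchange.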
Operations for the above embodiments may be further described with reference to the following figures and accompanying examples. Some of the figures may include a logic flow. Although such figures presented herein may include a particular logic flow, it can be appreciated that the logic flow merely provides an example of how the general functionality as described herein can be implemented. Further, the given logic flow does not necessarily have to be executed in the order presented unless otherwise indicated. In addition, the given logic flow may be implemented by a hardware element, a software element executed by a processor, or any combination thereof. The embodiments are not limited in this context.
FIG. 6 illustrates an example of a logic flow 600 that may be representative of the implementation of one or more of the disclosed scalable techniques for data transfer between virtual machines according to various embodiments. For example, logic flow 600 may be representative of operations that may be performed in some embodiments by circuitry 502 in apparatus 500 of FIG. 5. As shown in FIG. 6, a plurality of public virtual memory spaces may be defined at 602. For example, virtual machine management component 506 of FIG. 5 may define a plurality of public virtual memory spaces, which may include public virtual memory space 526. At 604, each one of the plurality of public virtual memory spaces may be assigned to a respective one of a plurality of virtual machines including a first virtual machine and a second virtual machine. For example, virtual machine management component 506 of FIG. 5 may assign each one of a plurality of public virtual memory spaces to a respective one of a plurality of virtual machines including a virtual machine corresponding to virtual machine process 508-1 and a virtual machine corresponding to virtual machine process 508-2.
At 606, a first virtual machine process may be executed that corresponds to the first virtual machine, and a second virtual machine process may be executed that corresponds to the second virtual machine. For example, virtual machine management component 506 of FIG. 5 may execute virtual machine process 508-1, which may correspond to a first virtual machine, and may execute virtual machine process 508-2, which may correspond to a second virtual machine. At 608, a shared data notification may be generated to notify the second virtual machine of the presence of encrypted data in a public virtual memory space assigned to the first virtual machine. For example, virtual machine management component 506 of FIG. 5 may generate a shared data notification 534 in order to notify the virtual machine corresponding to virtual machine process 508-2 of the presence of encrypted data 532 in a public virtual memory space 526 assigned to the virtual machine corresponding to virtual machine process 508-1. The embodiments are not limited to these examples.
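Blocks 602 through 608 of the flow can be summarized as a plain sequence of steps; the function and data names below are hypothetical placeholders for the components described in FIG. 5.

```python
# Sketch of logic flow 600 (blocks 602-608). Names are illustrative.
def logic_flow_600(vm_ids, writer, reader):
    # 602: define a public virtual memory space for each virtual machine.
    public_spaces = {vm: bytearray(4096) for vm in vm_ids}
    # 604: record which space is assigned to which virtual machine.
    assignments = {vm: f"public-space-{vm}" for vm in vm_ids}
    # 606: execution of the per-VM processes is elided in this sketch.
    # 608: notify the reader that the writer's space holds shared data.
    notification = {"source": writer, "recipient": reader,
                    "space": assignments[writer]}
    return public_spaces, notification

spaces, note = logic_flow_600(["vm-1", "vm-2"], writer="vm-1", reader="vm-2")
print(note["recipient"])  # vm-2
```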
FIG. 7 illustrates an embodiment of a storage medium 700. Storage medium 700 may comprise any non-transitory computer-readable storage medium or machine-readable storage medium, such as an optical, magnetic or semiconductor storage medium. In various embodiments, storage medium 700 may comprise an article of manufacture. In some embodiments, storage medium 700 may store computer-executable instructions, such as computer-executable instructions to implement logic flow 600 of FIG. 6. Examples of a computer-readable storage medium or machine-readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer-executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The embodiments are not limited in this context.
FIG. 8 illustrates an embodiment of an exemplary computing architecture 800 suitable for implementing various embodiments as previously described. In various embodiments, the computing architecture 800 may comprise or be implemented as part of an electronic device. In some embodiments, the computing architecture 800 may be representative, for example, of apparatus 500 of FIG. 5. The embodiments are not limited in this context.
As used in this application, the terms “system” and “component” and “module” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution, examples of which are provided by the exemplary computing architecture 800. For example, a component can be, but is not limited to being, a process running on a processor, a processor, a hard disk drive, multiple storage drives (of optical and/or magnetic storage medium), an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. Further, components may be communicatively coupled to each other by various types of communications media to coordinate operations. The coordination may involve the uni-directional or bi-directional exchange of information. For instance, the components may communicate information in the form of signals communicated over the communications media. The information can be implemented as signals allocated to various signal lines. In such allocations, each message is a signal. Further embodiments, however, may alternatively employ data messages. Such data messages may be sent across various connections. Exemplary connections include parallel interfaces, serial interfaces, and bus interfaces.
The computing architecture 800 includes various common computing elements, such as one or more processors, multi-core processors, co-processors, memory units, chipsets, controllers, peripherals, interfaces, oscillators, timing devices, video cards, audio cards, multimedia input/output (I/O) components, power supplies, and so forth. The embodiments, however, are not limited to implementation by the computing architecture 800.
As shown in FIG. 8, the computing architecture 800 comprises a processing unit 804, a system memory 806 and a system bus 808. The processing unit 804 can be any of various commercially available processors, including without limitation an AMD® Athlon®, Duron® and Opteron® processors; ARM® application, embedded and secure processors; IBM® and Motorola® DragonBall® and PowerPC® processors; IBM and Sony® Cell processors; Intel® Celeron®, Core (2) Duo®, Itanium®, Pentium®, Xeon®, and XScale® processors; and similar processors. Dual microprocessors, multi-core processors, and other multiprocessor architectures may also be employed as the processing unit 804.
The system bus 808 provides an interface for system components including, but not limited to, the system memory 806 to the processing unit 804. The system bus 808 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. Interface adapters may connect to the system bus 808 via a slot architecture. Example slot architectures may include without limitation Accelerated Graphics Port (AGP), Card Bus, (Extended) Industry Standard Architecture ((E)ISA), Micro Channel Architecture (MCA), NuBus, Peripheral Component Interconnect (Extended) (PCI(X)), PCI Express, Personal Computer Memory Card International Association (PCMCIA), and the like.
The system memory 806 may include various types of computer-readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory such as ferroelectric polymer memory, ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, an array of devices such as Redundant Array of Independent Disks (RAID) drives, solid state memory devices (e.g., USB memory, solid state drives (SSD)), and any other type of storage media suitable for storing information. In the illustrated embodiment shown in FIG. 8, the system memory 806 can include non-volatile memory 810 and/or volatile memory 812. A basic input/output system (BIOS) can be stored in the non-volatile memory 810.
The computer 802 may include various types of computer-readable storage media in the form of one or more lower speed memory units, including an internal (or external) hard disk drive (HDD) 814, a magnetic floppy disk drive (FDD) 816 to read from or write to a removable magnetic disk 818, and an optical disk drive 820 to read from or write to a removable optical disk 822 (e.g., a CD-ROM or DVD). The HDD 814, FDD 816 and optical disk drive 820 can be connected to the system bus 808 by an HDD interface 824, an FDD interface 826 and an optical drive interface 828, respectively. The HDD interface 824 for external drive implementations can include at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies.
The drives and associated computer-readable media provide volatile and/or nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For example, a number of program modules can be stored in the drives and memory units 810, 812, including an operating system 830, one or more application programs 832, other program modules 834, and program data 836. In one embodiment, the one or more application programs 832, other program modules 834, and program data 836 can include, for example, various applications and/or components of apparatus 500 of FIG. 5.
A user can enter commands and information into the computer 802 through one or more wire/wireless input devices, for example, a keyboard 838 and a pointing device, such as a mouse 840. Other input devices may include microphones, infra-red (IR) remote controls, radio-frequency (RF) remote controls, game pads, stylus pens, card readers, dongles, finger print readers, gloves, graphics tablets, joysticks, keyboards, retina readers, touch screens (e.g., capacitive, resistive, etc.), trackballs, trackpads, sensors, styluses, and the like. These and other input devices are often connected to the processing unit 804 through an input device interface 842 that is coupled to the system bus 808, but can be connected by other interfaces such as a parallel port, IEEE 1394 serial port, a game port, a USB port, an IR interface, and so forth.
A monitor 844 or other type of display device is also connected to the system bus 808 via an interface, such as a video adaptor 846. The monitor 844 may be internal or external to the computer 802. In addition to the monitor 844, a computer typically includes other peripheral output devices, such as speakers, printers, and so forth.
The computer 802 may operate in a networked environment using logical connections via wire and/or wireless communications to one or more remote computers, such as a remote computer 848. The remote computer 848 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 802, although, for purposes of brevity, only a memory/storage device 850 is illustrated. The logical connections depicted include wire/wireless connectivity to a local area network (LAN) 852 and/or larger networks, for example, a wide area network (WAN) 854. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, for example, the Internet.
When used in a LAN networking environment, the computer 802 is connected to the LAN 852 through a wire and/or wireless communication network interface or adaptor 856. The adaptor 856 can facilitate wire and/or wireless communications to the LAN 852, which may also include a wireless access point disposed thereon for communicating with the wireless functionality of the adaptor 856.
When used in a WAN networking environment, the computer 802 can include a modem 858, or is connected to a communications server on the WAN 854, or has other means for establishing communications over the WAN 854, such as by way of the Internet. The modem 858, which can be internal or external and a wire and/or wireless device, connects to the system bus 808 via the input device interface 842. In a networked environment, program modules depicted relative to the computer 802, or portions thereof, can be stored in the remote memory/storage device 850. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.
The computer 802 is operable to communicate with wire and wireless devices or entities using the IEEE 802 family of standards, such as wireless devices operatively disposed in wireless communication (e.g., IEEE 802.16 over-the-air modulation techniques). This includes at least Wi-Fi (or Wireless Fidelity), WiMax, and Bluetooth™ wireless technologies, among others. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. Wi-Fi networks use radio technologies called IEEE 802.11x (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wire networks (which use IEEE 802.3-related media and functions). In some embodiments, communications over such a wired network may be performed via a fabric interface, such as an InfiniBand interface or an Intel® Omni-Path Fabric interface. The embodiments are not limited to these examples.
FIG. 9 illustrates a block diagram of an exemplary communications architecture 900 suitable for implementing various embodiments as previously described. The communications architecture 900 includes various common communications elements, such as a transmitter, receiver, transceiver, radio, network interface, baseband processor, antenna, amplifiers, filters, power supplies, and so forth. The embodiments, however, are not limited to implementation by the communications architecture 900.
As shown in FIG. 9, the communications architecture 900 includes one or more clients 902 and servers 904. The clients 902 and the servers 904 are operatively connected to one or more respective client data stores 908 and server data stores 910 that can be employed to store information local to the respective clients 902 and servers 904, such as cookies and/or associated contextual information. Any one of clients 902 and/or servers 904 may implement one or more of apparatus 500 of FIG. 5, logic flow 600 of FIG. 6, storage medium 700 of FIG. 7, and computing architecture 800 of FIG. 8. In various embodiments, apparatus 500 of FIG. 5 may be implemented in one or more switching devices and/or routing devices in communications framework 906.
The clients 902 and the servers 904 may communicate information between each other using a communications framework 906. The communications framework 906 may implement any well-known communications techniques and protocols. The communications framework 906 may be implemented as a packet-switched network (e.g., public networks such as the Internet, private networks such as an enterprise intranet, and so forth), a circuit-switched network (e.g., the public switched telephone network), or a combination of a packet-switched network and a circuit-switched network (with suitable gateways and translators).
The communications framework 906 may implement various network interfaces arranged to accept, communicate, and connect to a communications network. A network interface may be regarded as a specialized form of an input/output interface. Network interfaces may employ connection protocols including without limitation direct connect, Ethernet (e.g., thick, thin, twisted pair 10/100/1000 Base T, and the like), token ring, wireless network interfaces, cellular network interfaces, IEEE 802.11a-x network interfaces, IEEE 802.16 network interfaces, IEEE 802.20 network interfaces, and the like. Further, multiple network interfaces may be used to engage with various communications network types. For example, multiple network interfaces may be employed to allow for the communication over broadcast, multicast, and unicast networks. Should processing requirements dictate a greater amount of speed and capacity, distributed network controller architectures may similarly be employed to pool, load balance, and otherwise increase the communicative bandwidth required by clients 902 and the servers 904. A communications network may be any one and the combination of wired and/or wireless networks including without limitation a direct interconnection, a secured custom connection, a private network (e.g., an enterprise intranet), a public network (e.g., the Internet), a Personal Area Network (PAN), a Local Area Network (LAN), a Metropolitan Area Network (MAN), an Operating Missions as Nodes on the Internet (OMNI), a Wide Area Network (WAN), a wireless network, a cellular network, and other communications networks.
Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
As used herein, the term “circuitry” may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group), and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. In some embodiments, the circuitry may be implemented in, or functions associated with the circuitry may be implemented by, one or more software or firmware modules. In some embodiments, circuitry may include logic, at least partially operable in hardware.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as “IP cores,” may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. Some embodiments may be implemented, for example, using a machine-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The machine-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disk Read Only Memory (CD-ROM), Compact Disk Recordable (CD-R), Compact Disk Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disk (DVD), a tape, a cassette, or the like.
The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
The following examples pertain to further embodiments:
Example 1 is an apparatus, comprising circuitry, a virtual machine management component for execution by the circuitry to define a plurality of public virtual memory spaces, and assign each one of the plurality of public virtual memory spaces to a respective one of a plurality of virtual machines including a first virtual machine and a second virtual machine, and a virtual machine execution component for execution by the circuitry to execute a first virtual machine process corresponding to the first virtual machine and a second virtual machine process corresponding to the second virtual machine, the first virtual machine process to identify data to be provided to the second virtual machine by the first virtual machine and provide the data to the second virtual machine by writing to a public virtual memory space assigned to the first virtual machine.
Example 2 is the apparatus of Example 1, the first virtual machine process to retrieve the data from a private virtual memory space of the first virtual machine.
Example 3 is the apparatus of any of Examples 1 to 2, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine.
Example 4 is the apparatus of Example 3, the first virtual machine process to encrypt the data using a symmetric encryption key.
Example 5 is the apparatus of Example 4, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 6 is the apparatus of any of Examples 4 to 5, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 7 is the apparatus of Example 1, the first virtual machine process to encrypt the data using an asymmetric encryption key.
Example 8 is the apparatus of Example 7, the asymmetric key to comprise a public key of a private/public key pair.
Example 9 is the apparatus of Example 8, the public key to comprise a dedicated key for use in encryption of data to be provided to the second virtual machine.
Example 10 is the apparatus of any of Examples 8 to 9, the second virtual machine process to decrypt the encrypted data using a private key of the private/public key pair.
Example 11 is the apparatus of Example 10, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 12 is the apparatus of any of Examples 1 to 11, the second virtual machine process to obtain the data by accessing the public virtual memory space assigned to the first virtual machine.
Example 13 is the apparatus of Example 12, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine, the second virtual machine process to retrieve the encrypted data from the public virtual memory space assigned to the first virtual machine and decrypt the encrypted data.
Example 14 is the apparatus of Example 13, the second virtual machine process to decrypt the encrypted data using a symmetric encryption key.
Example 15 is the apparatus of Example 14, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 16 is the apparatus of any of Examples 14 to 15, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 17 is the apparatus of Example 13, the second virtual machine process to decrypt the encrypted data using an asymmetric encryption key.
Example 18 is the apparatus of Example 17, the asymmetric key to comprise a private key of a private/public key pair.
Example 19 is the apparatus of Example 18, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 20 is the apparatus of any of Examples 18 to 19, the first virtual machine process to encrypt the data using a public key of the private/public key pair.
Example 21 is the apparatus of Example 20, the public key to comprise a dedicated public key for use in encryption of data to be provided to the second virtual machine.
Example 22 is the apparatus of any of Examples 20 to 21, the virtual machine management component for execution by the circuitry to publish the public key on behalf of the second virtual machine.
Example 23 is the apparatus of any of Examples 13 to 22, the virtual machine management component for execution by the circuitry to generate a shared data notification to notify the second virtual machine of the presence of the encrypted data in the public virtual memory space assigned to the first virtual machine.
Example 24 is the apparatus of Example 23, the shared data notification to identify one or more virtual memory pages comprising the encrypted data.
Example 25 is the apparatus of any of Examples 23 to 24, the shared data notification to identify the first virtual machine as a source of the encrypted data.
Example 26 is the apparatus of any of Examples 23 to 25, the shared data notification to identify the second virtual machine as an intended recipient of the encrypted data.
Example 27 is the apparatus of any of Examples 23 to 26, the second virtual machine process to retrieve and decrypt the encrypted data in response to the shared data notification.
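The mailbox arrangement that Examples 13 to 27 describe — one shared space per (writer, reader) pair, a dedicated symmetric key per pair, and a notification telling the recipient where to look — can be sketched as a small simulation. This is an illustrative sketch, not patent text: the `Hypervisor`/`toy_cipher` names are invented here, and a keyed XOR stream stands in for the AES cipher the examples name.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Keyed XOR stream -- a self-inverse stand-in for the AES cipher
    named in Examples 15 and 33 (encrypting twice decrypts)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class Hypervisor:
    """Assigns one shared 'mailbox' space per (writer, reader) VM pair."""
    def __init__(self, vm_names):
        # mailbox[(writer, reader)] holds data `writer` shares with `reader`
        self.mailbox = {(w, r): b"" for w in vm_names for r in vm_names if w != r}
        # one dedicated symmetric key per pair (Examples 16 and 34)
        self.keys = {pair: hashlib.sha256(repr(pair).encode()).digest()
                     for pair in self.mailbox}
        self.notifications = []

    def write(self, writer, reader, data):
        # only `writer` may fill its own mailboxes; peers may only read
        enc = toy_cipher(self.keys[(writer, reader)], data)
        self.mailbox[(writer, reader)] = enc
        # shared data notification naming source and intended recipient
        self.notifications.append({"source": writer, "recipient": reader})

    def read(self, reader, writer):
        enc = self.mailbox[(writer, reader)]
        return toy_cipher(self.keys[(writer, reader)], enc)

hv = Hypervisor(["vm1", "vm2", "vm3"])
hv.write("vm1", "vm2", b"hello from vm1")
assert hv.read("vm2", "vm1") == b"hello from vm1"
```

Note the design point the examples rely on: because each mailbox has exactly one writer, no locking between VMs is needed on the write path.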
Example 28 is a system, comprising an apparatus according to any of Examples 1 to 27, and at least one network interface.
Example 29 is a method, comprising defining a plurality of public virtual memory spaces, assigning each one of the plurality of public virtual memory spaces to a respective one of a plurality of virtual machines including a first virtual machine and a second virtual machine, and executing, by processing circuitry, a first virtual machine process corresponding to the first virtual machine and a second virtual machine process corresponding to the second virtual machine, the first virtual machine process to identify data to be provided to the second virtual machine by the first virtual machine and provide the data to the second virtual machine by writing to a public virtual memory space assigned to the first virtual machine.
Example 30 is the method of Example 29, the first virtual machine process to retrieve the data from a private virtual memory space of the first virtual machine.
Example 31 is the method of any of Examples 29 to 30, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine.
Example 32 is the method of Example 31, the first virtual machine process to encrypt the data using a symmetric encryption key.
Example 33 is the method of Example 32, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 34 is the method of any of Examples 32 to 33, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 35 is the method of Example 29, the first virtual machine process to encrypt the data using an asymmetric encryption key.
Example 36 is the method of Example 35, the asymmetric key to comprise a public key of a private/public key pair.
Example 37 is the method of Example 36, the public key to comprise a dedicated key for use in encryption of data to be provided to the second virtual machine.
Example 38 is the method of any of Examples 36 to 37, the second virtual machine process to decrypt the encrypted data using a private key of the private/public key pair.
Example 39 is the method of Example 38, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 40 is the method of any of Examples 29 to 39, the second virtual machine process to obtain the data by accessing the public virtual memory space assigned to the first virtual machine.
Example 41 is the method of Example 40, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine, the second virtual machine process to retrieve the encrypted data from the public virtual memory space assigned to the first virtual machine and decrypt the encrypted data.
Example 42 is the method of Example 41, the second virtual machine process to decrypt the encrypted data using a symmetric encryption key.
Example 43 is the method of Example 42, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 44 is the method of any of Examples 42 to 43, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 45 is the method of Example 41, the second virtual machine process to decrypt the encrypted data using an asymmetric encryption key.
Example 46 is the method of Example 45, the asymmetric key to comprise a private key of a private/public key pair.
Example 47 is the method of Example 46, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 48 is the method of any of Examples 46 to 47, the first virtual machine process to encrypt the data using a public key of the private/public key pair.
Example 49 is the method of Example 48, the public key to comprise a dedicated public key for use in encryption of data to be provided to the second virtual machine.
Example 50 is the method of any of Examples 48 to 49, comprising publishing the public key on behalf of the second virtual machine.
Example 51 is the method of any of Examples 41 to 50, comprising generating a shared data notification to notify the second virtual machine of the presence of the encrypted data in the public virtual memory space assigned to the first virtual machine.
Example 52 is the method of Example 51, the shared data notification to identify one or more virtual memory pages comprising the encrypted data.
Example 53 is the method of any of Examples 51 to 52, the shared data notification to identify the first virtual machine as a source of the encrypted data.
Example 54 is the method of any of Examples 51 to 53, the shared data notification to identify the second virtual machine as an intended recipient of the encrypted data.
Example 55 is the method of any of Examples 51 to 54, the second virtual machine process to retrieve and decrypt the encrypted data in response to the shared data notification.
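Examples 51 to 55 describe the shared data notification as carrying enough metadata — the pages holding the data, the source VM, and the intended recipient — for the recipient to retrieve and decrypt on its own. A minimal sketch of such a record (the field names and callbacks are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedDataNotification:
    source_vm: str     # Example 53: identifies the writer
    recipient_vm: str  # Example 54: identifies the intended reader
    pages: tuple       # Example 52: virtual memory pages holding the data

def on_notification(note, me, retrieve, decrypt):
    """Example 55: retrieve and decrypt only if we are the intended recipient."""
    if note.recipient_vm != me:
        return None
    return decrypt(retrieve(note.source_vm, note.pages))

note = SharedDataNotification("vm1", "vm2", pages=(7, 8))
# toy retrieve/decrypt callbacks, just for the sketch
fetched = on_notification(
    note, "vm2",
    retrieve=lambda src, pages: b"ciphertext",
    decrypt=lambda blob: blob.upper(),
)
```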
Example 56 is at least one computer-readable storage medium comprising a set of instructions that, in response to being executed on a computing device, cause the computing device to perform a method according to any of Examples 29 to 55.
Example 57 is an apparatus, comprising means for performing a method according to any of Examples 29 to 55.
Example 58 is a system, comprising the apparatus of Example 57, and at least one network interface.
Example 59 is at least one computer-readable storage medium comprising a set of instructions that, in response to being executed on a computing device, cause the computing device to define a plurality of public virtual memory spaces, assign each one of the plurality of public virtual memory spaces to a respective one of a plurality of virtual machines including a first virtual machine and a second virtual machine, and execute a first virtual machine process corresponding to the first virtual machine and a second virtual machine process corresponding to the second virtual machine, the first virtual machine process to identify data to be provided to the second virtual machine by the first virtual machine and provide the data to the second virtual machine by writing to a public virtual memory space assigned to the first virtual machine.
Example 60 is the at least one computer-readable storage medium of Example 59, the first virtual machine process to retrieve the data from a private virtual memory space of the first virtual machine.
Example 61 is the at least one computer-readable storage medium of any of Examples 59 to 60, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine.
Example 62 is the at least one computer-readable storage medium of Example 61, the first virtual machine process to encrypt the data using a symmetric encryption key.
Example 63 is the at least one computer-readable storage medium of Example 62, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 64 is the at least one computer-readable storage medium of any of Examples 62 to 63, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 65 is the at least one computer-readable storage medium of Example 59, the first virtual machine process to encrypt the data using an asymmetric encryption key.
Example 66 is the at least one computer-readable storage medium of Example 65, the asymmetric key to comprise a public key of a private/public key pair.
Example 67 is the at least one computer-readable storage medium of Example 66, the public key to comprise a dedicated key for use in encryption of data to be provided to the second virtual machine.
Example 68 is the at least one computer-readable storage medium of any of Examples 66 to 67, the second virtual machine process to decrypt the encrypted data using a private key of the private/public key pair.
Example 69 is the at least one computer-readable storage medium of Example 68, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 70 is the at least one computer-readable storage medium of any of Examples 59 to 69, the second virtual machine process to obtain the data by accessing the public virtual memory space assigned to the first virtual machine.
Example 71 is the at least one computer-readable storage medium of Example 70, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine, the second virtual machine process to retrieve the encrypted data from the public virtual memory space assigned to the first virtual machine and decrypt the encrypted data.
Example 72 is the at least one computer-readable storage medium of Example 71, the second virtual machine process to decrypt the encrypted data using a symmetric encryption key.
Example 73 is the at least one computer-readable storage medium of Example 72, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 74 is the at least one computer-readable storage medium of any of Examples 72 to 73, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 75 is the at least one computer-readable storage medium of Example 71, the second virtual machine process to decrypt the encrypted data using an asymmetric encryption key.
Example 76 is the at least one computer-readable storage medium of Example 75, the asymmetric key to comprise a private key of a private/public key pair.
Example 77 is the at least one computer-readable storage medium of Example 76, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 78 is the at least one computer-readable storage medium of any of Examples 76 to 77, the first virtual machine process to encrypt the data using a public key of the private/public key pair.
Example 79 is the at least one computer-readable storage medium of Example 78, the public key to comprise a dedicated public key for use in encryption of data to be provided to the second virtual machine.
Example 80 is the at least one computer-readable storage medium of any of Examples 78 to 79, comprising instructions that, in response to being executed on the computing device, cause the computing device to publish the public key on behalf of the second virtual machine.
Example 81 is the at least one computer-readable storage medium of any of Examples 71 to 80, comprising instructions that, in response to being executed on the computing device, cause the computing device to generate a shared data notification to notify the second virtual machine of the presence of the encrypted data in the public virtual memory space assigned to the first virtual machine.
Example 82 is the at least one computer-readable storage medium of Example 81, the shared data notification to identify one or more virtual memory pages comprising the encrypted data.
Example 83 is the at least one computer-readable storage medium of any of Examples 81 to 82, the shared data notification to identify the first virtual machine as a source of the encrypted data.
Example 84 is the at least one computer-readable storage medium of any of Examples 81 to 83, the shared data notification to identify the second virtual machine as an intended recipient of the encrypted data.
Example 85 is the at least one computer-readable storage medium of any of Examples 81 to 84, the second virtual machine process to retrieve and decrypt the encrypted data in response to the shared data notification.
Example 86 is an apparatus, comprising means for defining a plurality of public virtual memory spaces, means for assigning each one of the plurality of public virtual memory spaces to a respective one of a plurality of virtual machines including a first virtual machine and a second virtual machine, and means for executing a first virtual machine process corresponding to the first virtual machine and a second virtual machine process corresponding to the second virtual machine, the first virtual machine process to identify data to be provided to the second virtual machine by the first virtual machine and provide the data to the second virtual machine by writing to a public virtual memory space assigned to the first virtual machine.
Example 87 is the apparatus of Example 86, the first virtual machine process to retrieve the data from a private virtual memory space of the first virtual machine.
Example 88 is the apparatus of any of Examples 86 to 87, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine.
Example 89 is the apparatus of Example 88, the first virtual machine process to encrypt the data using a symmetric encryption key.
Example 90 is the apparatus of Example 89, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 91 is the apparatus of any of Examples 89 to 90, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 92 is the apparatus of Example 86, the first virtual machine process to encrypt the data using an asymmetric encryption key.
Example 93 is the apparatus of Example 92, the asymmetric key to comprise a public key of a private/public key pair.
Example 94 is the apparatus of Example 93, the public key to comprise a dedicated key for use in encryption of data to be provided to the second virtual machine.
Example 95 is the apparatus of any of Examples 93 to 94, the second virtual machine process to decrypt the encrypted data using a private key of the private/public key pair.
Example 96 is the apparatus of Example 95, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 97 is the apparatus of any of Examples 86 to 96, the second virtual machine process to obtain the data by accessing the public virtual memory space assigned to the first virtual machine.
Example 98 is the apparatus of Example 97, the first virtual machine process to encrypt the data and write the encrypted data to the public virtual memory space assigned to the first virtual machine, the second virtual machine process to retrieve the encrypted data from the public virtual memory space assigned to the first virtual machine and decrypt the encrypted data.
Example 99 is the apparatus of Example 98, the second virtual machine process to decrypt the encrypted data using a symmetric encryption key.
Example 100 is the apparatus of Example 99, the symmetric encryption key to comprise an Advanced Encryption Standard (AES) encryption key.
Example 101 is the apparatus of any of Examples 99 to 100, the symmetric encryption key to comprise a dedicated key for use in encryption and decryption of data to be provided to the second virtual machine by the first virtual machine.
Example 102 is the apparatus of Example 98, the second virtual machine process to decrypt the encrypted data using an asymmetric encryption key.
Example 103 is the apparatus of Example 102, the asymmetric key to comprise a private key of a private/public key pair.
Example 104 is the apparatus of Example 103, the private key to comprise a dedicated key for use in decryption of encrypted data provided to the second virtual machine.
Example 105 is the apparatus of any of Examples 103 to 104, the first virtual machine process to encrypt the data using a public key of the private/public key pair.
Example 106 is the apparatus of Example 105, the public key to comprise a dedicated public key for use in encryption of data to be provided to the second virtual machine.
Example 107 is the apparatus of any of Examples 105 to 106, comprising means for publishing the public key on behalf of the second virtual machine.
Example 108 is the apparatus of any of Examples 98 to 107, comprising means for generating a shared data notification to notify the second virtual machine of the presence of the encrypted data in the public virtual memory space assigned to the first virtual machine.
Example 109 is the apparatus of Example 108, the shared data notification to identify one or more virtual memory pages comprising the encrypted data.
Example 110 is the apparatus of any of Examples 108 to 109, the shared data notification to identify the first virtual machine as a source of the encrypted data.
Example 111 is the apparatus of any of Examples 108 to 110, the shared data notification to identify the second virtual machine as an intended recipient of the encrypted data.
Example 112 is the apparatus of any of Examples 108 to 111, the second virtual machine process to retrieve and decrypt the encrypted data in response to the shared data notification.
Example 113 is a system, comprising an apparatus according to any of Examples 86 to 112, and at least one network interface.
Numerous specific details have been set forth herein to provide a thorough understanding of the embodiments. It will be understood by those skilled in the art, however, that the embodiments may be practiced without these specific details. In other instances, well-known operations, components, and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments.
Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. These terms are not intended as synonyms for each other. For example, some embodiments may be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
Unless specifically stated otherwise, it may be appreciated that terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. The embodiments are not limited in this context.
It should be noted that the methods described herein do not have to be executed in the order described, or in any particular order. Moreover, various activities described with respect to the methods identified herein can be executed in serial or parallel fashion.
Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skill in the art upon reviewing the above description. Thus, the scope of various embodiments includes any other applications in which the above compositions, structures, and methods are used.
It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate preferred embodiment. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively. Moreover, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (21)

What is claimed is:
1. An apparatus, comprising:
circuitry;
memory storing instructions for execution by the circuitry to:
define a plurality of shared virtual memory spaces;
assign respective subsets of the plurality of shared virtual memory spaces to respective ones of a plurality of virtual machines executed on the circuitry, a first subset of the plurality of shared virtual memory spaces assigned to a first virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the first subset of the plurality of shared virtual memory spaces comprising a respective mailbox for the first virtual machine to provide data to respective ones of the remaining plurality of virtual machines, a second subset of the plurality of shared virtual memory spaces assigned to a second virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the second subset comprising a respective mailbox for the second virtual machine to provide data to respective ones of the remaining plurality of virtual machines;
encrypt a first data by the first virtual machine;
write, by the first virtual machine to a first shared virtual memory space of the first subset of the plurality of shared virtual memory spaces, the encrypted first data to share the encrypted first data with the second virtual machine of the plurality of virtual machines;
read, by the second virtual machine, the encrypted first data in the first shared virtual memory space; and
decrypt, by the second virtual machine, the encrypted first data.
2. The apparatus of claim 1, the memory storing instructions for execution by the circuitry to define the plurality of shared virtual memory spaces according to a paged virtual memory scheme.
3. The apparatus of claim 1, the memory storing instructions for execution by the circuitry to generate a shared data notification to notify the second virtual machine of the presence of the encrypted first data in the first shared virtual memory space.
4. The apparatus of claim 1, the second virtual machine not permitted to write to the first shared virtual memory space, the first subset of the plurality of shared virtual memory spaces writable only by the first virtual machine, respective ones of the shared virtual memory spaces of the first subset of the plurality of shared virtual memory spaces readable by respective ones of the remaining plurality of virtual machines.
5. The apparatus of claim 1, the first virtual machine to encrypt the first data using a symmetric key, the second virtual machine to decrypt the encrypted first data using the symmetric key.
6. The apparatus of claim 5, the first virtual machine to encrypt the symmetric key using a public key of a private/public key pair, the second virtual machine to decrypt the encrypted symmetric key using a private key of the private/public key pair.
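Claim 6 layers the two key types: the bulk data travels under a symmetric key, and only that (short) key is wrapped with the recipient's public key. The sketch below illustrates just the wrapping step with textbook RSA on toy parameters — no padding, deliberately insecure numbers — standing in for a real RSA/AES hybrid:

```python
# Toy RSA parameters derived from p=61, q=53: n = 3233, e = 17, d = 2753.
n, e, d = 3233, 17, 2753
assert (e * d) % ((61 - 1) * (53 - 1)) == 1  # d really is e's inverse mod phi(n)

symmetric_key = 42                  # stands in for the AES key of claim 5
wrapped = pow(symmetric_key, e, n)  # VM1: encrypt the key with VM2's public key
unwrapped = pow(wrapped, d, n)      # VM2: recover the key with its private key
assert unwrapped == symmetric_key
```

The payoff of this split is that the slow asymmetric operation runs once per key exchange, while every page of shared data is protected by the fast symmetric cipher.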
7. A method, comprising:
defining, by processing circuitry, a plurality of shared virtual memory spaces;
assigning, by the processing circuitry, respective subsets of the plurality of shared virtual memory spaces to respective ones of a plurality of virtual machines executed on the circuitry, a first subset of the plurality of shared virtual memory spaces assigned to a first virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the first subset of the plurality of shared virtual memory spaces comprising a respective mailbox for the first virtual machine to provide data to respective ones of the remaining plurality of virtual machines, a second subset of the plurality of shared virtual memory spaces assigned to a second virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the second subset comprising a respective mailbox for the second virtual machine to provide data to respective ones of the remaining plurality of virtual machines;
encrypting a first data by the first virtual machine;
writing, by the first virtual machine to a first shared virtual memory space of the first subset of the plurality of shared virtual memory spaces, the encrypted first data to share the encrypted first data with the second virtual machine;
reading, by the second virtual machine, the encrypted first data in the first shared virtual memory space; and
decrypting, by the second virtual machine, the encrypted first data.
8. The method of claim 7, further comprising:
defining the plurality of shared virtual memory spaces according to a paged virtual memory scheme.
9. The method of claim 7, further comprising:
generating a shared data notification to notify the second virtual machine of the presence of the encrypted first data in the first shared virtual memory space.
10. The method of claim 7, the second virtual machine not permitted to write to the first shared virtual memory space, the first subset of the plurality of shared virtual memory spaces writable only by the first virtual machine, respective ones of the shared virtual memory spaces of the first subset of the plurality of shared virtual memory spaces readable by respective ones of the remaining plurality of virtual machines.
11. The method of claim 7, the first virtual machine to encrypt the first data using a symmetric key, the second virtual machine to decrypt the encrypted first data using the symmetric key.
12. The method of claim 11, the first virtual machine to encrypt the symmetric key using a public key of a private/public key pair, the second virtual machine to decrypt the encrypted symmetric key using a private key of the private/public key pair.
13. A non-transitory computer-readable storage medium comprising instructions that, in response to being executed by a processor, cause the processor to:
define a plurality of shared virtual memory spaces;
assign respective subsets of the plurality of shared virtual memory spaces to respective ones of a plurality of virtual machines executed on the processor, a first subset of the plurality of shared virtual memory spaces assigned to a first virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the first subset of the plurality of shared virtual memory spaces comprising a respective mailbox for the first virtual machine to provide data to respective ones of the remaining plurality of virtual machines, a second subset of the plurality of shared virtual memory spaces assigned to a second virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the second subset comprising a respective mailbox for the second virtual machine to provide data to respective ones of the remaining plurality of virtual machines;
encrypt a first data by the first virtual machine;
write, by the first virtual machine to a first shared virtual memory space of the first subset of the plurality of shared virtual memory spaces, the encrypted first data to share the encrypted first data with the second virtual machine;
read, by the second virtual machine, the encrypted first data in the first shared virtual memory space; and
decrypt, by the second virtual machine, the encrypted first data.
14. The non-transitory computer-readable storage medium of claim 13, comprising instructions that, in response to being executed on the processor, cause the processor to:
define the plurality of shared virtual memory spaces according to a paged virtual memory scheme.
15. The non-transitory computer-readable storage medium of claim 13, comprising instructions that, in response to being executed on the processor, cause the processor to:
generate a shared data notification to notify the second virtual machine of the presence of the encrypted first data in the first shared virtual memory space.
16. The non-transitory computer-readable storage medium of claim 13, the second virtual machine not permitted to write to the first shared virtual memory space, the first subset of the plurality of shared virtual memory spaces writable only by the first virtual machine, respective ones of the shared virtual memory spaces of the first subset of the plurality of shared virtual memory spaces readable by respective ones of the remaining plurality of virtual machines.
17. The non-transitory computer-readable storage medium of claim 13, the first virtual machine to encrypt the first data using a symmetric key, the second virtual machine to decrypt the encrypted first data using the symmetric key.
18. The non-transitory computer-readable storage medium of claim 17, the first virtual machine to encrypt the symmetric key using a public key of a private/public key pair, the second virtual machine to decrypt the encrypted symmetric key using a private key of the private/public key pair.
19. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to:
define a plurality of shared virtual memory spaces, respective ones of the plurality of shared virtual memory spaces associated with a respective encryption key of a plurality of encryption keys;
assign respective subsets of the plurality of shared virtual memory spaces to respective ones of a plurality of virtual machines, a first subset of the plurality of shared virtual memory spaces assigned to a first virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the first subset of the plurality of shared virtual memory spaces to comprise a respective mailbox for the first virtual machine to provide data to respective ones of the remaining plurality of virtual machines, a second subset of the plurality of shared virtual memory spaces assigned to a second virtual machine of the plurality of virtual machines, respective ones of the shared virtual memory spaces of the second subset comprising a respective mailbox for the second virtual machine to provide data to respective ones of the remaining plurality of virtual machines;
provide a first encryption key of the plurality of encryption keys to the first virtual machine, the first encryption key associated with the first shared virtual memory space, the first virtual machine to encrypt data to be written to the first shared virtual memory space based on the first encryption key; and
provide the first encryption key to the second virtual machine, the second virtual machine to decrypt data in the first shared virtual memory space based on the first encryption key.
20. The non-transitory computer-readable storage medium of claim 19, comprising instructions that when executed by the processor cause the processor to:
provide the first encryption key to the remaining plurality of virtual machines, the remaining plurality of virtual machines to decrypt data in the first shared virtual memory space based on the first encryption key.
21. The non-transitory computer-readable storage medium of claim 19, comprising instructions that when executed by the processor cause the processor to:
provide the plurality of encryption keys to the plurality of virtual machines.
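Claim 19's layout (each VM owns a subset of shared spaces, one mailbox per peer, writable only by the owner, readable by any VM holding that space's key) can be modeled in-process. The sketch below is an illustration under assumed names (`MailboxFabric`, `xor_pad`), not the patented hypervisor mechanism, and the XOR pad is a stand-in rather than real encryption.

```python
import itertools
import secrets

class MailboxFabric:
    """Toy model: one shared space per ordered (writer, reader) VM pair."""

    def __init__(self, vm_ids):
        # Each space carries its own key, mirroring the claim's
        # "respective encryption key of a plurality of encryption keys".
        self.spaces = {
            (w, r): {"key": secrets.token_bytes(16), "buf": b""}
            for w, r in itertools.permutations(vm_ids, 2)
        }

    def key_for(self, writer, reader):
        # Stands in for the hypervisor "providing" a space's key to a VM.
        return self.spaces[(writer, reader)]["key"]

    def write(self, vm, writer, reader, ciphertext):
        # Only the owning VM may write into its own mailboxes (cf. claim 16).
        if vm != writer:
            raise PermissionError(f"{vm} cannot write mailbox {(writer, reader)}")
        self.spaces[(writer, reader)]["buf"] = ciphertext

    def read(self, writer, reader):
        return self.spaces[(writer, reader)]["buf"]

def xor_pad(key: bytes, data: bytes) -> bytes:
    # Toy self-inverse cipher standing in for real encryption.
    pad = (key * (len(data) // len(key) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, pad))

fabric = MailboxFabric(["vm1", "vm2", "vm3"])
k = fabric.key_for("vm1", "vm2")                  # key provided to both VMs
fabric.write("vm1", "vm1", "vm2", xor_pad(k, b"data for vm2"))
plain = xor_pad(k, fabric.read("vm1", "vm2"))     # vm2 decrypts
```

Handing the same space key to every remaining VM, as claim 20 recites, turns a mailbox into a one-writer broadcast channel without any per-reader copies, which is the scalability point of the scheme.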
US16/810,400 | 2015-12-24 | 2020-03-05 | Scalable techniques for data transfer between virtual machines | Active | US11494220B2 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US16/810,400 (US11494220B2) | 2015-12-24 | 2020-03-05 | Scalable techniques for data transfer between virtual machines

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US14/998,361 (US10628192B2) | 2015-12-24 | 2015-12-24 | Scalable techniques for data transfer between virtual machines
US16/810,400 (US11494220B2) | 2015-12-24 | 2020-03-05 | Scalable techniques for data transfer between virtual machines

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US14/998,361 | Continuation (US10628192B2) | 2015-12-24 | 2015-12-24 | Scalable techniques for data transfer between virtual machines

Publications (2)

Publication Number | Publication Date
US20200201668A1 (en) | 2020-06-25
US11494220B2 (en) | 2022-11-08

Family

ID=59087321

Family Applications (2)

Application Number | Title | Priority Date | Filing Date
US14/998,361 | Active (US10628192B2) | 2015-12-24 | 2015-12-24 | Scalable techniques for data transfer between virtual machines
US16/810,400 | Active (US11494220B2) | 2015-12-24 | 2020-03-05 | Scalable techniques for data transfer between virtual machines

Family Applications Before (1)

Application Number | Title | Priority Date | Filing Date
US14/998,361 | Active (US10628192B2) | 2015-12-24 | 2015-12-24 | Scalable techniques for data transfer between virtual machines

Country Status (4)

Country | Link
US (2) | US10628192B2 (en)
CN (1) | CN108370382B (en)
DE (1) | DE112016006047T5 (en)
WO (1) | WO2017112325A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10666443B2 (en)* | 2016-10-18 | 2020-05-26 | Red Hat, Inc. | Continued verification and monitoring of application code in containerized execution environment
US10528746B2 (en)* | 2016-12-27 | 2020-01-07 | Intel Corporation | System, apparatus and method for trusted channel creation using execute-only code
CN111937362A (en)* | 2018-06-29 | 2020-11-13 | 英特尔公司 | Virtual storage service for client computing devices
US11023179B2 (en)* | 2018-11-18 | 2021-06-01 | Pure Storage, Inc. | Cloud-based storage system storage management
US11943340B2 (en)* | 2019-04-19 | 2024-03-26 | Intel Corporation | Process-to-process secure data movement in network functions virtualization infrastructures
US11099911B1 (en) | 2019-07-01 | 2021-08-24 | Northrop Grumman Systems Corporation | Systems and methods for inter-partition communication
US11874777B2 (en)* | 2021-12-16 | 2024-01-16 | International Business Machines Corporation | Secure communication of virtual machine encrypted memory
US20230214247A1 (en)* | 2022-01-04 | 2023-07-06 | Red Hat, Inc. | Robust resource removal for virtual machines
EP4250105A1 (en)* | 2022-03-22 | 2023-09-27 | Samsung Electronics Co., Ltd. | Communication method between virtual machines using mailboxes, system-on chip performing communication method, and in-vehicle infotainment system including same

Citations (21)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6125430A (en)* | 1996-05-03 | 2000-09-26 | Compaq Computer Corporation | Virtual memory allocation in a virtual address space having an inaccessible gap
US6351536B1 (en)* | 1997-10-01 | 2002-02-26 | Minoru Sasaki | Encryption network system and method
US20060095793A1 (en)* | 2004-10-08 | 2006-05-04 | International Business Machines Corporation | Secure memory control parameters in table look aside buffer data fields and support memory array
US7401358B1 (en)* | 2002-04-18 | 2008-07-15 | Advanced Micro Devices, Inc. | Method of controlling access to control registers of a microprocessor
US7739349B2 (en)* | 2006-10-05 | 2010-06-15 | Waratek Pty Limited | Synchronization with partial memory replication
US20100161908A1 (en)* | 2008-12-18 | 2010-06-24 | Lsi Corporation | Efficient Memory Allocation Across Multiple Accessing Systems
US20100161879A1 (en)* | 2008-12-18 | 2010-06-24 | Lsi Corporation | Efficient and Secure Main Memory Sharing Across Multiple Processors
US20100250866A1 (en)* | 2009-03-31 | 2010-09-30 | Fujitsu Limited | Information processing program, information processing device and information processing method
US20120110275A1 (en)* | 2010-10-27 | 2012-05-03 | Ibm Corporation | Supporting Virtual Input/Output (I/O) Server (VIOS) Active Memory Sharing in a Cluster Environment
US8316190B2 (en)* | 2007-04-06 | 2012-11-20 | Waratek Pty. Ltd. | Computer architecture and method of operation for multi-computer distributed processing having redundant array of independent systems with replicated memory and code striping
US20120297381A1 (en) | 2011-05-20 | 2012-11-22 | Lsi Corporation | System for a multi-tenant storage array via domains hosted on a virtual machine
US20120304171A1 (en)* | 2011-05-23 | 2012-11-29 | IO Turbine, Inc. | Managing Data Input/Output Operations
CN103139159A (en) | 2011-11-28 | 2013-06-05 | 上海贝尔股份有限公司 | Safety communication among virtual machines in cloud computing framework
US20140164791A1 (en)* | 2010-03-30 | 2014-06-12 | Novell, Inc. | Secure virtual machine memory
US8943203B1 (en) | 2009-07-10 | 2015-01-27 | Netapp, Inc. | System and method for storage and deployment of virtual machines in a virtual server environment
US20150033038A1 (en)* | 2004-04-08 | 2015-01-29 | Texas Instruments Incorporated | Methods, apparatus, and systems for secure demand paging and other paging operations for processor devices
US20150046661A1 (en)* | 2013-08-07 | 2015-02-12 | Qualcomm Incorporated | Dynamic Address Negotiation for Shared Memory Regions in Heterogeneous Muliprocessor Systems
CN104468803A (en) | 2014-12-12 | 2015-03-25 | 华为技术有限公司 | Virtual data center resource mapping method and equipment
US20150220709A1 (en)* | 2014-02-06 | 2015-08-06 | Electronics Telecommunications Research Institute | Security-enhanced device based on virtualization and the method thereof
US9619268B2 (en)* | 2014-08-23 | 2017-04-11 | Vmware, Inc. | Rapid suspend/resume for virtual machines via resource sharing
US9916456B2 (en)* | 2012-04-06 | 2018-03-13 | Security First Corp. | Systems and methods for securing and restoring virtual machines

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
GB2385951A (en)* | 2001-09-21 | 2003-09-03 | Sun Microsystems Inc | Data encryption and decryption
US7181744B2 (en)* | 2002-10-24 | 2007-02-20 | International Business Machines Corporation | System and method for transferring data between virtual machines or other computer entities
US8271604B2 (en)* | 2006-12-19 | 2012-09-18 | International Business Machines Corporation | Initializing shared memories for sharing endpoints across a plurality of root complexes
US9027025B2 (en)* | 2007-04-17 | 2015-05-05 | Oracle International Corporation | Real-time database exception monitoring tool using instance eviction data
KR100895298B1 (en)* | 2007-04-30 | 2009-05-07 | 한국전자통신연구원 | Apparatus, method and data processing elements for efficient parallel processing of multimedia data
US8359437B2 (en) | 2008-05-13 | 2013-01-22 | International Business Machines Corporation | Virtual computing memory stacking
JP5245869B2 (en)* | 2009-01-29 | 2013-07-24 | 富士通株式会社 | Information processing apparatus, information processing method, and computer program
US8812796B2 (en)* | 2009-06-26 | 2014-08-19 | Microsoft Corporation | Private memory regions and coherence optimizations
EP2603996A1 (en)* | 2010-08-11 | 2013-06-19 | Rick L. Orsini | Systems and methods for secure multi-tenant data storage
KR101671494B1 (en)* | 2010-10-08 | 2016-11-02 | 삼성전자주식회사 | Multi Processor based on shared virtual memory and Method for generating address translation table
US9152573B2 (en) | 2010-11-16 | 2015-10-06 | Vmware, Inc. | Sharing memory pages having regular expressions within a virtual machine
CN103064796B (en)* | 2011-10-18 | 2015-09-23 | 财团法人工业技术研究院 | Virtual machine memory sharing method and computer system
US9454392B2 (en)* | 2012-11-27 | 2016-09-27 | Red Hat Israel, Ltd. | Routing data packets between virtual machines using shared memory without copying the data packet
US9729517B2 (en)* | 2013-01-22 | 2017-08-08 | Amazon Technologies, Inc. | Secure virtual machine migration
JP6040101B2 (en)* | 2013-05-31 | 2016-12-07 | 株式会社日立製作所 | Storage device control method, storage device, and information processing device
US9124569B2 (en)* | 2013-06-14 | 2015-09-01 | Microsoft Technology Licensing, Llc | User authentication in a cloud environment
WO2015003312A1 (en) | 2013-07-09 | 2015-01-15 | Hua Zhong University Of Science Technology | Data communication on a virtual machine
US9251090B1 (en)* | 2014-06-03 | 2016-02-02 | Amazon Technologies, Inc. | Hypervisor assisted virtual memory obfuscation
US9442752B1 (en)* | 2014-09-03 | 2016-09-13 | Amazon Technologies, Inc. | Virtual secure execution environments
US9772962B2 (en)* | 2015-05-28 | 2017-09-26 | Red Hat Israel, Ltd. | Memory sharing for direct memory access by a device assigned to a guest operating system


Also Published As

Publication number | Publication date
US10628192B2 (en) | 2020-04-21
CN108370382B (en) | 2022-03-15
US20200201668A1 (en) | 2020-06-25
WO2017112325A1 (en) | 2017-06-29
US20170187694A1 (en) | 2017-06-29
CN108370382A (en) | 2018-08-03
DE112016006047T5 (en) | 2018-09-06

Similar Documents

Publication | Title
US11494220B2 (en) | Scalable techniques for data transfer between virtual machines
US10706143B2 (en) | Techniques for secure-chip memory for trusted execution environments
US9489293B2 (en) | Techniques for opportunistic data storage
US11239997B2 (en) | Techniques for cipher system conversion
US10514943B2 (en) | Method and apparatus for establishing system-on-chip (SOC) security through memory management unit (MMU) virtualization
US20190188028A1 (en) | Paravirtualized access for device assignment by bar extension
US9268712B2 (en) | Method, system and apparatus for region access control
CN116680037A (en) | Data isolation method and device and electronic equipment
EP3547201B1 (en) | Techniques for dynamic memory resource allocation among cryptographic domains
US9798485B2 (en) | Path management techniques for storage networks
US10810137B2 (en) | Physical address randomization for secure encrypted memory
US11960453B2 (en) | Techniques for asynchronous snapshot invalidation
EP3353699A1 (en) | Techniques for coordinating device boot security
US20190102321A1 (en) | Techniques to provide access protection to shared virtual memory
US20170123943A1 (en) | Distributed data storage and processing techniques
US9674141B2 (en) | Techniques for implementing a secure mailbox in resource-constrained embedded systems
US11176091B2 (en) | Techniques for dynamic multi-storage format database access
TWI286686B (en) | Method and apparatus for multi-table accessing of input/output devices using target security
US20190377671A1 (en) | Memory controller with memory resource memory management
US20230281113A1 (en) | Adaptive memory metadata allocation
US20150370816A1 (en) | Load-balancing techniques for auditing file accesses in a storage system
US9819738B2 (en) | Access management techniques for storage networks
CN106250562B (en) | Processing data information system
TW200521799A (en) | A security USB digital data process card
TW201351144A (en) | Substitute virtualized-memory page tables

Legal Events

Date | Code | Title | Description

FEPP | Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP | Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP | Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP | Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP | Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP | Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP | Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF | Information on status: patent grant

Free format text: PATENTED CASE

