CROSS-REFERENCE TO RELATED APPLICATIONS This application is related to commonly assigned and co-pending U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040178US1) entitled “Method, System and Program Product for Differentiating Between Virtual Hosts on Bus Transactions and Associating Allowable Memory Access for an Input/Output Adapter that Supports Virtualization”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040179US1) entitled “Virtualized I/O Adapter for a Multi-Processor Data Processing System”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040180US1) entitled “Virtualized Fibre Channel Adapter for a Multi-Processor Data Processing System”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040181US1) entitled “Interrupt Mechanism on an IO Adapter That Supports Virtualization”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040182US1) entitled “System and Method for Modification of Virtual Adapter Resources in a Logically Partitioned Data Processing System”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040183US1) entitled “Method, System, and Computer Program Product for Virtual Adapter Destruction on a Physical Adapter that Supports Virtual Adapters”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040184US1) entitled “System and Method of Virtual Resource Modification on a Physical Adapter that Supports Virtual Resources”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040185US1) entitled “System and Method for Destroying Virtual Resources in a Logically Partitioned Data Processing System”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040187US1) entitled “Association of Host Translations that are Associated to an Access Control Level on a PCI Bridge that Supports Virtualization”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040507US1) entitled “Method, Apparatus, and Computer Program Product for Coordinating Error Reporting and Reset Utilizing an I/O Adapter that Supports Virtualization”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040552US1) entitled “Method and System for Fully Trusted Adapter Validation of Addresses Referenced in a Virtual Host Transfer Request”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040553US1) entitled “System, Method, and Computer Program Product for a Fully Trusted Adapter Validation of Incoming Memory Mapped I/O Operations on a Physical Adapter that Supports Virtual Adapters or Virtual Resources”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040554US1) entitled “System and Method for Host Initialization for an Adapter that Supports Virtualization”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040555US1) entitled “Data Processing System, Method, and Computer Program Product for Creation and Initialization of a Virtual Adapter on a Physical Adapter that Supports Virtual Adapter Level Virtualization”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040556US1) entitled “System and Method for Virtual Resource Initialization on a Physical Adapter that Supports Virtual Resources”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040557US1) entitled “Method and System for Native Virtualization on a Partially Trusted Adapter Using Adapter Bus, Device and Function Number for Identification”; U.S. patent application Ser. No. 
______ (Attorney Docket No. AUS920040558US1) entitled “Native Virtualization on a Partially Trusted Adapter Using PCI Host Memory Mapped Input/Output Memory Address for Identification”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040559US1) entitled “Native Virtualization on a Partially Trusted Adapter Using PCI Host Bus, Device, and Function Number for Identification”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040560US1) entitled “System and Method for Virtual Adapter Resource Allocation”; U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040561US1) entitled “System and Method for Providing Quality of Service in a Virtual Adapter”; and U.S. patent application Ser. No. ______ (Attorney Docket No. AUS920040562US1) entitled “System and Method for Managing Metrics Table Per Virtual Port in a Logically Partitioned Data Processing System”, all of which are hereby incorporated by reference.
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates generally to communication protocols between a host computer and an input/output (I/O) adapter. More specifically, the present invention provides an implementation for virtualizing resources in a physical I/O adapter. In particular, the present invention provides a method, apparatus, and computer instructions for efficient and flexible sharing of adapter resources among multiple operating system instances.
2. Description of Related Art
A partitioned server is one in which platform firmware manages multiple partitions (one operating system (OS) instance in each partition) and each partition has allocated resources: processor (one or more processors, or a portion of a processor), memory, and I/O adapters. An example of platform firmware used in logically partitioned data processing systems is a hypervisor, which is available from International Business Machines Corporation. The hypervisor mediates data movement between partitions to ensure that only data approved by the respective owning partitions is involved.
Existing partitioned servers typically have three access control levels:
(1) Hypervisor level—This level is used to subdivide physical server resources (processor, memory, and I/O) into one or more shared resource groups, each of which is allocated to an operating system (OS) instance. This level is referred to as privileged, because it is the only level that can perform physical resource allocation.
(2) OS level—Each OS instance created by the hypervisor executes at this level. An OS instance may only access resources that have been allocated to the OS instance at the hypervisor level. Each OS instance is isolated from other OS instances through hardware and the resource allocations performed at the hypervisor level. The resources allocated to a single OS instance can be further subdivided into one or more shared resource groups that are allocated to an application instance.
(3) Application level—Each application instance created by the OS executes at this level. An application instance can only access resources that have been allocated to the application instance at the OS level. Each application instance is isolated from other application instances through hardware and the resource allocations performed at the OS level.
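For purposes of illustration only, the three access control levels enumerated above can be modeled as an ordered privilege enumeration. The following sketch uses hypothetical names and the simplifying assumption that a caller may operate on a resource only when it executes at or above the level that allocated that resource:

    /* Illustrative sketch (hypothetical names): the three access control
     * levels as an ordered privilege enumeration. */
    #include <stdbool.h>

    typedef enum access_level {
        LEVEL_APPLICATION = 0,  /* non-privileged application level */
        LEVEL_OS          = 1,  /* OS level; limited to its own allocation */
        LEVEL_HYPERVISOR  = 2   /* only level that allocates physical resources */
    } access_level_t;

    static bool may_access(access_level_t caller, access_level_t required)
    {
        return caller >= required;
    }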
A problem encountered with using I/O adapters in virtualized systems is the inability of the I/O adapter to share its resources. Currently, I/O adapters provide a single bus space for all memory mapped I/O operations. Currently available I/O adapters do not have a mechanism to configure multiple address spaces per adapter, where (1) each address space is associated with a particular access level (hypervisor, OS, or application); and (2) the I/O adapter, in conjunction with a virtual memory manager (VMM), provides access isolation, at the different access levels, between the various OS instances that share the I/O adapter.
Without a direct mechanism for sharing I/O adapters, OS instances do not share an I/O adapter, or, alternatively, they share an I/O adapter by going through an intermediary, such as a hosting partition, hypervisor, or special I/O processor. The inability to share an I/O adapter between OS instances presents several problems: more I/O slots and adapters are required per physical server, and a high performance I/O adapter may not be fully utilized by a single OS instance. Sharing an I/O adapter through a hosting partition or hypervisor also presents several problems, the most significant being the additional latency added to every I/O operation by going through the intermediary. If the intermediary is in the host (e.g., a hosting partition or hypervisor), then the sharing function takes CPU cycles away from the application for each I/O operation. If the intermediary is outboard (e.g., an I/O processor), then the sharing function requires an additional card, thus adding cost to the total server solution.
Therefore, it would be advantageous to have a mechanism for the direct sharing of adapter resources among multiple OS instances while the adapter enforces access level validation of the adapter resources.
SUMMARY OF THE INVENTION
The present invention provides a method, system, and computer program product for efficient and flexible sharing of adapter resources among multiple operating system instances. Specifically, the present invention provides a mechanism for dynamically allocating virtualized I/O adapter resources without adding complexity to the adapter implementation. A hypervisor locates available resources in an adapter and allocates an available adapter resource to a given partition. The adapter is notified of the allocation, and the adapter's internal structure is updated to reflect the allocation.
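The following sketch, which is illustrative only and uses hypothetical names, outlines the allocation flow just described: the hypervisor scans the adapter's resource pool for a free entry, assigns it to the requesting partition, and notifies the adapter so that its internal structure reflects the allocation.

    /* Illustrative sketch (hypothetical names) of the allocation flow. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct adapter_resource {
        bool in_use;
        int  owner_partition;   /* -1 while the resource is free */
    } adapter_resource_t;

    /* Stand-in for the adapter notification; a real system might write
     * the adapter's internal allocation structure through MMIO. */
    static void notify_adapter_of_allocation(size_t idx, int partition)
    {
        (void)idx;
        (void)partition;
    }

    /* Returns the index of the allocated resource, or -1 if none is free. */
    static int allocate_resource(adapter_resource_t *pool, size_t n, int partition)
    {
        for (size_t i = 0; i < n; i++) {
            if (!pool[i].in_use) {
                pool[i].in_use = true;
                pool[i].owner_partition = partition;
                notify_adapter_of_allocation(i, partition);
                return (int)i;
            }
        }
        return -1;
    }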
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
FIG. 1 is a diagram of a distributed computer system illustrated in accordance with a preferred embodiment of the present invention;
FIG. 2 is a functional block diagram of a small host processor node in accordance with a preferred embodiment of the present invention;
FIG. 3 is a functional block diagram of a small, integrated host processor node in accordance with a preferred embodiment of the present invention;
FIG. 4 is a functional block diagram of a large host processor node in accordance with a preferred embodiment of the present invention;
FIG. 5 is a diagram illustrating the key elements of the parallel Peripheral Computer Interface (PCI) bus protocol in accordance with a preferred embodiment of the present invention;
FIG. 6 is a diagram illustrating the key elements of the serial PCI bus protocol in accordance with a preferred embodiment of the present invention;
FIG. 7 is a diagram illustrating the I/O virtualization functions provided in a host processor node in order to provide virtual host access isolation in accordance with a preferred embodiment of the present invention;
FIG. 8 is a diagram illustrating the control fields used in the PCI bus transaction to identify a virtual adapter or system image in accordance with a preferred embodiment of the present invention;
FIG. 9 is a diagram illustrating the adapter resources that are virtualized in order to allow an adapter to directly access virtual host resources, to allow a virtual host to directly access adapter resources, and to allow a non-PCI port on the adapter to access resources on the adapter or host, in accordance with a preferred embodiment of the present invention;
FIG. 10 is a diagram illustrating the creation of the three access control levels used to manage a PCI family adapter that supports I/O virtualization in accordance with a preferred embodiment of the present invention;
FIG. 11 is a diagram illustrating how host memory that is associated with a system image is made available to a virtual adapter that is associated with a system image through an LPAR manager in accordance with a preferred embodiment of the present invention;
FIG. 12 is a diagram illustrating how a PCI family adapter allows an LPAR manager to associate memory in the PCI adapter to a system image and its associated virtual adapter in accordance with a preferred embodiment of the present invention;
FIG. 13 is a diagram illustrating one of the options for determining which virtual adapter is associated with an incoming memory address, so as to assure that the functions performed by an incoming PCI bus transaction are within the scope of the virtual adapter that is associated with the memory address referenced in the incoming PCI bus transaction, in accordance with a preferred embodiment of the present invention;
FIG. 14 is a diagram illustrating one of the options for determining which virtual adapter is associated with a PCI-X or PCI-E bus transaction, so as to assure that the functions performed by an incoming PCI bus transaction are within the scope of the virtual adapter that is associated with the requester bus number, requester device number, and requester function number referenced in the incoming PCI bus transaction, in accordance with a preferred embodiment of the present invention;
FIG. 15 is a diagram of an example resource allocation in accordance with a preferred embodiment of the present invention;
FIG. 16 is a diagram illustrating the resource context of an internal adapter structure in accordance with a preferred embodiment of the present invention;
FIG. 17 is a diagram illustrating a mapping of adapter internal structures to the bus adapter space in accordance with a preferred embodiment of the present invention;
FIGS. 18A and 18B are diagrams illustrating resource context mappings from memory to adapter address space according to a preferred embodiment of the present invention;
FIG. 19 is a diagram illustrating I/O address decoding in accordance with a preferred embodiment of the present invention; and
FIG. 20 is a flowchart of a process for implementing dynamic resource allocation of a virtualized I/O adapter in accordance with a preferred embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The present invention applies to any general or special purpose host that uses a PCI family I/O adapter to directly attach storage or to attach to a network, where the network consists of endnodes, switches, routers, and the links interconnecting these components. The network links can be Fibre Channel, Ethernet, InfiniBand, Advanced Switching Interconnect, or a proprietary link that uses proprietary or standard protocols.
With reference now to the figures and in particular with reference to FIG. 1, a diagram of a distributed computer system is illustrated in accordance with a preferred embodiment of the present invention. The distributed computer system represented in FIG. 1 takes the form of a network, such as network 120, and is provided merely for illustrative purposes; the embodiments of the present invention described below can be implemented on computer systems of numerous other types and configurations. Two switches (or routers) are shown inside of network 120—switch 116 and switch 140. Switch 116 connects to small host node 100 through port 112. Small host node 100 also contains a second type of port 104, which connects to a direct attached storage subsystem, such as direct attached storage 108.
Network 120 can also attach large host node 124 through port 136, which attaches to switch 140. Large host node 124 can also contain a second type of port 128, which connects to a direct attached storage subsystem, such as direct attached storage 132.
Network 120 can also attach a small integrated host node 144, which is connected to network 120 through port 148, which attaches to switch 140. Small integrated host node 144 can also contain a second type of port 152, which connects to a direct attached storage subsystem, such as direct attached storage 156.
Turning next to FIG. 2, a functional block diagram of a small host node is depicted in accordance with a preferred embodiment of the present invention. Small host node 202 is an example of a host processor node, such as small host node 100 shown in FIG. 1.
In this example, small host node 202 includes two processor I/O hierarchies, such as processor I/O hierarchies 200 and 203, which are interconnected through link 201. In the illustrative example of FIG. 2, processor I/O hierarchy 200 includes processor chip 207, which includes one or more processors and their associated caches. Processor chip 207 is connected to memory 212 through link 208. One of the links on the processor chip, such as link 220, connects to PCI family I/O bridge 228. PCI family I/O bridge 228 has one or more PCI family (e.g., PCI, PCI-X, PCI-Express, or any future generation of PCI) links that are used to connect other PCI family I/O bridges or a PCI family I/O adapter, such as PCI family adapter 244 and PCI family adapter 245, through a PCI link, such as links 232, 236, and 240. PCI family adapter 245 can also be used to connect to a network, such as network 264, through a link via either a switch or router, such as switch or router 260. PCI family adapter 244 can be used to connect direct attached storage, such as direct attached storage 252, through link 248. Processor I/O hierarchy 203 may be configured in a manner similar to that shown and described with reference to processor I/O hierarchy 200.
With reference now to FIG. 3, a functional block diagram of a small integrated host node is depicted in accordance with a preferred embodiment of the present invention. Small integrated host node 302 is an example of a host processor node, such as small integrated host node 144 shown in FIG. 1.
In this example, small integrated host node 302 includes two processor I/O hierarchies 300 and 303, which are interconnected through link 301. In the illustrative example, processor I/O hierarchy 300 includes processor chip 304, which is representative of one or more processors and associated caches. Processor chip 304 is connected to memory 312 through link 308. One of the links on the processor chip, such as link 330, connects to a PCI family adapter, such as PCI family adapter 345. Processor chip 304 has one or more PCI family (e.g., PCI, PCI-X, PCI-Express, or any future generation of PCI) links that are used to connect either PCI family I/O bridges or a PCI family I/O adapter, such as PCI family adapter 344 and PCI family adapter 345, through a PCI link, such as links 316, 330, and 324. PCI family adapter 345 can also be used to connect with a network, such as network 364, through link 356 via either a switch or router, such as switch or router 360. PCI family adapter 344 can be used to connect with direct attached storage 352 through link 348.
Turning now to FIG. 4, a functional block diagram of a large host node is depicted in accordance with a preferred embodiment of the present invention. Large host node 402 is an example of a host processor node, such as large host node 124 shown in FIG. 1.
In this example, large host node 402 includes two processor I/O hierarchies 400 and 403 interconnected through link 401. In the illustrative example of FIG. 4, processor I/O hierarchy 400 includes processor chip 404, which is representative of one or more processors and associated caches. Processor chip 404 is connected to memory 412 through link 408. One of the links, such as link 440, on the processor chip connects to a PCI family I/O hub, such as PCI family I/O hub 441. The PCI family I/O hub uses a network 442 to attach to a PCI family I/O bridge 448. That is, PCI family I/O bridge 448 is connected to switch or router 436 through link 432, and switch or router 436 also attaches to PCI family I/O hub 441 through link 443. Network 442 allows the PCI family I/O hub and PCI family I/O bridge to be placed in different packages. PCI family I/O bridge 448 has one or more PCI family (e.g., PCI, PCI-X, PCI-Express, or any future generation of PCI) links that are used to connect with other PCI family I/O bridges or a PCI family I/O adapter, such as PCI family adapter 456 and PCI family adapter 457, through a PCI link, such as links 444, 446, and 452. PCI family adapter 456 can be used to connect direct attached storage 476 through link 460. PCI family adapter 457 can also be used to connect with network 464 through link 468 via, for example, either a switch or router 472.
Turning next to FIG. 5, illustrations of the phases contained in a PCI bus transaction 500 and a PCI-X bus transaction 520 are depicted in accordance with a preferred embodiment of the present invention. PCI bus transaction 500 depicts a conventional PCI bus transaction that forms the unit of information which is transferred through a PCI fabric for conventional PCI. PCI-X bus transaction 520 depicts the PCI-X bus transaction that forms the unit of information which is transferred through a PCI fabric for PCI-X.
PCI bus transaction 500 shows three phases: an address phase 508; a data phase 512; and a turnaround cycle 516. Also depicted is the arbitration for next transfer 504, which can occur simultaneously with the address, data, and turnaround cycle phases. For PCI, the address contained in the address phase is used to route a bus transaction from the adapter to the host and from the host to the adapter.
PCI-X transaction 520 shows five phases: an address phase 528; an attribute phase 532; a response phase 560; a data phase 564; and a turnaround cycle 566. Also depicted is the arbitration for next transfer 524, which can occur simultaneously with the address, attribute, response, data, and turnaround cycle phases. Similar to conventional PCI, PCI-X uses the address contained in the address phase to route a bus transaction from the adapter to the host and from the host to the adapter. However, PCI-X adds the attribute phase 532, which contains three fields that define the bus transaction requester, namely: requester bus number 544, requester device number 548, and requester function number 552 (collectively referred to herein as a BDF). The bus transaction also contains a tag 540 that uniquely identifies the specific bus transaction in relation to other bus transactions that are outstanding between the requester and a responder. The byte count 556 contains a count of the number of bytes being sent.
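For illustration, the requester-identifying fields described above can be modeled as a simple data structure. The names below are hypothetical; the field widths follow the PCI convention of an 8-bit bus number, a 5-bit device number, and a 3-bit function number.

    /* Illustrative sketch (hypothetical names): the requester-identifying
     * fields carried in the PCI-X attribute phase. */
    #include <stdint.h>

    typedef struct pci_requester_id {
        uint8_t bus;        /* requester bus number (8 bits) */
        uint8_t device;     /* requester device number (5 bits used) */
        uint8_t function;   /* requester function number (3 bits used) */
    } pci_requester_id_t;

    typedef struct pcix_attribute_phase {
        pci_requester_id_t requester;   /* identifies the transaction source */
        uint8_t            tag;         /* distinguishes outstanding transactions */
        uint16_t           byte_count;  /* number of bytes being sent */
    } pcix_attribute_phase_t;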
Turning now to FIG. 6, an illustration of the phases contained in a PCI-Express bus transaction is depicted in accordance with a preferred embodiment of the present invention. PCI-E bus transaction 600 forms the unit of information which is transferred through a PCI fabric for PCI-E.
PCI-E bus transaction 600 shows six phases: frame phase 608; sequence number 612; header 664; data phase 668; cyclical redundancy check (CRC) 672; and frame phase 680. PCI-E header 664 contains a set of fields defined in the PCI-Express specification. The requester identifier (ID) field 628 contains three fields that define the bus transaction requester, namely: requester bus number 684, requester device number 688, and requester function number 692. The PCI-E header also contains tag 652, which uniquely identifies the specific bus transaction in relation to other bus transactions that are outstanding between the requester and a responder. The length field 644 contains a count of the number of bytes being sent.
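In PCI-Express, the three requester fields are packed into a single 16-bit requester ID, with the bus number in bits 15:8, the device number in bits 7:3, and the function number in bits 2:0. A minimal decoding sketch, reusing the pci_requester_id_t structure from the sketch above:

    /* Illustrative sketch: unpacking a 16-bit PCI-Express requester ID. */
    static pci_requester_id_t decode_requester_id(uint16_t rid)
    {
        pci_requester_id_t id;
        id.bus      = (uint8_t)(rid >> 8);           /* bits 15:8 */
        id.device   = (uint8_t)((rid >> 3) & 0x1F);  /* bits 7:3  */
        id.function = (uint8_t)(rid & 0x07);         /* bits 2:0  */
        return id;
    }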
With reference now to FIG. 7, a functional block diagram of a PCI adapter, such as PCI family adapter 736, and the firmware and software that run on host hardware (e.g., a processor with possibly an I/O hub or I/O bridge), such as host hardware 700, is depicted in accordance with a preferred embodiment of the present invention.
FIG. 7 also shows a logical partitioning (LPAR) manager 708 running on host hardware 700. LPAR manager 708 may be implemented as a Hypervisor manufactured by International Business Machines, Inc. of Armonk, N.Y. LPAR manager 708 can run in firmware, software, or a combination of the two. LPAR manager 708 hosts two system image (SI) partitions, such as system image 712 and system image 724 (illustratively designated system image 1 and system image 2, respectively). The system image partitions may be respective operating systems running in software, a special purpose image running in software, such as a storage block server or storage file server image, or a special purpose image running in firmware. Applications can run on these system images, such as applications 716, 720, 728, and 732 (illustratively designated application 1A, application 2, application 1B, and application 3). Applications 716 and 728 are representative of separate instances of a common application program, and are thus illustratively designated with respective references of “1A” and “1B”. In the illustrative example, applications 716 and 720 run on system image 712, and applications 728 and 732 run on system image 724. As referred to herein, a virtual host comprises a system image, such as system image 712, or the combination of a system image and applications running within the system image. Thus, two virtual hosts are depicted in FIG. 7.
PCI family adapter 736 contains a set of physical adapter configuration resources 740 and physical adapter memory resources 744. The physical adapter configuration resources 740 and physical adapter memory resources 744 contain information describing the number of virtual adapters that PCI family adapter 736 can support and the physical resources allocated to each virtual adapter. As referred to herein, a virtual adapter is an allocation of a subset of physical adapter resources and virtualized resources, such as a subset of physical adapter resources and physical adapter memory, that is associated with a logical partition, such as system image 712 and applications 716 and 720 running on system image 712, as described more fully hereinbelow. LPAR manager 708 is provided a physical configuration resource interface 738 and a physical configuration memory interface 742 to read and write into the physical adapter configuration resource and memory spaces during the adapter's initial configuration and reconfiguration. Through the physical configuration resource interface 738 and physical configuration memory interface 742, LPAR manager 708 creates virtual adapters and assigns physical resources to each virtual adapter. LPAR manager 708 may use one of the system images, for example a special software or firmware partition, as a hosting partition that uses physical configuration resource interface 738 and physical configuration memory interface 742 to perform a portion, or even all, of the virtual adapter initial configuration and reconfiguration functions.
FIG. 7 shows PCI family adapter 736 configured with two virtual adapters. A first virtual adapter (designated virtual adapter 1) comprises virtual adapter resources 748 and virtual adapter memory 752 that were assigned by LPAR manager 708 and that are associated with system image 712 (designated system image 1). Similarly, a second virtual adapter (designated virtual adapter 2) comprises virtual adapter resources 756 and virtual adapter memory 760 that were assigned by LPAR manager 708 to virtual adapter 2 and that are associated with another system image 724 (designated system image 2). For an adapter used to connect to direct attached storage, such as direct attached storage 108, 132, or 156 shown in FIG. 1, examples of virtual adapter resources may include: the list of the associated physical disks, a list of the associated logical unit numbers, and a list of the associated adapter functions (e.g., redundant arrays of inexpensive disks (RAID) level). For an adapter used to connect to a network, such as network 120 of FIG. 1, examples of virtual adapter resources may include: a list of the associated link level identifiers, a list of the associated network level identifiers, a list of the associated virtual fabric identifiers (e.g., Virtual LAN IDs for Ethernet fabrics, N-port IDs for Fibre Channel fabrics, and partition keys for InfiniBand fabrics), and a list of the associated network layer functions (e.g., network offload services).
After LPAR manager 708 configures PCI family adapter 736, each system image is allowed to communicate only with the virtual adapters that were associated with that system image by LPAR manager 708. As shown in FIG. 7 (by solid lines), system image 712 is allowed to directly communicate with virtual adapter resources 748 and virtual adapter memory 752 of virtual adapter 1. System image 712 is not allowed to directly communicate with virtual adapter resources 756 and virtual adapter memory 760 of virtual adapter 2, as shown in FIG. 7 by dashed lines. Similarly, system image 724 is allowed to directly communicate with virtual adapter resources 756 and virtual adapter memory 760 of virtual adapter 2, and is not allowed to directly communicate with virtual adapter resources 748 and virtual adapter memory 752 of virtual adapter 1.
With reference now to FIG. 8, a component, such as a processor, I/O hub, or I/O bridge 800, inside a host node, such as small host node 100, large host node 124, or small integrated host node 144 shown in FIG. 1, that attaches a PCI family adapter, such as PCI family adapter 804, through a PCI-X or PCI-E link, such as PCI-X or PCI-E link 808, is depicted in accordance with a preferred embodiment of the present invention.
FIG. 8 shows that when a system image, such as system image 712 or 724, or LPAR manager 708 shown in FIG. 7 performs a PCI-X or PCI-E bus transaction, such as host to adapter PCI-X or PCI-E bus transaction 812, the processor, I/O hub, or I/O bridge 800 that connects to PCI-X or PCI-E link 808 and issues the host to adapter PCI-X or PCI-E bus transaction 812 fills in the bus number, device number, and function number fields in the PCI-X or PCI-E bus transaction. The processor, I/O hub, or I/O bridge 800 has two options for how to fill in these three fields: it can either use the same bus number, device number, and function number for all software components that use the processor, I/O hub, or I/O bridge 800, or it can use a different bus number, device number, and function number for each software component that uses the processor, I/O hub, or I/O bridge 800. The originator or initiator of the transaction may be a software component, such as system image 712 or system image 724 (or an application running on a system image), or LPAR manager 708.
If the processor, I/O hub, or I/O bridge 800 uses the same bus number, device number, and function number for all transaction initiators, then when a software component initiates a PCI-X or PCI-E bus transaction, such as host to adapter PCI-X or PCI-E bus transaction 812, the processor, I/O hub, or I/O bridge 800 places its own bus number in the PCI-X or PCI-E bus transaction's requester bus number 820 field, such as requester bus number 544 field of the PCI-X transaction shown in FIG. 5 or requester bus number 684 field of the PCI-E transaction shown in FIG. 6. Similarly, the processor, I/O hub, or I/O bridge 800 places its own device number in the PCI-X or PCI-E bus transaction's requester device number 824 field, such as requester device number 548 field shown in FIG. 5 or requester device number 688 field shown in FIG. 6. Finally, the processor, I/O hub, or I/O bridge 800 places its own function number in the PCI-X or PCI-E bus transaction's requester function number 828 field, such as requester function number 552 field shown in FIG. 5 or requester function number 692 field shown in FIG. 6. The processor, I/O hub, or I/O bridge 800 also places in the PCI-X or PCI-E bus transaction the physical or virtual adapter memory address to which the transaction is targeted, as shown by the adapter resource or address 816 field in FIG. 8.
If the processor, I/O hub, or I/O bridge 800 uses a different bus number, device number, and function number for each transaction initiator, then the processor, I/O hub, or I/O bridge 800 assigns a bus number, device number, and function number to each transaction initiator. When a software component initiates a PCI-X or PCI-E bus transaction, such as host to adapter PCI-X or PCI-E bus transaction 812, the processor, I/O hub, or I/O bridge 800 places the software component's bus number in the PCI-X or PCI-E bus transaction's requester bus number 820 field, such as requester bus number 544 field shown in FIG. 5 or requester bus number 684 field shown in FIG. 6. Similarly, the processor, I/O hub, or I/O bridge 800 places the software component's device number in the PCI-X or PCI-E bus transaction's requester device number 824 field, such as requester device number 548 field shown in FIG. 5 or requester device number 688 field shown in FIG. 6. Finally, the processor, I/O hub, or I/O bridge 800 places the software component's function number in the PCI-X or PCI-E bus transaction's requester function number 828 field, such as requester function number 552 field shown in FIG. 5 or requester function number 692 field shown in FIG. 6. The processor, I/O hub, or I/O bridge 800 also places in the PCI-X or PCI-E bus transaction the physical or virtual adapter memory address to which the transaction is targeted, as shown by the adapter resource or address 816 field in FIG. 8.
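A minimal sketch of the second option, reusing the structures from the sketches above (all names are hypothetical): the processor, I/O hub, or I/O bridge keeps a table mapping each transaction initiator to its assigned bus, device, and function number, and stamps that assignment into the requester fields of every transaction the initiator issues; analogous stamping applies to the PCI-E header fields.

    /* Illustrative sketch (hypothetical names) of per-initiator BDF
     * assignment and requester-field stamping. */
    #define MAX_INITIATORS 16

    static pci_requester_id_t initiator_bdf[MAX_INITIATORS];

    /* Configuration time: record the BDF assigned to an initiator. */
    static void assign_initiator_bdf(unsigned initiator, pci_requester_id_t bdf)
    {
        initiator_bdf[initiator] = bdf;
    }

    /* Per transaction: fill the requester fields from the assignment. */
    static void stamp_requester_fields(unsigned initiator,
                                       pcix_attribute_phase_t *attr)
    {
        attr->requester = initiator_bdf[initiator];
    }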
FIG. 8 also shows that when physical or virtual adapter 806 performs PCI-X or PCI-E bus transactions, such as adapter to host PCI-X or PCI-E bus transaction 832, the PCI family adapter, such as PCI family adapter 804, that connects to PCI-X or PCI-E link 808 and issues the adapter to host PCI-X or PCI-E bus transaction 832 places the bus number, device number, and function number associated with the physical or virtual adapter that initiated the bus transaction in the requester bus number, device number, and function number 836, 840, and 844 fields. Notably, to support more than one bus or device number, PCI family adapter 804 must support one or more internal busses (for a PCI-X adapter, see the PCI-X Addendum to the PCI Local Bus Specification Revision 1.0 or 1.0a; for a PCI-E adapter, see the PCI-Express Base Specification Revision 1.0 or 1.0a, the details of which are herein incorporated by reference). To perform this function, LPAR manager 708 associates each physical or virtual adapter to a software component running on the host by assigning a bus number, device number, and function number to the physical or virtual adapter. When the physical or virtual adapter initiates an adapter to host PCI-X or PCI-E bus transaction, PCI family adapter 804 places the physical or virtual adapter's bus number in the PCI-X or PCI-E bus transaction's requester bus number 836 field, such as requester bus number 544 field shown in FIG. 5 or requester bus number 684 field shown in FIG. 6 (shown in FIG. 8 as adapter bus number 836). Similarly, PCI family adapter 804 places the physical or virtual adapter's device number in the PCI-X or PCI-E bus transaction's requester device number 840 field, such as requester device number 548 field shown in FIG. 5 or requester device number 688 field shown in FIG. 6 (shown in FIG. 8 as adapter device number 840). PCI family adapter 804 places the physical or virtual adapter's function number in the PCI-X or PCI-E bus transaction's requester function number 844 field, such as requester function number 552 field shown in FIG. 5 or requester function number 692 field shown in FIG. 6 (shown in FIG. 8 as adapter function number 844). Finally, PCI family adapter 804 also places in the PCI-X or PCI-E bus transaction the memory address of the software component that is associated with, and targeted by, the physical or virtual adapter, in the host resource or address 848 field.
With reference now to FIG. 9, a functional block diagram of a PCI adapter with two virtual adapters is depicted in accordance with a preferred embodiment of the present invention. Exemplary PCI family adapter 900 is configured with two virtual adapters 916 and 920 (illustratively designated virtual adapter 1 and virtual adapter 2). PCI family adapter 900 may contain one (or more) PCI family adapter ports (also referred to herein as upstream ports), such as PCI-X or PCI-E adapter port 912, that interface with a host system, such as small host node 100, large host node 124, or small integrated host node 144 shown in FIG. 1. PCI family adapter 900 may also contain one (or more) device or network ports (also referred to herein as downstream ports), such as physical port 904 and physical port 908, that interface with a peripheral or network device.
FIG. 9 also shows the types of resources that can be virtualized on a PCI adapter. The resources of PCI family adapter 900 that may be virtualized include processing queues, address and configuration memory, adapter PCI ports, host memory management resources, and downstream physical ports, such as device or network ports. In the illustrative example, virtualized resources of PCI family adapter 900 allocated to virtual adapter 916 include, for example, processing queues 924, address and configuration memory 928, PCI virtual port 936 that is a virtualization of adapter PCI port 912, host memory management resources 984 (such as memory region registration and memory window binding resources on InfiniBand or iWARP), and virtual device or network ports, such as virtual external port 932 and virtual external port 934 that are virtualizations of physical ports 904 and 908. PCI virtual ports and virtual device and network ports are also referred to herein simply as virtual ports. Similarly, virtualized resources of PCI family adapter 900 allocated to virtual adapter 920 include, for example, processing queues 940, address and configuration memory 944, PCI virtual port 952 that is a virtualization of adapter PCI port 912, host memory management resources 980, and virtual device or network ports, such as virtual external port 948 and virtual external port 950 that are respectively virtualizations of physical ports 904 and 908.
Turning next to FIG. 10, a functional block diagram of the access control levels on a PCI family adapter, such as PCI family adapter 900 shown in FIG. 9, is depicted in accordance with a preferred embodiment of the present invention. The three levels of access are a super-privileged physical resource allocation level 1000, a privileged virtual resource allocation level 1008, and a non-privileged level 1016.
The functions performed at the super-privileged physical resource allocation level 1000 include, but are not limited to: PCI family adapter queries; creation, modification, and deletion of virtual adapters; submission and retrieval of work; reset and recovery of the physical adapter; and allocation of physical resources to a virtual adapter instance. The PCI family adapter queries are used to determine, for example, the physical adapter type (e.g., Fibre Channel, Ethernet, iSCSI, parallel SCSI), the functions supported on the physical adapter, and the number of virtual adapters supported by the PCI family adapter. The LPAR manager, such as LPAR manager 708 shown in FIG. 7, performs the physical adapter resource management 1004 functions associated with super-privileged physical resource allocation level 1000. However, the LPAR manager may use a system image, for example an I/O hosting partition, to perform the physical adapter resource management 1004 functions.
The functions performed at the privileged virtual resource allocation level 1008 include, for example, virtual adapter queries, allocation and initialization of virtual adapter resources, reset and recovery of virtual adapter resources, submission and retrieval of work through virtual adapter resources, and, for virtual adapters that support offload services, allocation and assignment of virtual adapter resources to a middleware process or thread instance. The virtual adapter queries are used to determine the virtual adapter type (e.g., Fibre Channel, Ethernet, iSCSI, parallel SCSI) and the functions supported on the virtual adapter. A system image, such as system image 712 shown in FIG. 7, performs the privileged virtual adapter resource management 1012 functions associated with virtual resource allocation level 1008.
Finally, the functions performed at the non-privileged level 1016 include, for example, query of virtual adapter resources that have been assigned to software running at the non-privileged level 1016 and submission and retrieval of work through virtual adapter resources that have been assigned to software running at the non-privileged level 1016. An application, such as application 716 shown in FIG. 7, performs the virtual adapter access library 1020 functions associated with non-privileged level 1016.
Turning next to FIG. 11, a functional block diagram of host memory addresses that are made accessible to a PCI family adapter is depicted in accordance with a preferred embodiment of the present invention. PCI family adapter 1101 is an example of PCI family adapter 900 that may have virtualized resources as described above in FIG. 9.
FIG. 11 depicts four different mechanisms by which LPAR manager 708 can associate host memory to a system image and to a virtual adapter. Once host memory has been associated with a system image and a virtual adapter, the virtual adapter can then perform DMA write and read operations directly to the host memory. System images 1108 and 1116 are examples of system images, such as system images 712 and 724 described above with reference to FIG. 7, that are respectively associated with virtual adapters 1104 and 1112. Virtual adapters 1104 and 1112 are examples of virtual adapters, such as virtual adapters 916 and 920 described above with reference to FIG. 9, that comprise respective allocations of virtual adapter resources and virtual adapter memory.
The first exemplary mechanism that LPAR manager 708 can use to associate and make available host memory to a system image and to one or more virtual adapters is to write into the virtual adapter's resources a system image association list 1122. Virtual adapter resources 1120 contain a list of PCI bus addresses, where each PCI bus address in the list is associated by the platform hardware with the starting address of a system image (SI) page, such as SI 1 page 1 1128 through SI 1 page N 1136 allocated to system image 1108. Virtual adapter resources 1120 also contain the page size, which is equal for all the pages in the list. At initial configuration, and during reconfigurations, LPAR manager 708 loads system image association list 1122 into virtual adapter resources 1120. The system image association list 1122 defines the set of addresses that virtual adapter 1104 can use in DMA write and read operations. After the system image association list 1122 has been created, virtual adapter 1104 must validate that each DMA write or DMA read requested by system image 1108 is contained within a page in the system image association list 1122. If the DMA write or DMA read requested by system image 1108 is contained within a page in the system image association list 1122, then virtual adapter 1104 may perform the operation. Otherwise, virtual adapter 1104 is prohibited from performing the operation. Alternatively, PCI family adapter 1101 may use a special, LPAR manager-style virtual adapter (rather than virtual adapter 1104) to perform the check that determines whether a DMA write or DMA read requested by system image 1108 is contained within a page in the system image association list 1122. In a similar manner, virtual adapter 1112 associated with system image 1116 validates DMA write or read requests submitted by system image 1116. Particularly, virtual adapter 1112 provides validation for DMA read and write requests from system image 1116 by determining whether the DMA write or read request is within a page in a system image association list (configured in a manner similar to system image association list 1122) associated with system image pages of system image 1116.
The second mechanism that LPAR manager 708 can use to associate and make available host memory to a system image and to one or more virtual adapters is to write a starting page address and page size into system image association list 1122 in the virtual adapter's resources. For example, virtual adapter resources 1120 may contain a single PCI bus address that is associated by the platform hardware with the starting address of a system image page, such as SI 1 page 1 1128. System image association list 1122 in virtual adapter resources 1120 also contains the size of the page. At initial configuration, and during reconfigurations, LPAR manager 708 loads the page size and starting page address into system image association list 1122 in virtual adapter resources 1120. The system image association list 1122 defines the set of addresses that virtual adapter 1104 can use in DMA write and read operations. After the system image association list 1122 has been created, virtual adapter 1104 validates whether each DMA write or DMA read requested by system image 1108 is contained within a page in system image association list 1122. If the DMA write or DMA read requested by system image 1108 is contained within a page in the system image association list 1122, then virtual adapter 1104 may perform the operation. Otherwise, virtual adapter 1104 is prohibited from performing the operation. Alternatively, PCI family adapter 1101 may use a special, LPAR manager-style virtual adapter (rather than virtual adapter 1104) to perform the check that determines whether a DMA write or DMA read requested by system image 1108 is contained within a page in the system image association list 1122. In a similar manner, virtual adapter 1112 associated with system image 1116 may validate DMA write or read requests submitted by system image 1116. Particularly, a system image association list similar to system image association list 1122 may be associated with virtual adapter 1112. The system image association list associated with virtual adapter 1112 is loaded with a page size and starting page address of a system image page of system image 1116 associated with virtual adapter 1112. The system image association list associated with virtual adapter 1112 thus provides a mechanism for validation of DMA read and write requests from system image 1116 by determining whether the DMA write or read request is within a page in a system image association list associated with system image pages of system image 1116.
The third mechanism that LPAR manager 708 can use to associate and make available host memory to a system image and to one or more virtual adapters is to write into the virtual adapter's resources a system image buffer association list 1154. In FIG. 11, virtual adapter resources 1150 contain a list of PCI bus address pairs (starting and ending address), where each pair of PCI bus addresses in the list is associated by the platform hardware with a pair (starting and ending) of addresses of a system image buffer, such as SI 2 Buffer 1 1166 through SI 2 Buffer N 1180 allocated to system image 1116. At initial configuration, and during reconfigurations, LPAR manager 708 loads system image buffer association list 1154 into virtual adapter resources 1150. The system image buffer association list 1154 defines the set of addresses that virtual adapter 1112 can use in DMA write and read operations. After the system image buffer association list 1154 has been created, virtual adapter 1112 validates whether each DMA write or DMA read requested by system image 1116 is contained within a buffer in system image buffer association list 1154. If the DMA write or DMA read requested by system image 1116 is contained within a buffer in the system image buffer association list 1154, then virtual adapter 1112 may perform the operation. Otherwise, virtual adapter 1112 is prohibited from performing the operation. Alternatively, PCI family adapter 1101 may use a special, LPAR manager-style virtual adapter (rather than virtual adapter 1112) to perform the check that determines whether a DMA write or DMA read requested by system image 1116 is contained within a buffer in the system image buffer association list 1154. In a similar manner, virtual adapter 1104 associated with system image 1108 may validate DMA write or read requests submitted by system image 1108. Particularly, virtual adapter 1104 provides validation for DMA read and write requests from system image 1108 by determining whether the DMA write or read requested by system image 1108 is contained within a buffer in a buffer association list that contains PCI bus starting and ending address pairs in association with system image buffer starting and ending address pairs of buffers allocated to system image 1108, in a manner similar to that described above for system image 1116 and virtual adapter 1112.
The fourth mechanism that LPAR manager 708 can use to associate and make available host memory to a system image and to one or more virtual adapters is to write into the virtual adapter's resources a single starting and ending address in system image buffer association list 1154. In this implementation, virtual adapter resources 1150 contain a single pair of PCI bus starting and ending addresses that is associated by the platform hardware with a pair (starting and ending) of addresses associated with a system image buffer, such as SI 2 Buffer 1 1166. At initial configuration, and during reconfigurations, LPAR manager 708 loads the starting and ending addresses of SI 2 Buffer 1 1166 into the system image buffer association list 1154 in virtual adapter resources 1150. The system image buffer association list 1154 then defines the set of addresses that virtual adapter 1112 can use in DMA write and read operations. After the system image buffer association list 1154 has been created, virtual adapter 1112 validates whether each DMA write or DMA read requested by system image 1116 is contained within the system image buffer association list 1154. If the DMA write or DMA read requested by system image 1116 is contained within the system image buffer association list 1154, then virtual adapter 1112 may perform the operation. Otherwise, virtual adapter 1112 is prohibited from performing the operation. Alternatively, PCI family adapter 1101 may use a special, LPAR manager-style virtual adapter (rather than virtual adapter 1112) to perform the check that determines whether a DMA write or DMA read requested by system image 1116 is contained within the system image buffer association list 1154. In a similar manner, virtual adapter 1104 associated with system image 1108 may validate DMA write or read requests submitted by system image 1108. Particularly, virtual adapter 1104 provides validation for DMA read and write requests from system image 1108 by determining whether the DMA write or read requested by system image 1108 is contained within a buffer in a buffer association list that contains a single PCI bus starting and ending address pair in association with a system image buffer starting and ending address pair allocated to system image 1108, in a manner similar to that described above for system image 1116 and virtual adapter 1112.
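The validation step shared by the four mechanisms above can be sketched as follows, with hypothetical names: a page entry is a starting address plus the common page size, and a buffer entry is a starting and ending address pair, so both list forms reduce to a range check.

    /* Illustrative sketch (hypothetical names) of the DMA validation
     * performed against a system image association list. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct si_assoc_entry {
        uint64_t start;     /* PCI bus starting address */
        uint64_t end;       /* exclusive; start + page_size for page entries */
    } si_assoc_entry_t;

    typedef struct si_assoc_list {
        si_assoc_entry_t *entries;
        size_t            count;  /* 1 for the single-page or single-buffer case */
    } si_assoc_list_t;

    /* A DMA write or read is allowed only if the whole requested range
     * [addr, addr + len) lies inside one entry of the association list;
     * otherwise the virtual adapter is prohibited from performing it. */
    static bool dma_request_allowed(const si_assoc_list_t *list,
                                    uint64_t addr, uint64_t len)
    {
        for (size_t i = 0; i < list->count; i++) {
            const si_assoc_entry_t *e = &list->entries[i];
            if (addr >= e->start && addr <= e->end && len <= e->end - addr)
                return true;    /* entirely contained in this entry */
        }
        return false;
    }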
Turning next to FIG. 12, a functional block diagram of a PCI family adapter configured with memory addresses that are made accessible to a system image is depicted in accordance with a preferred embodiment of the present invention.
FIG. 12 depicts four different mechanisms by which an LPAR manager can associate PCI family adapter memory to a virtual adapter, such as virtual adapter 1204, and to a system image, such as system image 1208. Once PCI family adapter memory has been associated with a system image and a virtual adapter, the system image can then perform Memory Mapped I/O write and read (i.e., store and load) operations directly to the PCI family adapter memory.
A notable difference exists between the system image and virtual adapter configurations shown in FIG. 11 and FIG. 12. In the configuration shown in FIG. 11, PCI family adapter 1101 only holds a list of host addresses that do not have any local memory associated with them. If the PCI family adapter supports flow-through traffic, then data arriving on an external port can directly flow through the PCI family adapter and be transferred, through DMA writes, directly into these host addresses. Similarly, if the PCI family adapter supports flow-through traffic, then data from these host addresses can directly flow through the PCI family adapter and be transferred out of an external port. Accordingly, PCI family adapter 1101 shown in FIG. 11 does not include local adapter memory and thus is unable to initiate a DMA operation. On the other hand, PCI family adapter 1201 shown in FIG. 12 has local adapter memory that is associated with the list of host memory addresses. PCI family adapter 1201 can initiate, for example, DMA writes from its local memory to the host memory or DMA reads from the host memory to its local memory. Similarly, the host can initiate, for example, Memory Mapped I/O writes from its local memory to the PCI family adapter memory or Memory Mapped I/O reads from the PCI family adapter memory to the host's local memory.
The first and second mechanisms that LPAR manager 708 can use to associate and make available PCI family adapter memory to a system image and to a virtual adapter are to write into the PCI family adapter's physical adapter memory translation table 1290 a page size and the starting address of one (first mechanism) or more (second mechanism) pages. In this case, all pages have the same size. For example, FIG. 12 depicts a set of pages that have been mapped between system image 1208 and virtual adapter 1204. Particularly, SI 1 Page 1 1224 through SI 1 Page N 1242 of system image 1208 are mapped (illustratively shown by interconnected arrows) to virtual adapter memory pages 1224-1232 of physical adapter 1201 local memory. For system image 1208, all associated pages 1224-1242 in the list have the same size. At initial configuration, and during reconfigurations, LPAR manager 708 loads the PCI family adapter's physical adapter memory translation table 1290 with the page size and the starting address of one or more pages. The physical adapter memory translation table 1290 then defines the set of addresses that virtual adapter 1204 can use in DMA write and read operations. After physical adapter memory translation table 1290 has been created, PCI family adapter 1201 (or virtual adapter 1204) validates that each DMA write or DMA read requested by system image 1208 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1204. If the DMA write or DMA read requested by system image 1208 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1204, then virtual adapter 1204 may perform the operation. Otherwise, virtual adapter 1204 is prohibited from performing the operation. The physical adapter memory translation table 1290 also defines the set of addresses that system image 1208 can use in Memory Mapped I/O (MMIO) write and read operations. After physical adapter memory translation table 1290 has been created, PCI family adapter 1201 (or virtual adapter 1204) validates whether the MMIO write or read requested by system image 1208 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1204. If the MMIO write or MMIO read requested by system image 1208 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1204, then virtual adapter 1204 may perform the operation. Otherwise, virtual adapter 1204 is prohibited from performing the operation. It should be understood that in the present example, other system images and associated virtual adapters, e.g., system image 1216 and virtual adapter 1212, are configured in a similar manner for PCI family adapter 1201 (or virtual adapter 1212) validation of DMA operations and MMIO operations requested by system image 1216.
The third and fourth mechanisms that LPAR manager 708 can use to associate and make available PCI family adapter memory to a system image and to a virtual adapter are to write into the PCI family adapter's physical adapter memory translation table 1290 one (third mechanism) or more (fourth mechanism) buffer starting and ending addresses (or starting address and length). In this case, the buffers may have different sizes. For example, FIG. 12 depicts a set of varying sized buffers that have been mapped between system image 1216 and virtual adapter 1212. Particularly, SI 2 Buffer 1 1244 through SI 2 Buffer N 1248 of system image 1216 are mapped to virtual adapter buffers 1258-1274 of virtual adapter 1212. For system image 1216, the buffers in the list have different sizes. At initial configuration, and during reconfigurations, LPAR manager 708 loads the PCI family adapter's physical adapter memory translation table 1290 with the starting and ending address (or starting address and length) of one or more buffers. The physical adapter memory translation table 1290 then defines the set of addresses that virtual adapter 1212 can use in DMA write and read operations. After physical adapter memory translation table 1290 has been created, PCI family adapter 1201 (or virtual adapter 1212) validates that each DMA write or DMA read requested by system image 1216 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1212. If the DMA write or DMA read requested by system image 1216 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1212, then virtual adapter 1212 may perform the operation. Otherwise, virtual adapter 1212 is prohibited from performing the operation. The physical adapter memory translation table 1290 also defines the set of addresses that system image 1216 can use in Memory Mapped I/O (MMIO) write and read operations. After physical adapter memory translation table 1290 has been created, PCI family adapter 1201 (or virtual adapter 1212) validates whether an MMIO write or read requested by system image 1216 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1212. If the MMIO write or MMIO read requested by system image 1216 is contained in the physical adapter memory translation table 1290 and is associated with virtual adapter 1212, then virtual adapter 1212 may perform the operation. Otherwise, virtual adapter 1212 is prohibited from performing the operation. It should be understood that in the present example, other system images and associated virtual adapters, e.g., system image 1208 and associated virtual adapter 1204, are configured in a similar manner for PCI family adapter 1201 (or virtual adapter 1204) validation of DMA operations and MMIO operations requested by system image 1208.
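A minimal sketch of the adapter-side check described above, with hypothetical names and reusing the headers from the earlier sketch: each row of the physical adapter memory translation table carries a range of adapter-local memory together with the virtual adapter that owns it, and a DMA or MMIO request is permitted only when its range falls within a row owned by the requesting virtual adapter.

    /* Illustrative sketch (hypothetical names): one translation-table row
     * maps an adapter-local memory range to its owning virtual adapter.
     * The same check covers DMA requests and MMIO requests. */
    typedef struct xlat_entry {
        uint64_t start;      /* adapter-local starting address */
        uint64_t end;        /* exclusive ending address */
        unsigned owner_va;   /* identifier of the owning virtual adapter */
    } xlat_entry_t;

    static bool adapter_request_allowed(const xlat_entry_t *table, size_t rows,
                                        uint64_t addr, uint64_t len,
                                        unsigned va_id)
    {
        for (size_t i = 0; i < rows; i++) {
            if (addr >= table[i].start && addr <= table[i].end &&
                len <= table[i].end - addr)
                return table[i].owner_va == va_id;  /* must match the owner */
        }
        return false;  /* not in the table: operation prohibited */
    }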
With reference next to FIG. 13, a functional block diagram of a PCI family adapter and a physical address memory translation table, such as a buffer table or a page table, is depicted in accordance with a preferred embodiment of the present invention.
FIG. 13 also depicts four mechanisms for how an address referenced in an incoming PCI bus transaction 1304 can be used to look up the virtual adapter resources (including the local PCI family adapter memory address that has been mapped to the host address), such as virtual adapter resources 1394 or 1398, associated with the memory address.
The first mechanism is to compare the memory address of incoming PCI bus transaction 1304 with each row of high address cell 1316 and low address cell 1320 in buffer table 1390. High address cell 1316 and low address cell 1320 respectively define an upper and lower address of a range of addresses associated with a corresponding virtual or physical adapter identified in association cell 1324. If incoming PCI bus transaction 1304 has an address that is lower than the contents of high address cell 1316 and that is higher than the contents of low address cell 1320, then incoming PCI bus transaction 1304 is within the high address and low address cells that are associated with the corresponding virtual adapter identified in association cell 1324. In such a scenario, incoming PCI bus transaction 1304 is allowed to be performed on the matching virtual adapter. Alternatively, if incoming PCI bus transaction 1304 has an address that is not between the contents of high address cell 1316 and the contents of low address cell 1320, then completion or processing of incoming PCI bus transaction 1304 is prohibited. The second mechanism is to simply allow a single entry in buffer table 1390 per virtual adapter.
The third mechanism is to compare the memory address of incoming PCI bus transaction 1304 with each row of page starting address cell 1322 and with each row of page starting address cell 1322 plus the page size in page table 1392. If incoming PCI bus transaction 1304 has an address that is higher than or equal to the contents of page starting address cell 1322 and lower than page starting address cell 1322 plus the page size, then incoming PCI bus transaction 1304 is within a page that is associated with a virtual adapter. Accordingly, incoming PCI bus transaction 1304 is allowed to be performed on the matching virtual adapter. Alternatively, if incoming PCI bus transaction 1304 has an address that is not within the range defined by page starting address cell 1322 and page starting address cell 1322 plus the page size, then completion of incoming PCI bus transaction 1304 is prohibited. The fourth mechanism is to simply allow a single entry in page table 1392 per virtual adapter.
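For illustration only, the following C sketch outlines the first and third lookup mechanisms of FIG. 13 side by side; the row layouts, comparison conventions, and function names are hypothetical.

    /* Illustrative sketch only: the two table-scan lookup mechanisms of
     * FIG. 13, using hypothetical row layouts. */
    #include <stdint.h>
    #include <stddef.h>

    struct buffer_row {         /* one row of the buffer table */
        uint64_t high;          /* high address cell */
        uint64_t low;           /* low address cell */
        unsigned vadapter;      /* association cell */
    };

    struct page_row {           /* one row of the page table */
        uint64_t start;         /* page starting address cell */
        unsigned vadapter;
    };

    /* First mechanism: the address must lie strictly between the low and
     * high address cells of some row; the association cell names the
     * virtual adapter allowed to service the transaction (-1 if none). */
    static int buffer_table_lookup(const struct buffer_row *t, size_t n,
                                   uint64_t addr)
    {
        for (size_t i = 0; i < n; i++)
            if (addr > t[i].low && addr < t[i].high)
                return (int)t[i].vadapter;
        return -1;              /* transaction prohibited */
    }

    /* Third mechanism: the address must satisfy
     * start <= addr < start + page_size for some row. */
    static int page_table_lookup(const struct page_row *t, size_t n,
                                 uint64_t addr, uint64_t page_size)
    {
        for (size_t i = 0; i < n; i++)
            if (addr >= t[i].start && addr < t[i].start + page_size)
                return (int)t[i].vadapter;
        return -1;              /* transaction prohibited */
    }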
With reference next to FIG. 14, a functional block diagram of a PCI family adapter and a physical address memory translation table, such as a buffer table, a page table, or an indirect local address table, is depicted in accordance with a preferred embodiment of the present invention.
FIG. 14 also depicts several mechanisms for how a requester bus number, such as host bus number 1408, a requester device number, such as host device number 1412, and a requester function number, such as host function number 1416, referenced in incoming PCI bus transaction 1404 can be used to index into either buffer table 1498, page table 1494, or indirect local address table 1464. Buffer table 1498 is representative of buffer table 1390 shown in FIG. 13. Page table 1494 is representative of page table 1392 shown in FIG. 13. Local address table 1464 contains a local PCI family adapter memory address that references either a buffer table, such as buffer table 1438, or a page table, such as page table 1434, that only contains host memory addresses that are mapped to the same virtual adapter.
The requester bus number, such as host bus number 1408, requester device number, such as host device number 1412, and requester function number, such as host function number 1416, referenced in incoming PCI bus transaction 1404 provide an additional check beyond the memory address mappings that were set up by a host LPAR manager.
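For illustration only, the following C sketch shows one way the requester bus, device, and function numbers might be folded into an index that selects a per-virtual-adapter set of tables. The 8/5/3-bit split follows PCI convention, but the table layout and names are hypothetical.

    /* Illustrative sketch only: using the requester bus/device/function
     * numbers of an incoming transaction to select the tables belonging
     * to one virtual adapter.  Names are hypothetical. */
    #include <stddef.h>

    struct vadapter_tables {
        const void *buffer_table;   /* e.g., a table like buffer table 1438 */
        const void *page_table;     /* e.g., a table like page table 1434 */
    };

    /* Fold bus (8 bits), device (5 bits), function (3 bits) into the
     * conventional 16-bit BDF index used to select a table entry. */
    static unsigned bdf_index(unsigned bus, unsigned dev, unsigned fn)
    {
        return (bus << 8) | (dev << 3) | fn;
    }

    /* The local address table maps a BDF to the tables that contain only
     * host addresses belonging to the matching virtual adapter. */
    static const struct vadapter_tables *
    lookup_by_bdf(const struct vadapter_tables *local_addr_table,
                  size_t n, unsigned bus, unsigned dev, unsigned fn)
    {
        unsigned idx = bdf_index(bus, dev, fn);
        return (idx < n) ? &local_addr_table[idx] : NULL;
    }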
The present invention provides a method, system, and computer program product for efficient and flexible sharing of adapter resources among multiple operating system instances. The mechanism of the present invention allows for implementing flexible and dynamic resource allocation of virtualized I/O adapters, without adding complexity to the adapter implementation. The present invention separates the operation of adapter resource allocation from adapter resource management. Adapter resource allocation is performed by a hypervisor using a privileged address range, and adapter resource initialization is performed by an OS using an OS non-privileged address range. This flexible and dynamic allocation policy allows the hypervisor to perform adapter resource allocation and track allocated adapter resources.
Each adapter has a limited set of adapter resources. The variety of resources available depends on the adapter. For example, a Remote Direct Memory Access (RDMA) enabled Network Interface Controller (RNIC) I/O adapter has a wide set of different resources, such as Queue Pairs (QPs), Completion Queues (CQs), Protection Blocks (PBs), and Translation Tables (TTs). However, the I/O adapter still supports only a limited number of QPs, CQs, and PBs, and a limited TT size. Since each partition may have its own needs (which are not necessarily the same for different partitions), it is advantageous to share resources according to partition demands rather than sharing all adapter resources in an equal manner, where each partition would receive the same number of QPs, CQs, and PBs, and the same size of TT.
The mechanism of the present invention also allows for sharing this variety of resources between different partitions according to the partition demands. Each I/O adapter resource is composed of multiple resource fields. The present invention provides for differentiating between address ranges in the fields, such that each adapter resource field may be accessed via a different address range. In addition, the access permissions depend on the address range through which the adapter resource field has been accessed. Example address ranges on the I/O adapter include a privileged address range, an OS non-privileged address range, and an application non-privileged address range. These address ranges are set to correspond to access levels in the partitioned server in the illustrative examples. For example, the privileged address range corresponds to the hypervisor access level, the OS non-privileged range corresponds to the OS access level, and the application non-privileged range corresponds to the application access level.
In particular, the hypervisor uses the privileged address range to perform physical resource allocation of adapter resources, and each adapter resource is associated with a particular partition/OS instance. The OS non-privileged address range may be used by an operating system instance to access the adapter resources and perform initialization/management of those resources. These resources are owned by the OS instance and were previously allocated by the hypervisor and associated with that OS instance. The application non-privileged address range may be used by an application running in the environment of the operating system instance to access the adapter resources owned by that OS instance.
Each PCI adapter resource associated with a particular partition/OS instance is located in the same I/O page. An I/O page refers to a unit of the I/O addressing space, typically 4 KB, which is mapped by an OS or the hypervisor to the hypervisor, OS, or application address space, and then may be accessed by the hypervisor, OS, or application, respectively. By associating the adapter resources in the same I/O page with the same partition/OS instance, Virtual Memory Manager (VMM) services may be used to protect against unauthorized access by one OS instance (and the applications running in that OS environment) to the resources allocated for another OS instance. Access may be controlled by mapping a particular I/O page to be owned by a particular partition. Such mapping allows for restricting access to the I/O address space at page granularity, thus providing access protection.
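For illustration only, the following C sketch shows a page-granularity ownership check of the kind the hypervisor could apply before mapping an I/O page into an OS address space; the record layout, the 4 KB page size, and all names are hypothetical.

    /* Illustrative sketch only: page-granular access control, assuming
     * 4 KB I/O pages and a hypothetical ownership record per page. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    #define IO_PAGE_SIZE 4096u

    struct io_page_owner {
        uint64_t page_base;   /* I/O page base address (4 KB aligned) */
        unsigned lpar_id;     /* partition that owns every resource on it */
    };

    /* Map an I/O page into an OS address space only if the requesting
     * partition owns that page; because all resources on one page belong
     * to one partition, this single check protects every resource
     * context located on the page. */
    static bool may_map_io_page(const struct io_page_owner *tbl, size_t n,
                                uint64_t addr, unsigned lpar_id)
    {
        uint64_t base = addr & ~(uint64_t)(IO_PAGE_SIZE - 1);
        for (size_t i = 0; i < n; i++)
            if (tbl[i].page_base == base)
                return tbl[i].lpar_id == lpar_id;
        return false;         /* unknown page: deny the mapping */
    }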
Once the adapter resource is allocated by the hypervisor for the particular OS instance, this adapter resource can remain in the possession of the OS instance. The OS instance owns the allocated adapter resource and may reuse it multiple times. The hypervisor also may reassign an adapter resource by revoking OS ownership of the previously allocated adapter resource and granting ownership of that resource to another OS instance. These adapter resources are allocated and revoked on an I/O page basis.
In addition, an adapter may restrict/differentiate access to the adapter resource context fields for the software components with different privilege levels. Each resource context field has an associated access level, and I/O address ranges are used to identify the access level of the software that accesses the adapter resource context. Each access level (privileged, OS non-privileged and application) has an associated address range in adapter I/O space that can be used to access adapter resources. For example, fields having a privileged access level may be accessed by the I/O transaction initiated through the privileged address range only. Fields having an OS non-privileged access level may be accessed by the I/O transactions initiated through the privileged and OS non-privileged address ranges. In this manner, multiple OS instances may efficiently and flexibly share adapter resources, while the adapter enforces access level control to the adapter resources.
Turning now to FIG. 15, a diagram of an example resource allocation in accordance with a preferred embodiment of the present invention is shown. Hypervisor 1502 is responsible for the I/O adapter resource allocation/deallocation, as well as the association of the allocated resource with a particular partition (OS instance). Once hypervisor 1502 allocates a resource, this resource is managed by the OS instance directly, without hypervisor involvement.
In particular, the left side of FIG. 15 shows the steps performed by hypervisor 1502 and adapter 1504 during the resource allocation sequence. The right side of the figure illustrates the result of the resource allocation. Hypervisor 1502 is aware of the capabilities of adapter 1504 (e.g., the types of resources and the number of resources). Hypervisor 1502 determines how many resources to allocate for the given partition, as well as which instances of the given resource should be allocated for that partition. Once the determination is made, hypervisor 1502 performs the adapter resource allocation.
For example, hypervisor 1502 may keep bitmap 1506 of all adapter queue pairs (QPs) with an indication of which QPs are allocated to which particular partition. When hypervisor 1502 wants to allocate one or more new QPs to the given partition, the hypervisor first searches for available (not allocated) QPs in bitmap 1506. Hypervisor 1502 may use LPAR ID fields 1508 and alloc/free fields 1510 to locate available QPs in bitmap 1506.
Hypervisor 1502 then allocates those QPs for the partition by marking the particular LPAR ID field and corresponding alloc/free field in bitmap 1506, such as LPAR ID field 1512 and alloc/free field 1514, as allocated. Hypervisor 1502 then notifies adapter 1504 (or updates the structure of adapter 1504) to reflect that those QPs were allocated for the given partition. Adapter 1504 updates its internal structure 1512 accordingly to reflect the allocation, as shown by allocated resources 1514. The process of deallocation or reassignment of adapter resources is similar to the allocation process described above.
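For illustration only, the following C sketch shows allocation bookkeeping of the kind described above, modeled loosely on the LPAR ID and alloc/free fields of bitmap 1506; the record layout and names are hypothetical.

    /* Illustrative sketch only: tracking QP allocation with an array of
     * {LPAR ID, alloc/free} records.  A real hypervisor may use any
     * equivalent bookkeeping, such as a packed bit vector. */
    #include <stdint.h>
    #include <stddef.h>

    #define LPAR_NONE 0xFFFFu

    struct qp_slot {
        uint16_t lpar_id;     /* LPAR ID field; LPAR_NONE when free */
        uint8_t  allocated;   /* alloc/free field: 1 = allocated */
    };

    /* Find a free QP, mark it allocated to 'lpar_id', and return its
     * index so the hypervisor can notify the adapter; -1 if exhausted. */
    static int alloc_qp(struct qp_slot *map, size_t n_qps, uint16_t lpar_id)
    {
        for (size_t i = 0; i < n_qps; i++) {
            if (!map[i].allocated) {
                map[i].allocated = 1;
                map[i].lpar_id = lpar_id;
                return (int)i;  /* caller updates the adapter structure */
            }
        }
        return -1;
    }

    /* Deallocation simply reverses the marking. */
    static void free_qp(struct qp_slot *map, int idx)
    {
        map[idx].allocated = 0;
        map[idx].lpar_id = LPAR_NONE;
    }

Note that nothing in this bookkeeping requires the QPs given to one partition to be contiguous, which is what makes simple reassignment between partitions possible.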
Hypervisor 1502 is shown in FIG. 15 as storing a bitmap for each type of adapter resource, and it uses these bitmaps to manage adapter resource allocation. It should be noted that the use of bitmap 1506 to keep track of adapter resource allocation is only an example, and hypervisor 1502 may employ any other means for tracking allocated and available resources. Additionally, the allocation scheme described above does not assume contiguity of the resources allocated to one partition. In this manner, the allocation described above allows for the simple reassignment of resources from one partition to another.
FIG. 16 is a diagram illustrating the resource context of an internal adapter structure in accordance with a preferred embodiment of the present invention. Each adapter has an associated internal adapter structure, such as internal structure 1512 shown in FIG. 15, which includes a resource context. The present invention requires that the resource context be accessed by the hypervisor, OS, or applications only via the I/O adapter address space (the portion of the I/O address space belonging to the adapter), regardless of the location of the adapter resource context in the adapter and/or system memory. That is, even if an adapter resource context is located in system memory, and therefore theoretically could be accessed directly by software without going through the adapter, such direct access is not allowed in the present invention.
As FIG. 16 illustrates, adapter resource context 1600 is composed of different fields, such as fields 1602-1610. In the illustrative example, each field is associated with attributes, such as access permission attribute 1612 and protection permission attribute 1614, although each field may have other attributes as well. The present invention employs these attributes in the resource context structure to identify the protection level of software that may access each field. Access permission attribute 1612 identifies the allowed type of access to the field, such as write-only access to Doorbell field 1602. Access permissions may be, for example, read-only, write-only, read-write, and the like. Protection permission attribute 1614 identifies the protection level of the software that may access the field.
For example, some fields may be accessed only by hypervisor 1616 (such as the identification of the partition which owns the resource), some fields may be accessed by OS level software 1618 (such as the TCP source port number), and some fields may be accessed directly by applications 1620. For example, Doorbell field 1602 may be accessed by an application, such as application 1620, through a write-only type of access. In addition, it should be noted that if a field is allowed to be accessed by OS level software, then this field may also be accessed by the hypervisor. Likewise, if a field is allowed to be accessed by an application, then this field may also be accessed by the OS and the hypervisor.
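For illustration only, the following C sketch captures the attribute check implied above, including the rule that hypervisor access subsumes OS access and OS access subsumes application access; the enumerations and names are hypothetical.

    /* Illustrative sketch only: per-field access and protection
     * attributes, with the hierarchy application < OS < hypervisor. */
    #include <stdbool.h>

    enum access_perm { PERM_RO, PERM_WO, PERM_RW };
    enum prot_level  { LVL_APP = 0, LVL_OS = 1, LVL_HYP = 2 };

    struct field_attr {
        enum access_perm access;    /* e.g., write-only for a Doorbell */
        enum prot_level  min_level; /* least-privileged allowed caller */
    };

    /* A caller may touch the field if its protection level is at least
     * the field's minimum level (so the hypervisor can access OS-level
     * fields, and the OS can access application-level fields) and the
     * requested direction matches the field's access permission. */
    static bool field_access_ok(struct field_attr f,
                                enum prot_level caller, bool is_write)
    {
        if (caller < f.min_level)
            return false;
        if (is_write)
            return f.access != PERM_RO;
        return f.access != PERM_WO;
    }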
Turning now to FIG. 17, a diagram illustrating a mapping of adapter internal structures to the bus adapter space in accordance with a preferred embodiment of the present invention is depicted. This figure shows how validation of the access and protection permissions is enforced by the I/O adapter. I/O adapter 1702 uses the information from resource context fields 1602-1610 in FIG. 16 to enforce the access and protection permissions. I/O adapter 1702 contains dedicated logic that detects and processes accesses to adapter address space 1704. It should be noted that the view of internal adapter structure 1706 via adapter address space 1704 does not necessarily reflect the real structure and/or location of the adapter resource context in adapter 1702 or system memory.
In particular, FIG. 17 shows defined mappings (address ranges) of adapter resource contexts, such as resource context 1600 in FIG. 16, to bus address space (PCI address space) 1708. The mechanism of the present invention employs address mapping to identify the protection level (software of a different protection level performs access using a different address range). For example, privileged address range 1710 (or parts of it) may be mapped to hypervisor address space 1712. Non-privileged OS address range 1714 (or parts of it) may be mapped to the address space of each OS instance 1716. Non-privileged application address range 1718 (or parts of it) may be mapped to application address space 1720. These three mappings 1710, 1714, and 1718 are defined in a manner that permits access to each resource context from each of the mappings. Mappings 1710, 1714, and 1718 may be implemented using the PCI base address registers of three PCI functions of I/O adapter 1702, or using any other method. For example, one PCI function may be used to define the privileged address range, another to define the OS non-privileged address range, and the last to define the application non-privileged address range.
Mappings 1710, 1714, and 1718 may be accessed by software of a certain protection level. For instance, privileged address range 1710 is used by hypervisor 1712 to update the respective fields of the resource context for an allocated, deallocated, or reassigned resource. Privileged address range 1710 is mapped by the hypervisor to the hypervisor virtual address space. For example, each adapter resource context contains a partition ID field, such as partition ID field 1610 in FIG. 16, or any other field that can be used to distinguish resources belonging to one partition from resources belonging to another partition.
Partition ID field 1610 may be updated only by hypervisor 1712, and is initialized at resource allocation time. Partition ID field 1610 identifies the partition that owns the resource, and is used by adapter 1702 to prevent unauthorized access to the partition's resources by another partition. I/O adapter 1702 uses the address range validating policy described above to prevent changes to the partition ID field by OS-level code.
OS address range 1714 is used by OS instance 1716 to access the resource contexts of the resources allocated for this OS instance. OS address range 1714 is used to perform resource initialization and management. OS address range 1714 (or, more precisely, its constituent I/O pages) is mapped to the OS virtual address space during the resource allocation process.
Application address range 1718 is used by application 1720, running on a particular OS instance such as OS instance 1716, to communicate directly with I/O adapter 1702. In this manner, application 1720 may avoid OS involvement (a context switch) while sending and receiving data (so-called Doorbell ring operations). I/O pages from application address range 1718 are mapped to the application address space.
Thus, I/O adapter 1702 uses these address ranges/mappings to identify the protection level of the software that accesses adapter internal structures. This information, together with the access and protection attributes associated with each resource context field, allows adapter 1702 to perform access and protection validation.
FIGS. 18A and 18B are diagrams illustrating resource context mappings from memory to adapter address space in accordance with a preferred embodiment of the present invention. FIGS. 18A and 18B show direct mappings of the resource context to the adapter address space (to each address range), although any mapping may be used to implement the present invention. As different OS instances (partitions) use the same address range to access resource contexts, two conditions should be met to guarantee that one partition cannot access a resource context belonging to another partition. First, the I/O address space should be mapped to the OS/application address space in units of pages (e.g., 4 KB). Consequently, the hypervisor may allow I/O mapping of only those I/O pages belonging to the given partition. Second, adapter resources whose resource contexts are located on the same I/O page should belong to the same partition.
In particular, FIG. 18A illustrates one method in which a resource context may be directly mapped to adapter address space from memory. FIG. 18A illustrates that each resource context may be mapped from memory to adapter address space individually, even when the resource contexts are located on the same memory page. For example, resource context 1802 in memory page 1804 may be mapped to adapter address space. As shown, each resource context in memory page 1804 is mapped to adapter address space using a separate I/O page belonging to a given partition, such as I/O page 1806.
FIG. 18B illustrates another method in which a resource context may be directly mapped to adapter address space from memory. As FIG. 18B shows, a resource context may be mapped from memory to adapter address space by allocating all of the resources whose resource contexts fall on the same I/O page to the same partition. For example, resource contexts 1812-1818 in memory page 1820 may be mapped to adapter address space using the same I/O page, such as I/O page 1822.
The resource context fields in FIGS. 18A and 18B may be accessed by software that has knowledge of the structure of the resource context.
An alternative embodiment of the present invention for mapping the resource context to the adapter address range is to employ a command-based approach, in contrast with the direct mapping approach utilized in FIGS. 18A and 18B. This command-based approach is an alternative implementation of the range-based approach for cases where it is desirable to hide the internal structure of the adapter resources from the accessing software.
In this illustrative approach, the command structure is mapped to the adapter address space using the adapter configuration space, I/O address space, or memory address space. Software, such as a hypervisor, OS, or application, writes a command to the command structure. The software may also read the response from the response structure. Accesses to the command structure may be detected by dedicated adapter logic, which in turn may respond to the commands and update the resource context fields accordingly.
For example, it is particularly useful to employ the command-based approach in a hypervisor implementation. Since the hypervisor is responsible for the allocation of adapter resources only, it is not necessary that the hypervisor be aware of the internal structure of the resource context. Rather, the hypervisor just needs to know what types and how many resources are supported by the adapter. For instance, while performing resource allocation, instead of performing a direct update of the resource context (e.g., with LPAR_ID), the hypervisor may request that the adapter allocate a particular instance of the given resource type to the particular partition (e.g., QP #17 is allocated to partition #5). Consequently, the adapter does not need to look for the available QP, since the particular instance of QP is specified by the hypervisor, and the hypervisor is not required to be aware of the internal structure of the QP context.
In particular, this command-based approach is important in situations where the accessing software should not be aware of the internal structure of the adapter resource. For example, this illustrative approach may be used to implement the hypervisor-adapter interface. The command-based interface allows for abstracting the hypervisor code from the adapter internal structure and for using the same hypervisor code to perform resource allocation for different I/O adapters. If the hypervisor is responsible only for resource allocation, and resource initialization and management are performed by the OS, a simple querying and allocation protocol may satisfy the hypervisor's needs. For example, the protocol may include querying the resource types supported by the adapter and the amount of each supported resource, and a command to allocate, deallocate, or reassign a given instance of the specified resource to a particular partition.
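For illustration only, the following C sketch shows a minimal command structure for such a querying and allocation protocol; the opcodes, field widths, and the convention of writing the opcode last to trigger the command are all hypothetical.

    /* Illustrative sketch only: a command structure mapped into the
     * adapter address space for hypervisor-driven resource allocation. */
    #include <stdint.h>

    enum cmd_opcode {
        CMD_QUERY_RESOURCE_TYPES,   /* which resource types exist */
        CMD_QUERY_RESOURCE_COUNT,   /* how many of a given type */
        CMD_ALLOC_RESOURCE,         /* assign an instance to a partition */
        CMD_DEALLOC_RESOURCE
    };

    struct adapter_cmd {            /* mapped into adapter address space */
        uint32_t opcode;
        uint32_t resource_type;     /* e.g., QP, CQ, PB */
        uint32_t resource_index;    /* e.g., QP #17 */
        uint32_t lpar_id;           /* e.g., partition #5 */
    };

    /* "QP #17 is allocated to partition #5": the hypervisor names the
     * instance explicitly, so it never needs to know the QP context
     * layout, and the adapter never needs to search for a free QP. */
    static void alloc_qp_for_partition(volatile struct adapter_cmd *cmd,
                                       uint32_t qp, uint32_t lpar)
    {
        cmd->resource_type  = 0;    /* hypothetical code for "QP" */
        cmd->resource_index = qp;
        cmd->lpar_id        = lpar;
        cmd->opcode         = CMD_ALLOC_RESOURCE; /* written last to fire */
    }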
Turning now to FIG. 19, a diagram illustrating I/O address decoding is depicted in accordance with a preferred embodiment of the present invention. As the resource context mapping to adapter address space described in FIGS. 18A and 18B is used to access the resource context fields, the field attributes and the I/O address are used together to validate the access to the resource context.
In particular, software may be used to perform a memory-mapped input/output (MMIO) write to adapter address space 1902. Decoding logic within the adapter uses various bits within I/O address 1904 to locate the target of the write within adapter address space 1902. For example, the decoding logic may detect an access to adapter address space 1902 by matching adapter base address (BA) bits 1906 from I/O address 1904 against adapter address space 1902. The base address is the beginning of the I/O address space that belongs to the adapter. The base address is typically aligned to the size of the adapter address space to allow an easy detection/decoding process.
The adapter decoding logic also finds the referred address range (e.g., privileged 1908, OS 1910, or application 1912) of adapter address space 1902 using AR offs bits 1914. AR offs is an offset of the particular address range inside the adapter address space. Cntx offs bits 1916 may be used to locate the resource context, such as resource context 1918. Cntx offs is the resource context offset inside the particular adapter address range. The adapter decoding logic also uses field offs bits 1920 as an offset to the field inside the resource context, such as field 1922. Field offs is the offset of the particular field in the adapter resource context. In this manner, the adapter may use the address range type and the field attributes to validate access to the resource context.
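For illustration only, the following C sketch decodes an I/O address into the BA, AR offs, Cntx offs, and field offs components described above, with the field offset in the least significant bits and the base address in the most significant bits; the bit widths are arbitrary choices for the sketch.

    /* Illustrative sketch only: decoding an incoming I/O address into
     * address range, resource context, and field offsets. */
    #include <stdint.h>

    #define FIELD_BITS 6    /* field offs: field offset in the context */
    #define CNTX_BITS 10    /* cntx offs: context offset in the range */
    #define AR_BITS    2    /* AR offs: which address range */

    struct decoded_addr {
        uint64_t base;      /* adapter base address (BA) */
        unsigned ar;        /* 0 = privileged, 1 = OS, 2 = application */
        unsigned cntx;      /* index of the resource context */
        unsigned field;     /* offset of the field within the context */
    };

    static struct decoded_addr decode_io_addr(uint64_t addr)
    {
        struct decoded_addr d;
        d.field = (unsigned)(addr & ((1u << FIELD_BITS) - 1));
        addr >>= FIELD_BITS;
        d.cntx  = (unsigned)(addr & ((1u << CNTX_BITS) - 1));
        addr >>= CNTX_BITS;
        d.ar    = (unsigned)(addr & ((1u << AR_BITS) - 1));
        addr >>= AR_BITS;
        d.base  = addr;     /* must match the adapter's BA to be claimed */
        return d;
    }

The decoded address range (d.ar) supplies the caller's protection level, which can then be checked against the attributes of the targeted field as in the earlier attribute-check sketch.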
FIG. 20 is a flowchart of a process for implementing dynamic resource allocation of a virtualized I/O adapter in accordance with a preferred embodiment of the present invention. The flowchart in FIG. 20 is employed to allocate the adapter resources. The process begins with the hypervisor identifying which adapter resources are allocated to a particular partition (step 2002). When the hypervisor wants to allocate a new resource to a given partition, the hypervisor searches for available (not allocated) resources (step 2004). For example, the hypervisor may search a bitmap for non-allocated resources. The hypervisor then allocates resources for the partition, for example by marking them in the bitmap as allocated (step 2006). The hypervisor notifies the adapter (or updates the structure of the adapter) to reflect that those resources were allocated to the given partition (step 2008). Consequently, the adapter updates its internal structure to reflect the allocation (step 2010). The process of deallocation or reassignment of adapter resources is similar to the allocation process described above.
Thus, the present invention provides a method, apparatus, and computer instructions for allowing multiple OS instances to directly share adapter resources. In particular, the present invention provides a mechanism for configuring multiple address spaces per adapter, where each address space is associated with a particular access level of the partitioned server, and the PCI adapter, in conjunction with the virtual memory manager (VMM), provides access isolation between the various OS instances sharing the PCI adapter.
The advantages of the present invention should be apparent in view of the detailed description provided above. Existing methods of using PCI adapters either do not allow for sharing of an adapter's resources or, alternatively, share the adapter's resources by going through an intermediary, such as a hosting partition, hypervisor, or special I/O processor. However, not sharing adapter resources requires more PCI I/O slots and adapters per physical server, and high performance PCI adapters may not be fully utilized by a single OS instance. Using an intermediary to facilitate PCI adapter sharing adds additional latency to every I/O operation. In contrast, the present invention not only reduces the time and resources needed to use PCI adapters, by sharing adapter resources among OS instances, but also allows the adapter to enforce access control to the adapter resources.
It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media, such as a floppy disk, a hard disk drive, a RAM, CD-ROMs, DVD-ROMs, and transmission-type media, such as digital and analog communications links, wired or wireless communications links using transmission forms, such as, for example, radio frequency and light wave transmissions. The computer readable media may take the form of coded formats that are decoded for actual use in a particular data processing system.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.