CLAIM OF PRIORITY
The present application claims priority from Japanese application JP 2006-283571, filed on Oct. 18, 2006, the content of which is hereby incorporated by reference into this application.
BACKGROUND
The technology disclosed by this invention relates to power management for a computer system, and more particularly, to power management of each logical partition in a computer system that includes a storage system.
Logical partitioning has been proposed as a method of realizing a high-performance information processing system while suppressing increases in the footprint, power consumption, and management cost of a computer system. Logical partitioning realizes multiple virtual machines in a computer system by dividing the resources of the computer system and allocating them to each virtual machine. A virtual machine is also called a logical partition. By controlling the allocation of resources to each of the virtual machines, performance can be guaranteed for each of the virtual machines. An operating system may be installed in each virtual machine. The virtual machines can independently boot, shut down, handle errors, and the like. Thus, logical partitioning enables flexible operation of the computer system.
In recent years, industrial products have been required to reduce their power consumption in order to help prevent global warming. Because of this requirement, reduction in power consumption has become an important performance measure for computer systems.
U.S. 2004/0111596 discloses an exemplary technology for reducing power consumption in an environment where one computer is divided into a plurality of virtual machines. According to this technology, server resources not allocated to any logical partition are powered off. Resource allocation is controlled so that the amount of resources not allocated to any logical partition is maximized. Additionally, a physical disk not allocated to any logical partition is powered off.
SUMMARY
As the amount of data in computer systems increases, a technology has been proposed that interconnects computer systems and storage systems via a dedicated network. This dedicated network is called a storage area network (SAN). By connecting a storage system and the computer systems that use it to a SAN, a plurality of computer systems can easily share the storage system.
When logical partitioning is applied to a computer system including a SAN, a plurality of virtual machines can share one storage system. In this case, even when one virtual machine shuts down, other virtual machines may still be using the storage system. Accordingly, to power off resources of the storage system, the correlation between each virtual machine and the resources of the storage system must be managed.
However, U.S. 2004/0111596 discloses no specific mechanism for correlating a storage system connected to a SAN with virtual machines. Moreover, U.S. 2004/0111596 discloses no configuration in which one storage system connected to a SAN is shared by a plurality of servers. Thus, even when a virtual machine shuts down, it is impossible to power off the resources of the storage system.
According to a representative embodiment of this invention, there is provided a computer system including a computer and a storage system for storing data, in which: the storage system includes a first control module for logically dividing first resources of the storage system and operating the divided first resources as independent virtual storage systems; the computer includes a second control module for logically dividing second resources of the computer and operating the divided second resources as independent virtual machines; the computer system holds first information indicating a correlation among the virtual machines, the virtual storage systems allocated to the virtual machines, and the first resources allocated to the virtual storage systems; and the first control module specifies, based on the first information, the first resource allocated to the virtual storage system which is to be powered off, and powers off the specified first resource.
According to this embodiment of this invention, it is possible to reduce power consumption by managing the power supply of the entire computer system, including the storage system, on a logical partition basis.
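To make the mechanism concrete, the bookkeeping described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the layout of the first information and all names are hypothetical.

# A minimal illustration of the mechanism, not the patent's implementation:
# the "first information" is modeled as a mapping from each virtual storage
# system to the virtual machines it serves and the physical resources
# allocated to it; all names are hypothetical.
first_information = {
    # virtual storage system: (virtual machines using it, physical resources allocated)
    "vstorage0": ({"vm0"}, {"channel adaptor 0", "disk drive 8"}),
    "vstorage1": ({"vm1"}, {"channel adaptor 1", "disk drive 9"}),
}

def power_off_for(vm, powered_on_vms):
    """Physical storage resources that can be powered off when `vm` shuts down."""
    released = set()
    for users, resources in first_information.values():
        if vm in users and not (users & powered_on_vms):
            released |= resources  # no remaining user: safe to power off
    return released

# When vm0 shuts down while vm1 keeps running, only vstorage0's resources go off.
assert power_off_for("vm0", powered_on_vms={"vm1"}) == {"channel adaptor 0", "disk drive 8"}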
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a block diagram showing a hardware configuration of a computer system according to a first embodiment of this invention.
FIG. 1B is a block diagram showing a hardware configuration of a server system according to the first embodiment of this invention.
FIG. 1C is a block diagram showing a hardware configuration of a channel board in a storage system according to the first embodiment of this invention.
FIG. 1D is a block diagram showing a hardware configuration of a disk board in the storage system according to the first embodiment of this invention.
FIG. 1E is a block diagram showing a hardware configuration of a disk cache board in the storage system according to the first embodiment of this invention.
FIG. 2 is an explanatory diagram of a power supply system of the server system according to the first embodiment of this invention.
FIG. 3 is an explanatory diagram of a power supply system of the storage system according to the first embodiment of this invention.
FIG. 4 is an explanatory diagram of a power supply system of the channel board of the storage system according to the first embodiment of this invention.
FIG. 5 is an explanatory diagram of a power supply system of the disk board in the storage system according to the first embodiment of this invention.
FIG. 6A is a functional block diagram of the computer system according to the first embodiment of this invention.
FIG. 6B is a functional block diagram of the server system according to the first embodiment of this invention.
FIG. 6C is a functional block diagram of the storage system according to the first embodiment of this invention.
FIG. 7 is an explanatory diagram of a server resources control table according to the first embodiment of this invention.
FIG. 8 is an explanatory diagram of a virtual disk control table according to the first embodiment of this invention.
FIG. 9 is an explanatory diagram of a disk address translation table according to the first embodiment of this invention.
FIG. 10 is an explanatory diagram of a storage resources control table according to the first embodiment of this invention.
FIG. 11 is an explanatory diagram of a server power control table according to the first embodiment of this invention.
FIG. 12 is an explanatory diagram of a storage power control table according to the first embodiment of this invention.
FIG. 13 is a flowchart of resource allocation setting processing executed according to the first embodiment of this invention.
FIG. 14 is a flowchart of boot processing of a virtual machine executed according to the first embodiment of this invention.
FIG. 15 is a flowchart of processing executed at the time of cable connection according to the first embodiment of this invention.
FIG. 16 is a flowchart of shutdown processing of the virtual machine according to the first embodiment of this invention.
FIG. 17A is a functional block diagram of a computer system according to a second embodiment of this invention.
FIG. 17B is a functional block diagram of a server system according to the second embodiment of this invention.
FIG. 18 is an explanatory diagram of a power supply system of a channel board of a storage system according to the second embodiment of this invention.
FIG. 19 is a flowchart of resource allocation setting processing executed according to the second embodiment of this invention.
FIG. 20 is a flowchart of boot processing of a virtual machine executed according to the second embodiment of this invention.
FIG. 21 is a flowchart of shutdown processing of the virtual machine executed according to the second embodiment of this invention.
FIG. 22A is a functional block diagram of a computer system according to a third embodiment of this invention.
FIG. 22B is a functional block diagram of a storage system according to the third embodiment of this invention.
FIG. 23 is a flowchart of resource allocation setting processing executed according to the third embodiment of this invention.
FIG. 24 is a flowchart of shutdown processing of a virtual machine executed according to the third embodiment of this invention.
FIG. 25 is a block diagram showing a hardware configuration of a computer system according to a fourth embodiment of this invention.
FIG. 26 is a block diagram showing a hardware configuration of an I/O channel switch according to the fourth embodiment of this invention.
FIG. 27 is an explanatory diagram of a power supply system of the I/O channel switch according to the fourth embodiment of this invention.
FIG. 28 is an explanatory diagram of a routing table held by the I/O channel switch according to the fourth embodiment of this invention.
FIG. 29 is an explanatory diagram of a storage resources control table according to the fourth embodiment of this invention.
FIG. 30 is a flowchart of boot processing of a virtual machine according to the fourth embodiment of this invention.
FIG. 31 is a flowchart of processing executed at the time of cable connection according to the fourth embodiment of this invention.
FIG. 32 is a flowchart of processing executed to create a routing table according to the fourth embodiment of this invention.
FIG. 33 is a flowchart of shutdown processing of the virtual machine executed according to the fourth embodiment of this invention.
FIG. 34 is a functional block diagram of a computer system according to a fifth embodiment of this invention.
FIG. 35 is an explanatory diagram of a server resources control table according to the fifth embodiment of this invention.
FIG. 36 is a flowchart of shutdown processing of a virtual file server system executed according to the fifth embodiment of this invention.
FIG. 37 is an explanatory diagram when two virtual machines operate in a computer system according to a sixth embodiment of this invention.
FIG. 38 is an explanatory diagram when one of the virtual machines shuts down in the computer system according to the sixth embodiment of this invention.
FIG. 39 is an explanatory diagram when resource redundancy is eliminated in the computer system according to the sixth embodiment of this invention.
FIG. 40 is an explanatory diagram of processing executed to cut off power of redundant resources according to the sixth embodiment of this invention.
FIG. 41 is an explanatory diagram of a server system power saving mode table according to the sixth embodiment of this invention.
FIG. 42 is an explanatory diagram of a storage system power saving mode table according to the sixth embodiment of this invention.
FIG. 43 is an explanatory diagram of an input screen used for allocating resources according to the sixth embodiment of this invention.
FIG. 44 is an explanatory diagram of processing executed to cut off power of a disk cache according to a seventh embodiment of this invention.
FIG. 45 is a functional block diagram of a computer system according to an eighth embodiment of this invention.
FIG. 46 is an explanatory diagram of a virtual disk control table according to the eighth embodiment of this invention.
FIG. 47 is a flowchart of processing executed by a storage system when a virtual machine shuts down according to the eighth embodiment of this invention.
FIG. 48 is a flowchart of processing executed by a secondary virtual storage system which receives a shutdown instruction from a primary virtual storage system according to the eighth embodiment of this invention.
FIG. 49 is a functional block diagram of a computer system according to a ninth embodiment of this invention.
FIG. 50 is an explanatory diagram of a disk address translation table according to the ninth embodiment of this invention.
FIG. 51 is an explanatory diagram of a storage resources control table according to the ninth embodiment of this invention.
FIG. 52 is a flowchart of shutdown processing of a virtual machine executed according to the ninth embodiment of this invention.
FIG. 53 is a functional block diagram of a computer system according to a tenth embodiment of this invention.
FIG. 54 is a flowchart of processing to power off physical resources based on a utilization rate according to the tenth embodiment of this invention.
FIG. 55 is a functional block diagram of a computer system according to an eleventh embodiment of this invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Preferred embodiments of this invention will be described below in detail with reference to the drawings.
FIG. 1A is a block diagram showing a hardware configuration of a computer system according to a first embodiment of this invention.
The computer system of this embodiment includes a server system (0) 100A, a server system (1) 100B, a storage system 120, and a control terminal 150.
In the server systems (0) 100A and (1) 100B, application programs (not shown) are operated. A parenthesized numeral such as (0) added after the name of a physical resource (e.g., "server system") is an identifier of that physical resource. In the description below, when a description is common to the server systems (0) 100A and (1) 100B, those server systems will be generically termed a server system 100. Similarly, when physical resources other than the server system 100 are generically termed, they will be described by omitting the identifiers of the physical resources and trailing letters such as "A".
The storage system 120 stores data necessary for operating the server systems 100. The storage system 120 is connected to the server system (0) 100A via I/O channels 160A and 160B, and to the server system (1) 100B via I/O channels 160C and 160D.
The storage system 120 includes physical resources such as channel boards (0) 121A and (1) 121B, an internal network 131, disk boards (0) 132A and (1) 132B, disk cache boards (0) 142A and (1) 142B, system power control units 146A and 146B, batteries 147A and 147B, and one or more physical disk drives 148.
For example, the I/O channels 160A, 160B, 160C, and 160D are fibre channels (FC). The I/O channels 160A, 160B, 160C, and 160D constitute a storage area network (SAN) for connecting one or more storage systems 120 with one or more server systems 100. The I/O channels 160A, 160B, 160C, and 160D are implemented by cables interconnecting ports (not shown) of an I/O adaptor 106 and a channel adaptor 129. The I/O adaptor 106 and the channel adaptor 129 will be described below referring to FIGS. 1B and 1C.
The internal network 131 interconnects the channel boards 121, the disk boards 132, and the disk cache boards 142. For example, the internal network 131 may be constituted of a bus or a crossbar switch.
The control terminal 150 is a computer for managing the operation of the entire computer system by executing a virtual machine control program 151. As described below, the virtual machine control program 151 contains computer system management information. The control terminal 150 is connected to the server systems 100 and the storage system 120 via a network 170.
For example, the network 170 is a local area network (LAN), but other types of networks may be employed.
The physical disk drive 148 is a storage medium for storing data. In general, this storage medium is a magnetic disk, but another type of medium such as an optical disk or a flash memory may be employed. A plurality of physical disk drives 148 may constitute a redundant array of independent disks (RAID) so that redundancy can be added to the stored data. As a result, even when failures occur in some of the physical disk drives 148, the stored data is not lost.
The system power control units 146A and 146B control power supply to the physical resources in the storage system 120.
The batteries 147A and 147B are backup power sources for the storage system 120. For example, when a power failure occurs, the batteries 147A and 147B supply power to the storage system 120.
A configuration of each device of the computer system shown in FIG. 1A will be described below.
FIG. 1B is a block diagram showing a hardware configuration of the server system 100 according to the first embodiment of this invention.
The server system (0) 100A is a computer which includes CPU's (0) 101A and (1) 101B, a non-volatile memory (0) 102A, a main memory (0) 104A, a LAN adaptor (0) 105A, I/O adaptors (0) 106A and (1) 106B, and an I/O controller (0) 107A. Additionally, the server system (0) 100A includes a power control unit 108A for controlling power supply to each of the physical resources.
The CPU's (0) 101A and (1) 101B execute operations of an operating system (OS) and application programs executed by the server system (0) 100A. As an example, the server system (0) 100A shown in FIG. 1B includes the two CPU's (0) 101A and (1) 101B. However, the server system (0) 100A may include only one CPU 101, or three or more CPU's 101.
The main memory (0) 104A stores programs and data necessary for operating the CPU's (0) 101A and (1) 101B.
The I/O controller (0) 107A interconnects the CPU's (0) 101A and (1) 101B, the non-volatile memory (0) 102A, the main memory (0) 104A, the LAN adaptor (0) 105A, and the I/O adaptors (0) 106A and (1) 106B to transfer data and control signals.
The I/O adaptors (0) 106A and (1) 106B are connected to the storage system 120 via the I/O channels 160A and 160B, respectively. The I/O adaptors (0) 106A and (1) 106B transmit data input/output requests to the storage system 120 and receive data stored in the storage system 120. FIG. 1B shows two I/O adaptors 106 for each server system 100. However, each server system 100 may include more I/O adaptors 106.
These two I/O adaptors (0) 106A and (1) 106B are operated independently to duplicate the processing system. Accordingly, even when a failure occurs in one I/O adaptor 106, access from the server system (0) 100A to the storage system 120 is not stopped.
The LAN adaptor (0) 105A is connected to the other server system (1) 100B, the storage system 120, and the control terminal 150 via the network 170. The LAN adaptor (0) 105A exchanges control information and management information with the devices connected via the network 170.
The non-volatile memory (0) 102A stores a hypervisor 103A. The hypervisor 103A is implemented by processing executed by the CPU 101 to create logical partitions of the physical resources of the server system (0) 100A.
The hypervisor 103A is read from the non-volatile memory 102A by a dedicated program when power is turned on for the server system (0) 100A. The hypervisor 103A is then started by executing that program to manage the resources of the server system (0) 100A. In other words, the hypervisor 103A is a management program for constituting logical partitions in the server system (0) 100A and generating virtual machines which operate independently.
Instead of starting the hypervisor 103A when power is turned on for the server system (0) 100A, a virtualization engine may be started when the OS of the server system (0) 100A starts, and a hypervisor may be configured by the OS and the virtualization engine. In this case, the OS started when power is turned on for the server system (0) 100A reads and executes the virtualization engine. The virtualization engine may be stored in the non-volatile memory 102A or in the storage system 120.
In most of the description below, software will be described as the subject of operations. In reality, however, the CPU 101 or the like executes the software to operate the hypervisor 103 or the like.
The hypervisor 103A may be constituted not of software but of hardware. For example, the server system (0) 100A may include a dedicated hypervisor chip, or the CPU 101 may include a hypervisor unit for managing virtual machines.
As the server system (1) 100B is similar in configuration to the server system (0) 100A, description thereof will be omitted. Specifically, CPU's (2) 101C and (3) 101D, a non-volatile memory (1) 102B, a main memory (1) 104B, a LAN adaptor (1) 105B, I/O adaptors (2) 106C and (3) 106D, an I/O controller (1) 107B, and a power control unit 108B correspond to the CPU's (0) 101A and (1) 101B, the non-volatile memory (0) 102A, the main memory (0) 104A, the LAN adaptor (0) 105A, the I/O adaptors (0) 106A and (1) 106B, the I/O controller (0) 107A, and the power control unit 108A, respectively.
FIG. 1C is a block diagram showing a hardware configuration of the channel board 121 in the storage system 120 according to the first embodiment of this invention.
The channel board (0) 121A includes physical resources such as CPU's (4) 122A and (5) 122B, a main memory (2) 124A, a non-volatile memory (2) 125A, a LAN adaptor (2) 127A, an internal network adaptor (0) 128A, channel adaptors (0) 129A and (1) 129B, and an I/O controller (2) 130A. Additionally, the channel board (0) 121A includes a power control unit 123A for controlling power supply to each of the physical resources.
The CPU's (4) 122A and (5) 122B execute operations of various management programs executed by the storage system 120.
The main memory (2) 124A stores programs and data necessary for operating the CPU's (4) 122A and (5) 122B.
The non-volatile memory (2) 125A stores a storage hypervisor 126A. As in the case of the hypervisor 103, the storage hypervisor 126A is implemented by processing executed by the CPU 122 to create logical partitions of the physical resources of the storage system 120.
The storage hypervisor 126A is implemented by a management program for constituting logical partitions in the storage system 120 and generating virtual storage systems which operate independently. To implement the storage hypervisor 126A, various methods can be employed, as in the case of the hypervisor 103 of the server system (0) 100A.
The LAN adaptor (2) 127A is connected to the server systems 100, the control terminal 150, the disk boards 132, and the other channel board 121 via the network 170. The LAN adaptor (2) 127A exchanges control signals and management information with the devices connected via the network 170.
The internal network adaptor (0) 128A is connected to the disk boards 132, the disk cache boards 142, and the other channel board 121 via the internal network 131. The internal network adaptor (0) 128A transfers data and the like with each unit connected via the internal network 131.
The channel adaptors (0) 129A and (1) 129B are connected to the server systems (0) 100A and (1) 100B via the I/O channels 160A and 160C, respectively. The channel adaptors (0) 129A and (1) 129B receive data input/output requests from the server systems 100 and transmit data stored in the storage system 120. FIG. 1C shows two channel adaptors 129 for each channel board 121. However, each channel board 121 may include more channel adaptors 129.
The I/O controller (2) 130A interconnects the CPU's (4) 122A and (5) 122B, the main memory (2) 124A, the non-volatile memory (2) 125A, the LAN adaptor (2) 127A, the internal network adaptor (0) 128A, and the channel adaptors (0) 129A and (1) 129B to transfer data and control signals.
As the channel board (1) 121B is similar in configuration to the channel board (0) 121A, description thereof will be omitted. Specifically, CPU's (6) 122C and (7) 122D, a power control unit 123B, a main memory (3) 124B, a non-volatile memory (3) 125B, a LAN adaptor (3) 127B, an internal network adaptor (1) 128B, channel adaptors (2) 129C and (3) 129D, and an I/O controller (3) 130B correspond to the CPU's (4) 122A and (5) 122B, the power control unit 123A, the main memory (2) 124A, the non-volatile memory (2) 125A, the LAN adaptor (2) 127A, the internal network adaptor (0) 128A, the channel adaptors (0) 129A and (1) 129B, and the I/O controller (2) 130A, respectively.
The channel boards (0) 121A and (1) 121B are operated independently to duplicate the processing system. Accordingly, even when a failure occurs in one channel board 121, the storage system 120 is not stopped. The storage system 120 may include more channel boards 121.
FIG. 1D is a block diagram showing a hardware configuration of the disk board 132 in the storage system 120 according to the first embodiment of this invention.
The disk board (0) 132A includes physical resources such as a CPU (8) 133A, a CPU (9) 133B, a main memory (4) 135A, a non-volatile memory (4) 136A, a LAN adaptor (4) 138A, an I/O adaptor (4) 139A, an internal network adaptor (2) 140A, and an I/O controller (4) 141A. Additionally, the disk board (0) 132A includes a power control unit 134A for controlling power supply to each of the physical resources.
The CPU's (8) 133A and (9) 133B execute operations of various programs executed in the storage system 120.
The main memory (4) 135A stores programs and data necessary for operating the CPU's (8) 133A and (9) 133B.
The non-volatile memory (4) 136A stores a storage hypervisor 137A. As in the case of the hypervisor 103, the storage hypervisor 137A is implemented by processing executed by the CPU 133 to create logical partitions of the physical resources of the storage system 120.
The storage hypervisor 137A is implemented by a management program for constituting logical partitions of the storage system 120 and generating virtual storage systems which operate independently. To implement the storage hypervisor 137A, various methods can be employed, as in the case of the hypervisor 103A of the server system (0) 100A.
The LAN adaptor (4) 138A is connected to the server systems 100, the control terminal 150, the channel boards 121, and the other disk board 132 via the network 170. The LAN adaptor (4) 138A exchanges control signals and management information with the devices connected via the network 170.
The I/O adaptor (4) 139A is connected to the physical disk drives 148. The I/O adaptor (4) 139A transmits data input/output requests to the physical disk drives 148 and receives data stored in the physical disk drives 148.
The internal network adaptor (2) 140A is connected to the channel boards 121, the disk cache boards 142, and the other disk board 132 via the internal network 131. The internal network adaptor (2) 140A transfers data and the like with the units connected via the internal network 131.
The I/O controller (4) 141A interconnects the CPU's (8) 133A and (9) 133B, the main memory (4) 135A, the non-volatile memory (4) 136A, the LAN adaptor (4) 138A, the I/O adaptor (4) 139A, and the internal network adaptor (2) 140A to transfer data and control signals.
As the disk board (1) 132B is similar in configuration to the disk board (0) 132A, description thereof will be omitted. Specifically, a CPU (10) 133C, a CPU (11) 133D, a power control unit 134B, a main memory (5) 135B, a non-volatile memory (5) 136B, a LAN adaptor (5) 138B, an I/O adaptor (5) 139B, an internal network adaptor (3) 140B, and an I/O controller (5) 141B correspond to the CPU's (8) 133A and (9) 133B, the power control unit 134A, the main memory (4) 135A, the non-volatile memory (4) 136A, the LAN adaptor (4) 138A, the I/O adaptor (4) 139A, the internal network adaptor (2) 140A, and the I/O controller (4) 141A, respectively.
The disk boards (0) 132A and (1) 132B are operated independently to duplicate the processing system. Accordingly, even when a failure occurs in one disk board 132, the storage system 120 is not stopped. The storage system 120 may include more disk boards 132.
FIG. 1E is a block diagram showing a hardware configuration of the disk cache board 142 in the storage system 120 according to the first embodiment of this invention.
The disk cache board (0) 142A includes a disk cache controller 143A, and disk caches (0) 144A and (1) 144B.
The disk caches (0) 144A and (1) 144B are memories for temporarily storing data read from or written to the physical disk drives 148. By temporarily storing data in the disk caches 144, access performance from the server systems 100 to the storage system 120 is improved. The disk cache controller 143A controls writing and reading of data to and from the disk caches (0) 144A and (1) 144B.
As the disk cache board (1) 142B is similar in configuration to the disk cache board (0) 142A, description thereof will be omitted. Specifically, a disk cache controller 143B, and disk caches (2) 144C and (3) 144D correspond to the disk cache controller 143A, and the disk caches (0) 144A and (1) 144B, respectively.
FIG. 2 is an explanatory diagram of a power supply system of the server system 100 according to the first embodiment of this invention.
The power supply system of the server system 100 includes an AC power source 201, a system power switch 202, a power control unit 108, and the physical resources (i.e., the CPU's 101, the I/O controller 107, the main memory 104, the non-volatile memory 102, the I/O adaptors 106, and the LAN adaptor 105) which receive power supply.
The AC power source 201 is the source of power supplied to the server system 100. For example, the AC power source 201 may be commercial power supplied from a power company or any other type of AC power source.
The system power switch 202 switches between inputting (i.e., starting power supply) and cutting off (i.e., ending power supply) of the power supplied from the AC power source 201 to the server system 100. When the system power switch 202 is turned off, power supply to the entire server system 100 is completely stopped.
The power supply system of the server system 100 is divided into two areas, i.e., a standby power supply area 204 and a main power supply area 205. The power control unit 108 and the LAN adaptor 105 belong to the standby power supply area 204, while the CPU's 101, the I/O controller 107, the main memory 104, the non-volatile memory 102, and the I/O adaptors 106 belong to the main power supply area 205.
Power is supplied to the standby power supply area 204 as long as the AC power source 201 is operating and the system power switch 202 is turned on. In other words, the power supplied to the standby power supply area 204 is not cut off by the power control unit 108.
On the other hand, power supplied to the main power supply area 205 is controlled by the power control unit 108. In other words, the power control unit 108 controls inputting and cutting off of power to the physical resources belonging to the main power supply area 205.
The power control unit 108 can control power supply to the physical resources in response to a request which the LAN adaptor 105 receives via the network 170. For example, upon reception of a request to supply power to the main power supply area 205, the LAN adaptor 105 transmits a main power on interruption signal 203 to the power control unit 108. The power control unit 108 that has received the main power on interruption signal 203 supplies power to the main power supply area 205. Alternatively, the CPU 101 can instruct the power control unit 108 to turn on/off power to resources such as the I/O adaptors 106.
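The wake-up path just described can be sketched as follows. The class and method names are hypothetical stand-ins for the hardware signals, assuming only what the text states: the LAN adaptor 105 stays powered in the standby area and forwards a main power on interruption signal 203 to the power control unit 108.

# Minimal sketch of the wake-up path described above; all class and method
# names are hypothetical, not taken from the patent.
class PowerControlUnit:
    """In the standby power supply area; switches the main power supply area."""
    def __init__(self):
        self.main_area_powered = False

    def main_power_on_interrupt(self):
        # Corresponds to receiving the main power on interruption signal 203.
        self.main_area_powered = True  # start supplying the CPU's, memories, I/O adaptors

    def set_resource_power(self, resource, on):
        # Invoked by the CPU to turn individual resources (e.g., an I/O adaptor) on or off.
        print(f"{resource}: {'on' if on else 'off'}")

class LanAdaptor:
    """Always powered (standby power supply area); listens on the network 170."""
    def __init__(self, power_control_unit):
        self.pcu = power_control_unit

    def on_network_request(self, request):
        if request == "POWER_ON_MAIN":
            self.pcu.main_power_on_interrupt()

# Usage: a power-on request arriving over the network 170 wakes the main area.
pcu = PowerControlUnit()
LanAdaptor(pcu).on_network_request("POWER_ON_MAIN")
assert pcu.main_area_powered

The same standby/main split recurs for the channel board 121 (FIG. 4) and the disk board 132 (FIG. 5), with the signals 401 and 501 playing the role of the signal 203.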
FIG. 3 is an explanatory diagram of a power supply system of the storage system 120 according to the first embodiment of this invention.
The power supply system of the storage system 120 includes AC power sources 301A and 301B, system power switches 302A and 302B, the system power control units 146A and 146B, the batteries 147A and 147B, and the physical resources (i.e., the channel boards 121, the disk boards 132, the disk cache boards 142, and the physical disk drives 148) which receive power supply.
Each of the AC power sources 301A and 301B is a source of power supplied to the storage system 120. As in the case of the AC power source 201 of FIG. 2, the AC power sources 301A and 301B may be any type of AC power source.
The system power switches 302A and 302B are similar to the system power switch 202 of FIG. 2.
The system power control units 146A and 146B, the batteries 147A and 147B, and the physical resources are as described above referring to FIG. 1A, and thus description thereof will be omitted. Alternatively, the channel board 121A and the disk board 132A can instruct the system power control units 146A and 146B to turn on/off power to resources such as the disk caches 144 and the physical disk drives 148.
As shown in FIG. 3, the power supply system of the storage system 120 includes two each of the AC power source 301, the system power switch 302, the system power control unit 146, and the battery 147, which operate independently. Accordingly, even when a failure occurs in one of the two, the other can supply power.
FIG. 4 is an explanatory diagram of a power supply system of the channel board 121 of the storage system 120 according to the first embodiment of this invention.
The power supply system of the channel board 121 includes a power control unit 123 and the physical resources (i.e., the CPU's 122, the I/O controller 130, the main memory 124, the non-volatile memory 125, the channel adaptors 129, the internal network adaptor 128, and the LAN adaptor 127) which receive power supply.
Upon reception of power supplied from the system power control units 146A and 146B, the power control unit 123 controls the supply of power to the physical resources.
The power supply system of the channel board 121 is divided into two areas, i.e., a standby power supply area 402 and a main power supply area 403. The power control unit 123 and the LAN adaptor 127 belong to the standby power supply area 402, while the CPU's 122, the I/O controller 130, the main memory 124, the non-volatile memory 125, the channel adaptors 129, and the internal network adaptor 128 belong to the main power supply area 403.
As in the case of the standby power supply area 204 shown in FIG. 2, power supplied to the standby power supply area 402 is not cut off by the power control unit 123. In other words, power is supplied to the standby power supply area 402 as long as power is supplied from at least one of the system power control units 146A and 146B.
On the other hand, power supplied to the main power supply area 403 is controlled by the power control unit 123. In other words, the power control unit 123 controls inputting and cutting off of power to the physical resources belonging to the main power supply area 403.
The power control unit 123 can control power supply to the physical resources in response to a request which the LAN adaptor 127 receives via the network 170. For example, upon reception of a request to supply power to the main power supply area 403, the LAN adaptor 127 transmits a main power on interruption signal 401 to the power control unit 123. The power control unit 123 that has received the main power on interruption signal 401 supplies power to the main power supply area 403. Alternatively, the CPU 122 can instruct the power control unit 123 to turn on/off power to resources such as the I/O controller 130, the main memory 124, the non-volatile memory 125, the channel adaptors 129, and the internal network adaptor 128.
FIG. 5 is an explanatory diagram of a power supply system of the disk board 132 of the storage system 120 according to the first embodiment of this invention.
The power supply system of the disk board 132 includes a power control unit 134 and the physical resources (i.e., the CPU's 133, the I/O controller 141, the main memory 135, the non-volatile memory 136, the I/O adaptor 139, the internal network adaptor 140, and the LAN adaptor 138) which receive power supply.
Upon reception of power supplied from the system power control units 146A and 146B, the power control unit 134 controls the supply of power to the physical resources.
The power supply system of the disk board 132 is divided into two areas, i.e., a standby power supply area 502 and a main power supply area 503. The power control unit 134 and the LAN adaptor 138 belong to the standby power supply area 502, while the CPU's 133, the I/O controller 141, the main memory 135, the non-volatile memory 136, the I/O adaptor 139, and the internal network adaptor 140 belong to the main power supply area 503.
As in the case of the standby power supply area 204 shown in FIG. 2, power supplied to the standby power supply area 502 is not cut off by the power control unit 134. In other words, power is supplied to the standby power supply area 502 as long as power is supplied from at least one of the system power control units 146A and 146B.
On the other hand, power supplied to the main power supply area 503 is controlled by the power control unit 134. In other words, the power control unit 134 controls inputting and cutting off of power to the physical resources belonging to the main power supply area 503.
The power control unit 134 can control power supply to the physical resources in response to a request which the LAN adaptor 138 receives via the network 170. For example, upon reception of a request to supply power to the main power supply area 503, the LAN adaptor 138 transmits a main power on interruption signal 501 to the power control unit 134. The power control unit 134 that has received the main power on interruption signal 501 supplies power to the main power supply area 503. Alternatively, the CPU 133 can instruct the power control unit 134 to turn on/off power to resources such as the I/O controller 141, the main memory 135, the non-volatile memory 136, the I/O adaptor 139, and the internal network adaptor 140.
FIG. 6A is a functional block diagram of the computer system according to the first embodiment of this invention.
In terms of functions, the server system (0) 100A includes a physical layer, a hypervisor layer, and a virtual machine layer.
The physical layer of the server system (0) 100A is a physical machine (0) 601A, which includes server resources such as a CPU, a LAN adaptor, and an I/O adaptor.
The hypervisor layer is implemented by the hypervisor 103A. The server resources of the physical machine (0) 601A are managed by the hypervisor 103A.
Parenthesized numerals added after the names of the physical resources are identifiers of the physical resources. Similarly, parenthesized numerals added after the names of the virtual resources are identifiers of the virtual resources.
The virtual machine layer includes virtual machines (0) 602A and (1) 602B. These virtual machines are generated by the hypervisor 103A dividing the server resources of the physical machine (0) 601A into logical partitions. OS's (0) 603A and (1) 603B operate in the virtual machines (0) 602A and (1) 602B, respectively. The OS (0) 603A executes operations by using the server resources allocated to the virtual machine (0) 602A. The OS (1) 603B executes operations by using the server resources allocated to the virtual machine (1) 602B.
The server system (1) 100B has the same configuration as that of the server system (0) 100A. The explanation of the layers of the server system (0) 100A also applies to a physical machine (1) 601B, a hypervisor 103B, virtual machines (2) 602C and (3) 602D, and OS's (2) 603C and (3) 603D of the server system (1) 100B.
In terms of functions, the storage system 120 includes a physical layer, a hypervisor layer, and a virtual storage layer.
The physical layer of the storage system 120 is a physical storage system 611, which includes storage resources such as physical disk drives, CPU's, disk caches, LAN adaptors, and channel adaptors.
The hypervisor layer is implemented by a storage hypervisor 612. The storage hypervisor 612 corresponds to the storage hypervisors 126A, 126B, 137A, and 137B shown in FIGS. 1C and 1D.
The virtual storage layer includes virtual storage systems (0) 613A and (1) 613B. These virtual storage systems are generated by the storage hypervisor 612 dividing the storage resources of the physical storage system 611 into logical partitions.
Referring to FIGS. 6B and 6C, the layers of the server system 100 and the storage system 120 will be described below in detail.
The virtual machine control program 151 of the control terminal 150 is a program for managing the virtual machines 602 in the computer system. In the description below, processing executed by the control terminal 150 is actually realized when a CPU (not shown) of the control terminal 150 executes the virtual machine control program 151 stored in a memory (not shown).
The virtual machine control program 151 includes at least a storage resources control table 621 shown in FIG. 10, a virtual disk control table 622 shown in FIG. 8, a server resources control table 623 shown in FIG. 7, a server power control table 624 shown in FIG. 11, and a storage power control table 625 shown in FIG. 12. These tables will be described below in detail.
FIG. 6B is a functional block diagram of the server system 100 according to the first embodiment of this invention.
The physical machine (0) 601A of the server system (0) 100A includes at least the physical resources of the CPU's (0) 101A and (1) 101B, the main memory (0) 104A, the LAN adaptor (0) 105A, and the I/O adaptors (0) 106A and (1) 106B. These are similar to those described above referring to FIG. 1B. The physical machine (0) 601A may further include other physical resources; however, as they are unnecessary for the explanation of FIG. 6B, they are omitted.
The hypervisor 103A includes a server power control table 651A, a virtual disk control table 652A, and a server resources control table 653A. These tables may be similar to the server power control table 624, the virtual disk control table 622, and the server resources control table 623, respectively.
According to this embodiment, the tables 651A, 652A, and 653A may hold information regarding only the server system (0) 100A. Similarly, the tables 651B, 652B, and 653B of the server system (1) 100B described below may hold information regarding only the server system (1) 100B. The control terminal 150 may collect the pieces of information held in the tables of the hypervisors of the server systems 100 and the storage system 120 to generate the tables 621 to 625, which hold information regarding the entire computer system.
The virtual machine (0) 602A includes virtual I/O adaptors (0) 654A and (1) 654B, CPU resources 655A, and memory resources 656A. Similarly, the virtual machine (1) 602B includes virtual I/O adaptors (2) 654C and (3) 654D, CPU resources 655B, and memory resources 656B. These are virtual resources generated by the hypervisor 103A dividing the physical resources of the physical machine (0) 601A into logical partitions.
The configuration of each layer of the server system (1) 100B is similar to that of the server system (0) 100A. In other words, the physical machine (1) 601B includes at least the physical resources of the CPU's (2) 101C and (3) 101D, the main memory (1) 104B, the LAN adaptor (1) 105B, and the I/O adaptors (2) 106C and (3) 106D. The hypervisor 103B includes a server power control table 651B, a virtual disk control table 652B, and a server resources control table 653B.
The virtual machine (2) 602C includes virtual I/O adaptors (4) 654E and (5) 654F, CPU resources 655C, and memory resources 656C. Similarly, the virtual machine (3) 602D includes virtual I/O adaptors (6) 654G and (7) 654H, CPU resources 655D, and memory resources 656D. These are virtual resources generated by the hypervisor 103B dividing the physical resources of the physical machine (1) 601B into logical partitions.
FIG. 6C is a functional block diagram of the storage system 120 according to the first embodiment of this invention.
The physical storage system 611 of the storage system 120 includes at least the CPU's (4) 122A to (7) 122D, the CPU's (8) 133A to (11) 133D, the channel adaptors (0) 129A to (3) 129D, the LAN adaptors (2) 127A, (3) 127B, (4) 138A, and (5) 138B, the I/O adaptors (4) 139A and (5) 139B, the disk cache boards (0) 142A and (1) 142B, and the physical disk drives 148. These are similar to those described above referring to FIGS. 1C to 1E. The physical storage system 611 may include other physical resources; however, as they are unnecessary for the explanation of FIG. 6C, they are omitted.
The storage hypervisor 612 includes at least a virtual disk control table 661, a disk address translation table 662, a storage resources control table 663, a storage power control table 664, and one or more virtual disks 665. The virtual disk control table 661, the storage resources control table 663, and the storage power control table 664 may be similar to the virtual disk control table 622, the storage resources control table 621, and the storage power control table 625 managed by the control terminal 150, respectively. The control terminal 150 may collect the pieces of information held in the tables of the storage hypervisor 612, and may hold the collected pieces of information in the tables of the control terminal 150.
Referring to FIG. 9, the disk address translation table 662 will be described below in detail.
The virtual disks 665 are generated by the storage hypervisor 612 dividing the storage resources of the physical storage system 611 into logical partitions.
The virtual storage system (0) 613A includes at least virtual channel adaptors (0) 666A and (1) 666B, disk cache resources 667A, CPU resources 668A, internal network resources 669A, and one or more logical units 670. Similarly, the virtual storage system (1) 613B includes at least virtual channel adaptors (2) 666C and (3) 666D, disk cache resources 667B, CPU resources 668B, internal network resources 669B, and one or more logical units 670. These are virtual resources generated by the storage hypervisor 612 dividing the physical resources of the physical storage system 611 into logical partitions.
The logical unit 670 is a logical storage area provided to the server system 100. The OS 603 of the server system 100 recognizes each logical unit 670 as one disk. Each logical unit 670 is correlated with a virtual disk 665.
FIG. 6C shows two virtual storage systems 613. However, the storage system 120 may include more virtual storage systems 613 (e.g., virtual storage systems (2) and (3) (not shown)).
FIG. 7 is an explanatory diagram of the server resources control table 623 according to the first embodiment of this invention. The server resources control table 623 holds information for controlling the allocation of physical resources to the virtual resources of the server systems 100.
Specifically, the server resources control table 623 includes five columns: a virtual machine number 701, a CPU utilization rate 702, a memory capacity 703, a virtual I/O adaptor number 704, and an I/O adaptor number 705.
An identifier (i.e., a parenthesized numeral in FIG. 6A) of the virtual machine 602 is registered in the virtual machine number 701.
The CPU utilization rate 702 and the memory capacity 703 indicate the CPU resources 655 and the memory resources 656 allocated to each virtual machine 602, respectively. For example, in FIG. 7, "20%" and "512 MB" are registered as the CPU utilization rate 702 and the memory capacity 703 corresponding to the value "0" of the virtual machine number 701, respectively. This means that 20% of the CPU's (0) 101A and (1) 101B among the physical resources of the physical machine (0) 601A is allocated as the CPU resources 655A of the virtual machine (0) 602A, and a storage area of 512 megabytes (MB) of the main memory 104A is allocated as the memory resources 656A.
The identifiers of the virtual I/O adaptors 654 included in each virtual machine 602 are registered in the virtual I/O adaptor number 704.
In the example of FIG. 7, "0" and "1" are registered as the virtual I/O adaptor numbers 704 corresponding to the value "0" of the virtual machine number 701, and "2" and "3" are registered as the virtual I/O adaptor numbers 704 corresponding to the value "1" of the virtual machine number 701. This means that, as shown in FIG. 6B, the virtual machine (0) 602A includes the virtual I/O adaptors (0) 654A and (1) 654B, and the virtual machine (1) 602B includes the virtual I/O adaptors (2) 654C and (3) 654D.
The identifier of the I/O adaptor 106 allocated to each virtual I/O adaptor 654 is registered in the I/O adaptor number 705.
In the example of FIG. 7, "0", "1", "0", and "1" are registered as the I/O adaptor numbers 705 corresponding to the values "0", "1", "2", and "3" of the virtual I/O adaptor number 704, respectively. This means that the I/O adaptor (0) 106A is allocated to the virtual I/O adaptors (0) 654A and (2) 654C, and the I/O adaptor (1) 106B is allocated to the virtual I/O adaptors (1) 654B and (3) 654D. By thus allocating one physical resource to a plurality of virtual resources, it is possible to provide more virtual resources than physical resources.
In FIG. 7, information regarding the virtual machines (2) 602C and (3) 602D included in the server system (1) 100B is omitted. However, the server resources control table 623 holds information regarding all the virtual machines 602 in the computer system.
The server resources control table 653 of each server system 100 holds information similar to that shown in FIG. 7. However, the server resources control table 653 may hold information regarding only the virtual machines 602 included in the server system 100 which holds the table. For example, the server resources control table 653A of the server system (0) 100A may hold information regarding only the virtual machines (0) 602A and (1) 602B.
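Pictured as a data structure, the table might look as follows. This Python sketch is illustrative only: the rows for the virtual machine (0) 602A follow the 20%/512 MB example above, while the values for the virtual machine (1) 602B are made up, and all field names are hypothetical.

# Hypothetical in-memory form of the server resources control table 623
# (the patent specifies only the column semantics, not an implementation).
server_resources_control_table = [
    # virtual machine 0: 20% of the CPU's, 512 MB, virtual I/O adaptors 0 and 1
    {"vm": 0, "cpu_rate": 0.20, "mem_mb": 512,  "vio": 0, "io_adaptor": 0},
    {"vm": 0, "cpu_rate": 0.20, "mem_mb": 512,  "vio": 1, "io_adaptor": 1},
    # virtual machine 1: illustrative values only
    {"vm": 1, "cpu_rate": 0.30, "mem_mb": 1024, "vio": 2, "io_adaptor": 0},
    {"vm": 1, "cpu_rate": 0.30, "mem_mb": 1024, "vio": 3, "io_adaptor": 1},
]

def physical_io_adaptors_of(vm_number):
    """I/O adaptors 106 backing the virtual I/O adaptors 654 of one virtual machine."""
    return {row["io_adaptor"] for row in server_resources_control_table
            if row["vm"] == vm_number}

# Both virtual machines share the I/O adaptors 0 and 1, illustrating how one
# physical resource can back a plurality of virtual resources.
assert physical_io_adaptors_of(0) == {0, 1}
assert physical_io_adaptors_of(1) == {0, 1}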
FIG. 8 is an explanatory diagram of the virtual disk control table 622 according to the first embodiment of this invention.
The virtual disk control table 622 holds information for controlling the allocation of virtual disks 665 to the virtual machines 602.
Specifically, the virtual disk control table 622 includes four columns: a virtual machine number 801, a virtual storage number 802, a logical unit number 803, and a virtual disk number 804.
An identifier of the virtual machine 602 is registered in the virtual machine number 801, as in the case of the virtual machine number 701 of FIG. 7.
An identifier of the virtual disk 665 allocated to the virtual machine 602 is registered in the virtual disk number 804.
An identifier of the logical unit 670 correlated with the virtual disk 665 allocated to the virtual machine 602 is registered in the logical unit number 803.
An identifier of the virtual storage system 613 to which the logical unit 670 allocated to the virtual machine 602 belongs is registered in the virtual storage number 802.
In the example of FIG. 8, three logical units 670 of the virtual storage system (0) 613A are allocated to the virtual machine (0) 602A. The identifiers of these logical units 670 are "0", "1", and "2", respectively. The logical units 670 are correlated with the virtual disks 665 having the identifiers "121", "122", and "123", respectively.
In the example of FIG. 8, one virtual storage system 613 is allocated to one virtual machine 602. However, a plurality of virtual machines 602 may share one virtual storage system 613. In this case, a plurality of values (identifiers) are registered as the virtual storage numbers 802 corresponding to the value of one virtual machine number 801.
In the example of FIG. 8, one virtual disk 665 is correlated with one logical unit 670. However, a plurality of virtual disks 665 may be correlated with one logical unit 670. In this case, a plurality of values (identifiers) are registered as the virtual disk numbers 804 corresponding to the value of one logical unit number 803.
In FIG. 8, information regarding the virtual machines (2) 602C and (3) 602D included in the server system (1) 100B is omitted. However, the virtual disk control table 622 holds information for managing the allocation of the virtual disks 665 regarding all the virtual machines 602 in the computer system.
The virtual disk control table 652 of each server system 100 also holds information similar to that shown in FIG. 8. However, the virtual disk control table 652 may hold information regarding only the virtual machines 602 included in the server system 100 which holds the table. For example, the virtual disk control table 652A of the server system (0) 100A may hold information regarding only the virtual disks 665 allocated to the virtual machines (0) 602A and (1) 602B.
The virtual disk control table 661 of the storage system 120 also holds information similar to that shown in FIG. 8. When the computer system includes a plurality of storage systems 120, the virtual disk control table 661 may hold information regarding only the virtual disks 665 of the storage system 120 which holds the table.
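A hypothetical in-memory form of this table, following the FIG. 8 example above, might be sketched as follows; the field names are assumptions.

# Rows corresponding to the FIG. 8 example: the virtual machine (0) 602A uses
# the logical units 0-2 of the virtual storage system (0) 613A, which are
# backed by the virtual disks 121-123.
virtual_disk_control_table = [
    {"vm": 0, "vstorage": 0, "lu": 0, "vdisk": 121},
    {"vm": 0, "vstorage": 0, "lu": 1, "vdisk": 122},
    {"vm": 0, "vstorage": 0, "lu": 2, "vdisk": 123},
]

def virtual_storage_systems_of(vm_number):
    """Virtual storage systems 613 whose logical units 670 are allocated to a VM."""
    return {r["vstorage"] for r in virtual_disk_control_table if r["vm"] == vm_number}

# This is the lookup that lets a shutdown of one virtual machine be traced to
# the virtual storage systems it was using.
assert virtual_storage_systems_of(0) == {0}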
FIG. 9 is an explanatory diagram of the disk address translation table 662 according to the first embodiment of this invention.
The disk address translation table 662 holds information for managing the correlation between the virtual disks 665 and the physical disk drives 148 allocated to the virtual disks 665.
Specifically, the disk address translation table 662 includes four columns: a virtual disk number 901, a virtual block address 902, a physical disk number 903, and a physical block address 904.
An identifier of the virtual disk 665 is registered in the virtual disk number 901.
A virtual block address for uniquely identifying, in each virtual disk 665, a logical block of the virtual disk 665 is registered in the virtual block address 902.
An identifier of the physical disk drive 148 allocated to the virtual disk 665 is registered in the physical disk number 903.
A physical block address for uniquely identifying, in each physical disk drive 148, a logical block of the physical disk drive 148 allocated to the virtual disk 665 is registered in the physical block address 904.
A logical block is an area of a predetermined size treated as the management unit of a storage area. For example, when the SCSI standard is applied, a logical block is a storage area of 512 bytes.
In the example of FIG. 9, "0x00000000" and "0x80000000" are registered as the virtual block addresses 902 corresponding to the value "121" of the virtual disk number 901. "8" and "9" are registered as the physical disk numbers 903 corresponding to the values "0x00000000" and "0x80000000" of the virtual block addresses 902, respectively. "0x00000000" and "0x00000000" are registered as the physical block addresses 904 corresponding to the values "0x00000000" and "0x80000000" of the virtual block addresses 902.
This means that an area starting from the address "0x00000000" of the physical disk drive 148 having the identifier "8" is allocated to the area starting from the address "0x00000000" of the virtual disk 665 having the identifier "121", and an area starting from the address "0x00000000" of the physical disk drive 148 having the identifier "9" is allocated to the area starting from the address "0x80000000" of the virtual disk 665 having the identifier "121".
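The translation implied by this example can be sketched as follows; the extent-based lookup rule is an assumption, since the patent specifies only the table columns and the sample values above.

# A minimal sketch of the address translation implied by FIG. 9. Each entry
# maps a range of virtual blocks (starting at vba) of a virtual disk 665 to a
# physical disk drive 148 (starting at pba).
disk_address_translation_table = [
    # (virtual disk, virtual block address, physical disk, physical block address)
    (121, 0x00000000, 8, 0x00000000),
    (121, 0x80000000, 9, 0x00000000),
]

def translate(vdisk, vblock):
    """Translate a virtual (disk, block) pair to a physical (disk, block) pair."""
    # Pick the extent with the largest starting address not exceeding vblock.
    best = None
    for vd, vba, pd, pba in disk_address_translation_table:
        if vd == vdisk and vba <= vblock and (best is None or vba > best[1]):
            best = (vd, vba, pd, pba)
    if best is None:
        raise KeyError("unallocated virtual block")
    _, vba, pd, pba = best
    return pd, pba + (vblock - vba)  # keep the same offset within the extent

# Block 0x80000005 of the virtual disk 121 lands on the physical disk drive 9,
# block 0x00000005.
assert translate(121, 0x80000005) == (9, 0x00000005)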
FIG. 10 is an explanatory diagram of the storage resources control table 621 according to the first embodiment of this invention.
The storage resources control table 621 holds information for controlling the allocation of the virtual storage systems 613 to the virtual machines 602 and of the physical resources to the virtual resources of the storage system 120.
Specifically, the storage resources control table 621 includes ten columns: a virtual machine number 1001, a virtual storage system number 1002, a virtual disk number 1003, a disk cache capacity 1004, a CPU 1005 in charge, an internal bandwidth 1006, a virtual channel adaptor 1007, a channel adaptor 1008, an I/O adaptor 1009, and a virtual I/O adaptor 1010.
The virtual machine number 1001, the virtual storage system number 1002, and the virtual disk number 1003 are similar to the virtual machine number 801, the virtual storage number 802, and the virtual disk number 804 of FIG. 8, respectively, and thus description thereof will be omitted.
The capacity of the disk cache 144 allocated as the disk cache resources 667 of each virtual storage system 613 is registered in the disk cache capacity 1004.
The identifiers of the CPU's 122 and 133 allocated as the CPU resources 668 of each virtual storage system 613 are registered in the CPU 1005 in charge.
The bandwidth of the internal network 131 allocated as the internal network resources 669 included in each virtual storage system 613 is registered in the internal bandwidth 1006.
The identifier of the virtual channel adaptor 666 included in each virtual storage system 613 is registered in the virtual channel adaptor 1007.
The identifier of the channel adaptor 129 allocated to the virtual channel adaptor 666 of each virtual storage system 613 is registered in the channel adaptor 1008.
The identifier of the virtual I/O adaptor included in each virtual storage system 613 is registered in the virtual I/O adaptor 1010.
The identifier of the I/O adaptor 139 allocated to the virtual I/O adaptor of each virtual storage system 613 is registered in the I/O adaptor 1009.
The storage resources control table 663 of the storage system 120 also holds information similar to that shown in FIG. 10. However, when the computer system includes a plurality of storage systems 120, the storage resources control table 663 of each storage system 120 may hold information regarding only that storage system 120. In this case, the storage resources control table 621 may hold information regarding all the storage systems, collected from all the storage systems 120 of the computer system.
FIG. 11 is an explanatory diagram of the server power control table624 according to the first embodiment of this invention.
The server power control table624 holds information for controlling a power state of each resource of theserver system100.
Specifically, the server power control table624 includes four columns of aresource classification1101, aresource1102, apower state1103, and a usedvirtual machine number1104.
The resource classification 1101 indicates whether each resource registered in the server power control table 624 is a physical or a virtual resource. A value "P" of the resource classification 1101 indicates a physical resource, and a value "V" indicates a virtual resource.
Names and identifiers of the physical and virtual resources included in the server system 100 are registered in the resource 1102.
A value indicating the power state of each resource, such as "full on" or "off", is registered in the power state 1103. "Full on" means that power is supplied to the resource and the resource is fully running. "Off" means that power supplied to the resource is cut. Alternatively, "sleep", indicating partial cutting of power supplied to the resource, "power saving mode", which lowers performance to suppress power consumption, or the like may be registered in the power state 1103. "Off" may further be distinguished between so-called mechanical off and soft off.
An identifier of the virtual machine 602 using each physical resource (i.e., the virtual machine 602 to which each physical resource is allocated) is registered in the used virtual machine number 1104. "none (n/a)" is set in the used virtual machine number 1104 corresponding to a virtual resource.
In FIG. 11, information regarding the resources included in the server system (1) 100B is omitted. However, the server power control table 624 holds information for managing the power states of the resources included in all the server systems 100 of the computer system.
The server power control table 651 of each server system 100 holds information similar to that shown in FIG. 11. However, the server power control table 651 may hold information regarding only the resources included in the server system 100 which holds the table. For example, the server power control table 651 of the server system (0) 100A may hold information regarding only the resources included in the server system (0) 100A.
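One entry of such a power control table can be pictured as pairing a resource with its power state and the set of virtual machines using it. The following Python sketch is a hypothetical illustration; PowerEntry and its fields are assumed names, and the state strings follow the alternatives mentioned above.

    from dataclasses import dataclass
    from typing import List, Optional

    # Hypothetical sketch of one row of the server power control table 624/651.
    @dataclass
    class PowerEntry:
        classification: str           # column 1101: "P" (physical) or "V" (virtual)
        resource: str                 # column 1102: e.g. "CPU(0)"
        power_state: str              # column 1103: "full on", "off", "sleep", ...
        used_by: Optional[List[int]]  # column 1104: virtual machine numbers, or None for "n/a"

    # Example: CPU(0) is fully running and shared by virtual machines 0 and 1.
    entry = PowerEntry("P", "CPU(0)", "full on", [0, 1])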
FIG. 12 is an explanatory diagram of the storage power control table 625 according to the first embodiment of this invention.
The storage power control table 625 holds information for controlling a power state of each resource of the storage system 120.
Specifically, the storage power control table 625 includes four columns: a resource classification 1201, a resource 1202, a power state 1203, and a used virtual storage system number 1204.
As in the case of the resource classification 1101, the resource classification 1201 indicates whether each resource registered in the storage power control table 625 is a physical or a virtual resource.
Names and identifiers of the physical and virtual resources included in the storage system 120 are registered in the resource 1202.
The power state 1203 indicates a power state of each resource, as in the case of the power state 1103.
An identifier of the virtual storage system 613 using each physical resource (i.e., the virtual storage system 613 to which each physical resource is allocated) is registered in the used virtual storage system number 1204. "none (n/a)" is set in the used virtual storage system number 1204 corresponding to a virtual resource.
The storage power control table 664 of the storage system 120 also holds information similar to that shown in FIG. 12. However, when the computer system includes a plurality of storage systems 120, the storage power control table 664 of each storage system 120 may hold information regarding only that storage system 120.
FIG. 13 is a flowchart showing resource allocation setting processing executed according to the first embodiment of this invention.
First, the user operates the control terminal 150 to set the physical resources to be allocated to the virtual machine 602 and the virtual storage system 613 (1301).
The control terminal 150 transmits the contents set for the virtual machine 602 in Step 1301 to the server system 100 (1302).
The hypervisor 103 of the server system 100 generates a virtual machine 602 according to the setting transmitted from the control terminal 150 (1303).
Then, the server system 100 reports setting completion to the control terminal 150 (1304).
Next, the control terminal 150 transmits the contents set for the virtual storage system 613 in Step 1301 to the storage system 120 (1305).
The storage hypervisor 612 of the storage system 120 generates a virtual storage system 613 according to the setting transmitted from the control terminal 150 (1306).
Then, the storage system 120 reports setting completion to the control terminal 150 (1307).
Thus, the resource allocation setting processing is finished.
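The sequence of FIG. 13 amounts to fanning the user's settings out from the control terminal to the two hypervisors. The following is a minimal sketch under that reading; the method names create_virtual_machine() and create_virtual_storage() are hypothetical stand-ins for the generation performed by the hypervisor 103 and the storage hypervisor 612.

    # Hypothetical sketch of the FIG. 13 sequence (Steps 1301 to 1307).
    def resource_allocation_setting(server_system, storage_system, vm_setting, vstorage_setting):
        server_system.create_virtual_machine(vm_setting)         # Steps 1302-1303
        # the server system reports setting completion (Step 1304)
        storage_system.create_virtual_storage(vstorage_setting)  # Steps 1305-1306
        # the storage system reports setting completion (Step 1307)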
FIG. 14 is a flowchart of boot processing of the virtual machine 602 executed according to the first embodiment of this invention.
First, the user determines whether the system power switch 302 of the storage system 120 has been turned on (1401).
If it is determined in Step 1401 that the system power switch 302 has been turned on, the process proceeds to Step 1403. On the other hand, if it is determined in Step 1401 that the system power switch 302 has not been turned on (i.e., power has been cut), the user turns on the system power switch 302 (1402).
Next, the user determines whether the system power switch 202 of the server system 100 has been turned on (1403).
If it is determined in Step 1403 that the system power switch 202 has been turned on, the process proceeds to Step 1405. On the other hand, if it is determined in Step 1403 that the system power switch 202 has not been turned on, the user turns on the system power switch 202 (1404).
Then, the user operates a console terminal to instruct boot of the virtual machine 602 (1405). The console terminal is a computer connected to each server system 100 to operate that server system 100. According to the first embodiment, the control terminal 150 is used as the console terminal.
The control terminal 150 determines whether the CPU 122 or the like allocated to the virtual machine 602 to be booted, among the CPUs 122 and the like of the storage system 120, has been turned on (1406). For this determination, the storage power control table 625 is referred to. In the explanation of FIG. 14, the CPU 122 or the like (i.e., the CPU 122 or the CPU 133) allocated to the virtual machine 602 to be booted is described as the relevant CPU 122 or the like. The allocation of the CPU 122 or the like is set by the processing shown in FIG. 13.
If it is determined in Step 1406 that the relevant CPU 122 or the like has been turned on, the process proceeds to Step 1408. On the other hand, if it is determined in Step 1406 that the relevant CPU 122 or the like has not been turned on, the control terminal 150 transmits an instruction to turn on the relevant CPU 122 or the like to the storage system 120 (1407). This instruction reaches the LAN adaptors 127 and 138 of the storage system 120 via the network 170. Alternatively, the following method may be employed. When the system power is turned on, at least one CPU 122 is operated. That CPU 122 executes the storage hypervisor 137; the control terminal 150 transmits a boot command for the relevant CPU to that CPU 122, and the storage hypervisor 137 boots the relevant CPU 122.
Next, the relevant CPU 122 or the like turns on each resource of the storage system 120 (1408). This power-on is executed by the power control units 123 and 134 of the storage system 120, which has received the instruction of Step 1407.
The storage system 120 executes initial setting processing of the storage system 120 (1409).
Then, the control terminal 150 determines whether the CPU 101 allocated to the virtual machine 602 to be booted, among the CPUs 101 of the server system 100, has been turned on (1410). For this determination, the server power control table 624 is referred to. In the explanation of FIG. 14, the CPU 101 allocated to the virtual machine 602 to be booted is described as the relevant CPU 101. The allocation of the CPU 101 is set by the processing shown in FIG. 13.
If it is determined in Step 1410 that the relevant CPU 101 has been turned on, the process proceeds to Step 1412. On the other hand, if it is determined in Step 1410 that the relevant CPU 101 has not been turned on, the control terminal 150 transmits an instruction to turn on the relevant CPU 101 to the server system 100 (1411). This instruction reaches the LAN adaptor 105 of the server system 100 via the network 170. Alternatively, the following method may be employed. When the system power is turned on, at least one CPU 101 is operated. The CPU 101 in operation executes the hypervisor 103; the control terminal 150 transmits a boot command for the relevant CPU to the CPU 101 in operation, and the hypervisor 103 starts the relevant CPU 101.
Next, the relevant CPU 101 turns on each resource of the server system 100 (1412). This power-on is executed by the power control unit 108 of the server system 100, which has received the instruction of Step 1411.
Next, the server system 100 executes initial setting processing of the server system 100 (1413).
Then, the server system 100 detects how a cable constituting the I/O channel 160 has been connected (Step 1414). This processing may be executed by the method shown in FIG. 15. As a result, it is discovered which of the channel adaptors 129 are connected to which of the I/O adaptors 106 by the I/O channel 160.
Next, referring to FIG. 10, the control terminal 150 creates a storage resources control table 621 based on the contents set in FIG. 13 and the contents detected in Step 1414 (1415).
Thus, the boot processing of the virtual machine 602 is finished.
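The power checks of Steps 1406 to 1412 follow one pattern: consult a power control table, and transmit a power-on instruction only for CPUs that are still off. The following is a minimal sketch under that reading; power_table, allocated_cpus, and send_power_on are hypothetical names, not from the specification.

    # Hypothetical sketch of Steps 1406-1407 (storage side) and 1410-1411 (server side).
    def ensure_powered_on(power_table, allocated_cpus, send_power_on):
        for cpu in allocated_cpus:
            if power_table[cpu].power_state != "full on":  # consult table 625 or 624
                send_power_on(cpu)                         # instruction sent via the network 170
                power_table[cpu].power_state = "full on"

    # The flowchart orders the checks storage side first, then server side:
    # ensure_powered_on(storage_power_table, storage_cpus_for_vm, storage_system.power_on)
    # ensure_powered_on(server_power_table, server_cpus_for_vm, server_system.power_on)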
FIG. 15 is a flowchart of processing executed at the time of cable connection according to the first embodiment of this invention.
The processing of FIG. 15 is executed in each server system 100 and the storage system 120. In the description below, execution by the server system 100 is taken as an example, but the storage system 120 executes similar processing.
First, the I/O adaptor 106 of the server system 100 detects connection of the cable (i.e., the I/O channel 160) (1501).
Next, the server system 100 exchanges physical addresses with an apparatus (e.g., the storage system 120 in the examples of FIGS. 1A, 1B, and 1C) communicable via the detected cable (1502). Referring to FIG. 1A, for example, when the server system 100 is connected to the storage system 120, the server system 100 makes an inquiry about a physical address to the storage system 120 to obtain a physical address of the channel adaptor 129 of the storage system 120.
Any physical address may be used for the exchange in Step 1502 as long as the port to which the cable is connected is uniquely specified. For example, when a fibre channel protocol is applied, the physical address may be a world wide name (WWN). Alternatively, when an iSCSI protocol is applied, the physical address may be a MAC address. The I/O adaptor 106 notifies the hypervisor 103 of the physical address obtained by the exchange.
The hypervisor 103 transmits the obtained cable connection state (i.e., the set of physical addresses of the mutually connected I/O adaptor 106 and channel adaptor 129) to the control terminal 150 via the network 170 (1503).
Thus, the processing executed at the time of cable connection is finished.
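In effect, each side of the cable learns the peer's port address and forwards the pair to the control terminal. A minimal sketch, assuming hypothetical names (CableConnection, query_physical_address(), and report_to_terminal() are illustrative, not from the specification):

    from dataclasses import dataclass

    # Hypothetical sketch of Steps 1501-1503.
    @dataclass
    class CableConnection:
        io_adaptor_address: str       # e.g. a WWN (fibre channel) or a MAC address (iSCSI)
        channel_adaptor_address: str

    def on_cable_connected(io_adaptor, peer, report_to_terminal):
        peer_address = peer.query_physical_address()       # Step 1502: exchange addresses
        link = CableConnection(io_adaptor.physical_address, peer_address)
        report_to_terminal(link)                           # Step 1503: via the network 170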
FIG. 16 is a flowchart of shutdown processing of the virtual machine 602 executed according to the first embodiment of this invention.
The processing of FIG. 16 is executed when the user powers off one of the virtual machines 602. In the description of FIG. 16, the virtual machine 602 which is to be powered off by the user is described as the relevant virtual machine 602.
First, the user operates the control terminal 150 to instruct the OS 603 operating on the relevant virtual machine 602 to shut down (1601).
Next, the OS 603 executes the instructed shutdown processing (1602).
Then, the OS 603 cuts off power of the relevant virtual machine 602 (1603). However, at this time, the OS 603 only issues a power cutoff command; the power has not actually been cut off yet.
Next, the hypervisor 103 specifies the resources used by the relevant virtual machine 602 (i.e., the resources allocated to the relevant virtual machine 602) (1604).
Then, the hypervisor 103 executes loop processing (1605 to 1608) for each resource specified in Step 1604. In this case, each resource specified in Step 1604 is described as the relevant resource.
In Step 1606, the hypervisor 103 determines whether the relevant resource is used by another virtual machine 602. In other words, the hypervisor 103 determines whether the relevant resource has also been allocated to a virtual machine 602 other than the relevant virtual machine 602. Specifically, the hypervisor 103 refers to the server power control table 651 to determine whether the used virtual machine number 1104 of the entry corresponding to the relevant resource includes an identifier other than that of the relevant virtual machine 602.
For example, consider a case where the relevant virtual machine 602 is the virtual machine (0) 602A and the relevant resource is the CPU (0) 101A. In this case, the hypervisor 103 refers to the entry where the resource 1102 is "CPU (0)" in the server power control table 651 to determine whether the used virtual machine number 1104 of that entry includes a value other than "0". If the server power control table 651 is as shown in FIG. 11, the used virtual machine number 1104 corresponding to the CPU (0) includes both "0" and "1". In this case, the CPU (0) 101A is also used by the virtual machine (1) 602B. Accordingly, in Step 1606, it is determined that the relevant resource has also been allocated to a virtual machine 602 other than the relevant virtual machine 602 (YES).
If it is determined in Step 1606 that the relevant resource has also been allocated to a virtual machine 602 other than the relevant virtual machine 602, the relevant resource is still used by one of the virtual machines 602. Accordingly, the power of the relevant resource cannot be cut off. In this case, the process proceeds to Step 1608 without cutting off the power of the relevant resource.
On the other hand, if it is determined in Step 1606 that the relevant resource has not been allocated to any virtual machine 602 other than the relevant virtual machine 602, then after the relevant virtual machine 602 shuts down, the relevant resource is not used by any virtual machine 602. Thus, the hypervisor 103 cuts off power of the relevant resource (1607). In other words, the hypervisor 103 instructs the power control unit 108 to cut off power, and the power control unit 108 cuts off power of the relevant resource.
When the loop processing has not been finished for all the relevant resources, the process returns to Step 1606 to execute processing for the remaining relevant resources (1608).
When power of one or more resources is cut off as a result of finishing the loop processing for all the relevant resources, the hypervisor 103 reports the cutting of the power of the resources to the control terminal 150 (1609).
Next, to reflect the reported cutting of power, the control terminal 150 updates the server power control table 624 (1610). Further, to reflect the reported cutting of power, the hypervisor 103 updates the server power control table 651.
Then, the control terminal 150 refers to the storage resources control table 621 to instruct the virtual storage system 613 allocated to the relevant virtual machine 602 to cut off its power (1611). At this time, the control terminal 150 specifies the virtual storage system 613 allocated to the relevant virtual machine 602. The virtual storage system 613 which is to be powered off by that instruction is described as the relevant virtual storage system 613 in the explanation of FIG. 16.
The virtual storage system 613 allocated to the relevant virtual machine 602 is specified by referring to the virtual machine number 1001 and the virtual storage system number 1002 of the storage resources control table 663 (or the storage resources control table 621) as shown in FIG. 10. For example, when the relevant virtual machine 602 is the virtual machine (0) 602A, "0" is registered in the virtual storage system number 1002 corresponding to the value "0" of the virtual machine number 1001. Accordingly, in this case, the virtual storage system (0) 613A is specified as the relevant virtual storage system 613.
Next, the storage hypervisor 612 specifies the resources allocated to the relevant virtual storage system 613 (1612). To this end, the storage hypervisor 612 refers to the virtual disk control table 661, the disk address translation table 662, and the storage resources control table 663. By referring to the virtual disk control table 661 and the disk address translation table 662, the physical disk drives 148 allocated to the relevant virtual storage system 613 can be specified. By referring to the storage resources control table 663, the CPUs 122 and 133, the channel adaptor 129, and the I/O adaptor 139 that are allocated to the relevant virtual storage system 613 can be specified.
Next, the storage hypervisor 612 executes loop processing (1613 to 1616) for each resource specified in Step 1612. In this case, each resource specified in Step 1612 is described as the relevant resource.
In Step 1614, the storage hypervisor 612 determines whether the relevant resource has also been allocated to a virtual storage system 613 other than the relevant virtual storage system 613. This determination is executed by the same method as that shown in Step 1606. Specifically, the storage hypervisor 612 refers to the storage power control table 664 to determine whether the used virtual storage system number 1204 of the entry corresponding to the relevant resource includes an identifier other than that of the relevant virtual storage system 613.
For example, when the relevant virtual storage system 613 is the virtual storage system (0) 613A and the relevant resource is the CPU (4) 122A, by referring to the resource 1202 and the used virtual storage system number 1204 of FIG. 12, it is determined that the CPU (4) 122A is used by both the virtual storage system (0) 613A and the virtual storage system (2) (not shown).
Alternatively, the storage hypervisor 612 may execute the determination of Step 1614 by referring to the storage resources control table 663. For example, by referring to the virtual storage system number 1002 and the CPU 1005 in charge shown in FIG. 10, it is discovered that the CPU (4) 122A is used by both the virtual storage system (0) 613A and the virtual storage system (2) (not shown).
If it is determined in Step 1614 that the relevant resource has been allocated to a virtual storage system 613 in addition to the relevant virtual storage system 613, the relevant resource is still used by one of the virtual storage systems 613. Accordingly, the power of the relevant resource cannot be cut off. In this case, the process proceeds to Step 1616 without cutting off the power of the relevant resource.
On the other hand, if it is determined in Step 1614 that the relevant resource has not been allocated to any virtual storage system 613 other than the relevant virtual storage system 613, then after the relevant virtual storage system 613 stops, the relevant resource is not used by any virtual storage system 613. Thus, the storage hypervisor 612 cuts off power of the relevant resource (1615). In other words, the storage hypervisor 612 instructs the power control units 123 and 134 to cut off power, and the power control units 123 and 134 cut off power of the relevant resource.
When the loop processing has not been finished for all the relevant resources, the process returns to Step 1614 to execute processing for the remaining relevant resources (1616).
When power of one or more resources is cut off as a result of finishing the loop processing for all the relevant resources, the storage hypervisor 612 reports the cutting of the power of the resources to the control terminal 150 (1617).
Next, to reflect the reported cutting of power, the control terminal 150 updates the storage power control table 625 (1618). Further, to reflect the reported cutting of power, the storage hypervisor 612 updates the storage power control table 664.
Thus, the shutdown processing of the virtual machine 602 is finished.
By executing the processing shown in FIG. 16, when the virtual machine 602 shuts down, the power of only those physical resources allocated exclusively to that virtual machine 602 is cut off. Additionally, when a virtual storage system 613 is allocated to the virtual machine 602, the power of only those physical resources allocated exclusively to that virtual storage system 613 is cut off. As a result, power consumption can be reduced in the entire computer system including the server system 100 and the storage system 120.
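The shared-resource check at the heart of FIG. 16 can be condensed to a few lines. The following is a minimal sketch, not the claimed implementation: power_table maps a physical resource to an entry like the PowerEntry sketched earlier, and cut_power() stands in for the power control units 108, 123, and 134.

    # Hypothetical sketch of Steps 1604-1607 (server side); the storage-side loop
    # of Steps 1612-1615 is symmetric, keyed by virtual storage system instead.
    def release_resources(power_table, resources_of_vm, vm_number, cut_power):
        for resource in resources_of_vm:       # resources of the shutting-down VM (Step 1604)
            entry = power_table[resource]      # physical resources only, so used_by is a list
            entry.used_by.remove(vm_number)
            if entry.used_by:                  # Step 1606: still allocated elsewhere?
                continue                       # a shared resource stays powered
            cut_power(resource)                # Step 1607: an exclusive resource is powered off
            entry.power_state = "off"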
Next, a second embodiment of this invention will be described. Differences of the second embodiment from the first embodiment will mainly be described below. Thus, points of the second embodiment not described are similar to those of the first embodiment.
FIG. 17A is a functional block diagram of a computer system according to the second embodiment of this invention.
According to the first embodiment, the control terminal 150 holds the information for controlling the entire computer system, and controls the entire computer system. According to the second embodiment, however, one of the server systems 100 holds the information for controlling the entire computer system, and controls the entire computer system.
Thus, the computer system of the second embodiment includes no control terminal 150, unlike the first embodiment. Instead, console terminals 1701A and 1701B are respectively connected to the server systems (0) 100A and (1) 100B. The console terminal 1701 is a computer for operating each server system 100.
The storage system 120, the I/O channel 160, and the network 170 of the second embodiment are similar to those of the first embodiment, and thus description thereof will be omitted.
FIG. 17B is a functional block diagram of the server system 100 according to the second embodiment of this invention.
The server system 100 of the second embodiment is similar to that of the first embodiment except for the tables and programs included in the hypervisor 103.
The hypervisor 103 of the second embodiment includes a server power control table 1702, a virtual disk control table 1703, a server resources control table 1704, a storage resources control table 1705, a storage power control table 1706, and a virtual machine control program 1707. The server power control table 1702, the virtual disk control table 1703, the server resources control table 1704, the storage resources control table 1705, and the storage power control table 1706 hold information for controlling the entire computer system. These tables are respectively similar to the server power control table 624, the virtual disk control table 622, the server resources control table 623, the storage resources control table 621, and the storage power control table 625 of the first embodiment, and thus description thereof will be omitted.
As in the case of the first embodiment, one of the server systems (0) 100A and (1) 100B may instead include a hypervisor 103 which holds only a server power control table 651, a virtual disk control table 652, and a server resources control table 653. This is because at least one of the plurality of server systems 100 needs to control the computer system. To increase fault tolerance of the computer system, however, as shown in FIG. 17B, the plurality of server systems 100 preferably all hold information for controlling the entire computer system.
Hereinafter, in the description of the second embodiment, the server system 100 means one of the one or more server systems 100 which hold information for controlling the entire computer system.
The hardware configuration of the computer system of the second embodiment is similar to that of the computer system of the first embodiment shown in FIGS. 1A, 1B, 1C, 1D, and 1E, except that the console terminals 1701 are disposed in place of the control terminal 150. Thus, description of the hardware configuration of the computer system of the second embodiment will be omitted.
FIG. 18 is an explanatory diagram of a power supply system of a channel board 121 of the storage system 120 according to the second embodiment of this invention.
The power supply system of the channel board 121 of the storage system 120 of the second embodiment is almost similar to that of the first embodiment shown in FIG. 4. However, the boundary between a main power supply area and a standby power supply area is different. Hereinafter, only the differences of FIG. 18 from FIG. 4 will be described.
The CPUs 122, an I/O controller 130, a main memory 124, a non-volatile memory 125, and an internal network adaptor 128 of the second embodiment belong to a main power supply area 1803. On the other hand, the channel adaptors 129 and a LAN adaptor 127 belong to a standby power supply area 1802. Power of the standby power supply area 1802 is not cut off by the power control unit 123. On the other hand, power of the main power supply area 1803 is controlled by the power control unit 123.
According to the first embodiment, the instruction of supplying/cutting of power is transmitted from the control terminal 150 to the server system 100 and the storage system 120 via the network 170 (so-called out-of-band). On the other hand, according to the second embodiment, the instruction of supplying/cutting of power is transmitted from the server system 100 to the storage system 120. In this case, the instruction may be transmitted via the network 170 (so-called out-of-band) or via the I/O channel 160 (so-called in-band).
When the instruction of power-on is transmitted via the I/O channel 160, the channel adaptor 129 must belong to the standby power supply area 1802 so that the instruction of power-on can be received while the power of the main power supply area 1803 is cut off. Upon reception of the instruction of power-on, the channel adaptor 129 transmits a main power-on interruption signal 1801 to the power control unit 123. The main power-on interruption signal 1801 is a signal similar to the main power-on interruption signal 401.
FIG. 19 is a flowchart of resource allocation setting processing executed according to the second embodiment of this invention.
The resource allocation setting processing executed according to the second embodiment is similar to that executed according to the first embodiment shown in FIG. 13 except for some Steps. The differences of the processing of FIG. 19 from the processing of FIG. 13 will be described.
Steps 1901 to 1903 of FIG. 19 respectively correspond to Steps 1301 to 1303 of FIG. 13. Steps 1904 to 1906 of FIG. 19 respectively correspond to Steps 1305 to 1307 of FIG. 13.
The computer system of the second embodiment includes no control terminal 150. Accordingly, in Step 1901, the user operates the console terminal 1701 to set the physical resources to be allocated to the virtual machine 602 and the virtual storage system 613. In Step 1902, the console terminal 1701 transmits the set contents to the server system 100. In Step 1904, the server system 100 transmits the contents set for the virtual storage system 613 to the storage system 120. In Step 1906, the storage system 120 reports setting completion to the server system 100.
FIG. 20 is a flowchart of boot processing of the virtual machine 602 executed according to the second embodiment of this invention.
In FIG. 20, the processing of Steps 2001 to 2004 is similar to that of Steps 1401 to 1404 of FIG. 14. Thus, description of these Steps will be omitted.
In Step 2005, the user operates the console terminal 1701 to instruct booting of the virtual machine 602.
Then, the hypervisor 103 determines whether power of the CPU 101 (in the explanation of FIG. 20, described as the relevant CPU 101) allocated to the virtual machine 602 to be booted, among the CPUs 101 of the server system 100, has been turned on (2006). For this determination, the server power control table 1702 is referred to. The allocation of the CPU 101 is set by the processing shown in FIG. 19.
If it is determined in Step 2006 that the relevant CPU 101 has been turned on, the process proceeds to Step 2008. On the other hand, if it is determined in Step 2006 that the relevant CPU 101 has not been turned on, the hypervisor 103 issues an instruction of power-on of the relevant CPU 101 (2007).
Next, the relevant CPU 101 turns on each resource of the server system 100 (2008).
Then, the server system 100 executes initial setting processing of the server system 100 (2009).
Then, the hypervisor 103 determines whether the CPU 122 or the like (in the explanation of FIG. 20, described as the relevant CPU 122 or the like) allocated to the virtual machine 602 to be booted, among the CPUs 122 and 133 of the storage system 120, has been turned on (2010). For this determination, the storage power control table 1706 is referred to. The allocation of the CPU 122 or the like is set by the processing shown in FIG. 19.
If it is determined in Step 2010 that the relevant CPU 122 or the like has been turned on, the process proceeds to Step 2012. On the other hand, if it is determined in Step 2010 that the relevant CPU 122 or the like has not been turned on, the hypervisor 103 transmits an instruction of power-on of the relevant CPU 122 or the like to the storage system 120 (2011). This instruction reaches the LAN adaptors 127 and 138 of the storage system 120 via the network 170.
Next, the relevant CPU 122 or the like turns on each resource of the storage system 120 (2012).
Then, the storage system 120 executes initial setting processing of the storage system 120 (2013).
Then, the hypervisor 103 detects how a cable constituting the I/O channel 160 has been connected (2014). This processing may be executed by the method shown in FIG. 15. As a result, it is discovered which of the channel adaptors 129 are connected to which of the I/O adaptors 106 by the I/O channel 160.
Next, the virtual machine control program 1707 of the hypervisor 103 creates a storage resources control table 1705 based on the contents set in FIG. 19 and the contents detected in Step 2014 (2015).
Thus, the boot processing of the virtual machine 602 is finished.
In Step 2011, the instruction of power-on of the CPU 122 or the like is transmitted to the storage system 120 via the network 170. This is because data cannot be transmitted via the I/O channel 160 before the connection state of the cable is detected in Step 2014. On the other hand, after it has been discovered which of the I/O adaptors 106 is connected to which of the channel adaptors 129 as a result of detecting the connection state of the cable, the server system 100 can transmit the instruction of supplying/cutting of power to the storage system 120 via the I/O channel 160 (i.e., so-called in-band).
FIG. 21 is a flowchart of shutdown processing of the virtual machine 602 executed according to the second embodiment of this invention.
The processing of FIG. 21 is executed when power of one of the virtual machines 602 is cut off. In the description of FIG. 21, the virtual machine 602 which is to be powered off by the user is described as the relevant virtual machine 602.
First, the user operates the console terminal 1701 to instruct the OS 603 operating on the relevant virtual machine 602 to shut down (2101).
Steps 2102 to 2104 are respectively similar to Steps 1602 to 1604 of FIG. 16, and thus description thereof will be omitted.
Next, the hypervisor 103 executes loop processing (2105 to 2109) for each resource specified in Step 2104. In this case, each resource specified in Step 2104 is described as the relevant resource.
In Step 2106, the hypervisor 103 determines whether the relevant resource is used by another virtual machine 602. This determination is executed by the same method as that of Step 1606 of FIG. 16.
If it is determined in Step 2106 that the relevant resource has been allocated to a virtual machine 602 in addition to the relevant virtual machine 602, the relevant resource is still used by one of the virtual machines 602. Accordingly, the power of the relevant resource cannot be cut off. In this case, the process proceeds to Step 2109 without cutting off the power of the relevant resource.
On the other hand, if it is determined in Step 2106 that the relevant resource has not been allocated to any virtual machine 602 other than the relevant virtual machine 602, then after the relevant virtual machine 602 shuts down, the relevant resource is not used by any virtual machine 602. Thus, power of the relevant resource can be cut off. However, according to the second embodiment, supplying/cutting of power of the server system 100 and the storage system 120 is controlled by the CPU 101 which executes the hypervisor 103 of the server system 100. Accordingly, if power of all the CPUs 101 of the server system 100 were cut off, it would become impossible to turn the CPUs 101 on again. Thus, when the relevant resource is the only currently operating CPU 101, its power cannot be cut off.
Thus, if it is determined in Step 2106 that the relevant resource has not been allocated to any virtual machine 602 other than the relevant virtual machine 602, the hypervisor 103 determines whether the relevant resource is a CPU 101 and whether the number of currently operating CPUs 101 is one (2107).
If it is determined in Step 2107 that the relevant resource is a CPU 101 and the number of currently operating CPUs 101 is one, the relevant resource is the only currently operating CPU 101. In this case, the process proceeds to Step 2109 without cutting off power of the resource.
If it is determined in Step 2107 that the relevant resource is not a CPU 101, or that the number of currently operating CPUs 101 is not one, the relevant resource is not the only currently operating CPU 101. In this case, the hypervisor 103 cuts off power of the relevant resource (2108).
When the server system 100 includes a CPU (not shown) which is not a power control target of the hypervisor 103, and this CPU is used only for controlling power of each resource, it is not necessary to execute the determination of Step 2107. In this case, when "no" is determined in Step 2106, Step 2108 is executed.
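The guard of Step 2107 simply refuses to power off the CPU that keeps the power control path alive. A minimal sketch under that reading; is_cpu() and operating_cpu_count() are hypothetical helpers, not names from the specification.

    # Hypothetical sketch of the Step 2107 determination.
    def may_cut_power(resource, is_cpu, operating_cpu_count):
        # Never power off the last operating CPU 101: the hypervisor 103 running
        # on it is what controls supplying/cutting of power in this embodiment.
        if is_cpu(resource) and operating_cpu_count() == 1:
            return False   # proceed to Step 2109 without cutting power
        return True        # Step 2108: power may be cut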
When the loop processing has not been finished for all the relevant resources, the process returns to Step 2106 to execute processing for the remaining relevant resources (2109).
When power of one or more resources is cut off as a result of finishing the loop processing for all the relevant resources, the hypervisor 103 updates the server power control table 1702 to reflect the cutting of the power (2110).
Next, the server system 100 instructs the virtual storage system 613 allocated to the relevant virtual machine 602 to cut off its power (2111). The virtual storage system 613 which is to be powered off in response to the instruction is described as the relevant virtual storage system 613 in the description of FIG. 21. There are various instruction methods. For example, when the I/O channel 160 is a fibre channel, a method in which a "logout" message of the fibre channel protocol serves as the power cutting-off instruction may be used.
As in FIG. 10, the virtual storage system 613 allocated to the relevant virtual machine 602 is specified by referring to the virtual machine number 1001 and the virtual storage system number 1002 of the storage resources control table 663 (or the storage resources control table 1705).
The processing of the following Steps 2112 to 2116 is similar to that of Steps 1612 to 1616 of FIG. 16, and thus description thereof will be omitted.
When power of one or more resources is cut off as a result of finishing the loop processing of Steps 2113 to 2116 for all the relevant resources, the storage hypervisor 612 reports the cutting of the power of the resources to the server system 100 (2117).
Next, to reflect the reported cutting of power, the server system 100 updates the storage power control table 1706 (2118). Further, to reflect the reported cutting of power, the storage hypervisor 612 updates the storage power control table 664.
Thus, the shutdown processing of the virtual machine 602 is finished.
Next, a third embodiment of this invention will be described. Differences of the third embodiment from the first embodiment will mainly be described below. Thus, points of the third embodiment not described are similar to those of the first embodiment.
FIG. 22A is a functional block diagram of a computer system according to the third embodiment of this invention.
According to the first embodiment, the control terminal 150 holds the information for controlling the entire computer system, and controls the entire computer system. According to the second embodiment, one of the server systems 100 holds the information for controlling the entire computer system, and controls the entire computer system. According to the third embodiment, however, the storage system 120 holds the information for controlling the entire computer system, and controls the entire computer system.
Thus, the computer system of the third embodiment includes no control terminal 150, unlike the first embodiment. Instead, a console terminal 2201 is connected to the storage system 120. The console terminal 2201 is a computer for operating the storage system 120.
The server system 100, the I/O channel 160, and the network 170 of the third embodiment are similar to those of the first embodiment, and thus description thereof will be omitted.
FIG. 22B is a functional block diagram of the storage system 120 according to the third embodiment of this invention.
The storage system 120 of the third embodiment is similar to that of the first embodiment except for the tables and programs included in the storage hypervisor 612.
The storage hypervisor 612 of the third embodiment includes a virtual disk control table 2202, a disk address translation table 2203, a storage resources control table 2204, a server resources control table 2205, a storage power control table 2206, a server power control table 2207, a virtual machine control program 2208, and one or more virtual disks 665. The virtual disk control table 2202, the storage resources control table 2204, the server resources control table 2205, the storage power control table 2206, and the server power control table 2207 hold information for managing the entire computer system. These tables are respectively similar to the virtual disk control table 622, the storage resources control table 621, the server resources control table 623, the storage power control table 625, and the server power control table 624 of the first embodiment, and thus description thereof will be omitted.
The hardware configuration of the computer system of the third embodiment is similar to that of the computer system of the first embodiment shown in FIGS. 1A, 1B, 1C, 1D, and 1E, except that the console terminal 2201 is disposed in place of the control terminal 150. Thus, description of the hardware configuration of the computer system according to the third embodiment will be omitted.
FIG. 23 is a flowchart of resource allocation setting processing executed according to the third embodiment of this invention.
The resource allocation setting processing executed according to the third embodiment is similar to that executed according to the first embodiment shown in FIG. 13 except for some Steps. Detailed description of the points of the processing of FIG. 23 similar to those of the processing of FIG. 13 will be omitted.
Steps 2301 to 2306 of FIG. 23 respectively correspond to Steps 1301, 1305, 1306, 1302, 1303, and 1307 of FIG. 13.
The computer system of the third embodiment includes no control terminal 150. Accordingly, in Step 2301, the user operates the console terminal 2201 to set the physical resources to be allocated to the virtual machine 602 and the virtual storage system 613. In Step 2302, the console terminal 2201 transmits the set contents to the storage system 120. In Step 2303, the storage system 120 generates a virtual storage system 613. In Step 2304, the storage system 120 transmits the contents set for the virtual machine 602 to the server system 100. In Step 2305, the server system 100 generates a virtual machine 602. In Step 2306, the server system 100 reports setting completion to the storage system 120.
FIG. 24 is a flowchart of shutdown processing of the virtual machine 602 executed according to the third embodiment of this invention.
The processing of FIG. 24 is executed when the user cuts off power of one of the virtual machines 602. In the description of FIG. 24, the virtual machine 602 which is to be powered off by the user is described as the relevant virtual machine 602.
First, the user operates the console terminal 2201 to instruct the OS 603 operating on the relevant virtual machine 602 to shut down (2401). The shutdown instruction reaches the server system 100 via the storage system 120 and the network 170.
Steps 2402 to 2408 are respectively similar to Steps 1602 to 1608 of FIG. 16, and thus description thereof will be omitted.
When power of one or more resources is cut off as a result of finishing the loop processing of Steps 2405 to 2408 for all the relevant resources, the hypervisor 103 updates the server power control table 651 to reflect the cutting of the power (2409).
Then, the hypervisor 103 reports the executed cutting of power to the storage system 120 (2410). The storage system 120 updates the server power control table 2207 in response to the report.
Next, the storage system 120 instructs the virtual storage system 613 allocated to the relevant virtual machine 602 to cut off its power (2411). Specifically, the storage hypervisor 612 refers to the virtual machine number 1001 and the virtual storage system number 1002 of the storage resources control table 2204 to specify the virtual storage system 613 allocated to the relevant virtual machine 602. The virtual storage system 613 specified in Step 2411 is the virtual storage system 613 which is to be powered off by the processing described below, and is described as the relevant virtual storage system 613 in the description of FIG. 24.
Next, the storage hypervisor 612 specifies the resources allocated to the relevant virtual storage system 613 (2412). This processing is similar to that of Step 1612 of FIG. 16.
Next, the storage hypervisor 612 executes loop processing (2413 to 2417) for the resources specified in Step 2412. The resources specified in Step 2412 are described as the relevant resources.
In Step 2414, the storage hypervisor 612 determines whether the relevant resource has also been allocated to a virtual storage system 613 other than the relevant virtual storage system 613. This determination is executed by the same method as that of Step 1614.
If it is determined in Step 2414 that the relevant resource has also been allocated to a virtual storage system 613 other than the relevant virtual storage system 613, the relevant resource is still used by one of the virtual storage systems 613. Accordingly, the power of the relevant resource cannot be cut off. In this case, the process proceeds to Step 2417 without cutting off the power of the relevant resource.
On the other hand, if it is determined in Step 2414 that the relevant resource has not been allocated to any virtual storage system 613 other than the relevant virtual storage system 613, then after the relevant virtual storage system 613 shuts down, the relevant resource is not used by any virtual storage system 613. Thus, power of the relevant resource can be cut off. However, according to the third embodiment, supplying/cutting of power of the server system 100 and the storage system 120 is controlled by the CPU 122 or the like which executes the storage hypervisor 612 of the storage system 120. Accordingly, if power of all the CPUs 122 and the like of the storage system 120 were cut off, it would become impossible to turn those CPUs on again. Thus, when the relevant resource is the only currently operating CPU 122 or the like, its power cannot be cut off.
Thus, if it is determined in Step 2414 that the relevant resource has not been allocated to any virtual storage system 613 other than the relevant virtual storage system 613, the storage hypervisor 612 determines whether the relevant resource is a CPU 122 or 133 and whether the number of currently operating CPUs 122 or the like is one (2415).
If it is determined in Step 2415 that the relevant resource is a CPU 122 or 133 and the number of currently operating CPUs 122 or the like is one, the relevant resource is the only currently operating CPU 122 or the like. In this case, the process proceeds to Step 2417 without cutting off power of the resource.
If it is determined in Step 2415 that the relevant resource is neither a CPU 122 nor a CPU 133, or that the number of currently operating CPUs 122 or the like is not one, the relevant resource is not the only currently operating CPU 122 or the like. In this case, the storage hypervisor 612 cuts off power of the relevant resource (2416).
When the storage system 120 includes a CPU (not shown) which is not a power control target of the storage hypervisor 612, and this CPU is used only for controlling power of each resource, it is not necessary to execute the determination of Step 2415. In this case, when "no" is determined in Step 2414, Step 2416 is executed.
When the loop processing has not been finished for all the relevant resources, the process returns to Step 2414 to execute processing for the remaining relevant resources (2417).
When power of one or more resources is cut off as a result of finishing the loop processing for all the relevant resources, the storage hypervisor 612 updates the storage power control table 2206 to reflect the cutting of the power (2418).
Thus, the shutdown processing of the virtual machine 602 is finished.
According to the third embodiment, the instruction of supplying/cutting of power is transmitted via the network 170. However, the instruction may be transmitted via the I/O channel 160.
Next, a fourth embodiment of this invention will be described. Differences of the fourth embodiment from the first embodiment will mainly be described below. Thus, points of the fourth embodiment not described are similar to those of the first embodiment.
FIG. 25 is a block diagram showing a hardware configuration of the computer system according to the fourth embodiment of this invention.
The hardware configuration of the computer system of the fourth embodiment is similar to that of the first embodiment shown in FIG. 1, except that the server system 100 and the storage system 120 are interconnected via the I/O channels 160 and the I/O channel switches 2501. Thus, description of components other than the I/O channels 160 and the I/O channel switches 2501 will be omitted.
An I/O channel 160A connects an I/O adaptor (0) 106A with an I/O channel switch (0) 2501A. An I/O channel 160B connects an I/O adaptor (1) 106B with an I/O channel switch (1) 2501B. An I/O channel 160C connects an I/O adaptor (2) 106C with the I/O channel switch (0) 2501A. An I/O channel 160D connects an I/O adaptor (3) 106D with the I/O channel switch (1) 2501B.
An I/O channel 160E connects the I/O channel switch (0) 2501A with a channel adaptor (0) 129A. An I/O channel 160F connects the I/O channel switch (0) 2501A with a channel adaptor (2) 129C. An I/O channel 160G connects the I/O channel switch (1) 2501B with a channel adaptor (1) 129B. An I/O channel 160H connects the I/O channel switch (1) 2501B with a channel adaptor (3) 129D.
The I/O channel switches (0) 2501A and (1) 2501B are connected to the control terminal 150 via the network 170.
The functional block diagram of the fourth embodiment is similar to that of the first embodiment except for the interconnection between the server system 100 and the storage system 120 via the I/O channels 160 and the I/O channel switches 2501. The power supply system of each apparatus of the fourth embodiment, the tables held by each apparatus, and the processing executed by each apparatus are similar to those of the first embodiment except as described below. Description of the components of the fourth embodiment similar to those of the first embodiment will be omitted.
FIG. 26 is a block diagram showing a hardware configuration of the I/O channel switch 2501 according to the fourth embodiment of this invention.
The I/O channel switch 2501 includes a power control unit 2601, a switch control unit 2602, a LAN adaptor (6) 2604, a crossbar switch 2605, and a plurality of ports 2606.
As shown in FIG. 27, the power control unit 2601 controls power supply to each physical resource in the I/O channel switch 2501.
As shown in FIG. 28, the switch control unit 2602 controls the connections between the ports 2606 made by the crossbar switch 2605. Specifically, the switch control unit 2602 sets, for each combination of ports 2606, whether communication is permitted or inhibited. The switch control unit 2602 holds information indicating the set combinations of the ports 2606 as a routing table 2603.
The LAN adaptor (6) 2604 is an interface for communication with an apparatus such as the control terminal 150 via the network 170.
The crossbar switch 2605 switches the connections between the ports 2606. Specifically, the crossbar switch 2605 permits/inhibits communication between the ports 2606 according to the setting of the switch control unit 2602.
Each port 2606 is connected to the I/O channel 160, and communicates with the server system 100 or the storage system 120 via the I/O channel 160. FIG. 26 shows four ports 2606A, 2606B, 2606C, and 2606D. However, the I/O channel switch 2501 may include more ports 2606.
FIG. 27 is an explanatory diagram of a power supply system of the I/O channel switch 2501 according to the fourth embodiment of this invention.
The power supply system of the I/O channel switch 2501 includes an AC power source 2701, a system power switch 2702, the power control unit 2601, and the physical resources (i.e., each port 2606, the crossbar switch 2605, the switch control unit 2602, and the LAN adaptor 2604) which receive power supplies.
The AC power source 2701 is the source of the power supplied to the I/O channel switch 2501. The AC power source 2701 may be any type of AC power source, as in the case of the AC power source 201 shown in FIG. 2.
The system power switch 2702 is a switch similar to the system power switch 202 of FIG. 2.
The power control unit 2601 receives the power supplied from the AC power source 2701 via the system power switch 2702 and controls the supply of the power to each physical resource.
The power supply system of the I/O channel switch 2501 is not divided into a plurality of power supply areas as shown in FIG. 2. The power control unit 2601 can control supplying/cutting of power for each individual port 2606.
FIG. 28 is an explanatory diagram of the routing table 2603 held by the I/O channel switch 2501 according to the fourth embodiment of this invention.
The routing table 2603 includes an input port number 2801, an output port number 2802, and information indicating communication permission between the ports 2606. The input port number 2801 and the output port number 2802 are identifiers assigned to the ports 2606. In the routing table 2603, the information indicating communication permission is represented by "o" when communication is permitted, and by "x" when communication is inhibited.
For example, according to the routing table 2603 shown in FIG. 28, data input to a port (0) 2606 can be output from a port (1) 2606, while the data input to the port (0) 2606 cannot be output from a port (n) 2606. Thus, in this case, an apparatus connected to the port (0) 2606 can transmit data to an apparatus connected to the port (1) 2606, but cannot transmit data to an apparatus connected to the port (n) 2606. The ports (0) 2606, (1) 2606, and (n) 2606 are the ports respectively having the identifiers "0", "1", and "n".
At the time of booting the computer system of the fourth embodiment, only "o" is set in the routing table 2603, and "x" is not set. In other words, at the time of booting, communication is permitted among all the ports 2606. As described below referring to FIGS. 31 and 32, after the creation of the storage resources control table 621, permission/inhibition of the communication between the ports 2606 is set in response to an instruction by the user.
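One way to picture the routing table 2603 is as a permission matrix keyed by input and output port numbers. The following is a minimal sketch under an assumed dictionary layout; boot_initialize() and may_forward() are hypothetical names.

    # Hypothetical sketch of the routing table 2603: True stands for "o", False for "x".
    def boot_initialize(port_count):
        # At boot, communication is permitted among all ports (only "o" is set).
        return {(i, j): True for i in range(port_count) for j in range(port_count)}

    def may_forward(routing_table, input_port, output_port):
        return routing_table.get((input_port, output_port), False)

    routing_table = boot_initialize(16)
    # After the table of FIG. 29 is created, the user-permitted pairs stay "o"
    # and the remaining pairs are set to "x", e.g. routing_table[(2, 10)] stays True.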
FIG. 29 is an explanatory diagram of the storage resources control table 621 according to the fourth embodiment of this invention.
The storage resources control table 621 of the fourth embodiment has a configuration in which columns regarding the ports 2606 (i.e., a storage port 2901 and a server port 2902) are added to the storage resources control table 621 of the first embodiment shown in FIG. 10.
The virtual machine number 1001, the virtual storage system number 1002, the virtual disk number 1003, the disk cache capacity 1004, the CPU 1005 in charge, the internal bandwidth 1006, the virtual channel adaptor 1007, the channel adaptor 1008, the I/O adaptor 1009, and the virtual I/O adaptor 1010 in the storage resources control table 621 of the fourth embodiment are similar to those of FIG. 10, and thus description thereof will be omitted.
The storage port 2901 indicates the identifier of the port 2606 connected to the channel adaptor 129 indicated by the channel adaptor 1008.
The server port 2902 indicates the identifier of the port 2606 connected to the I/O adaptor 106 indicated by the I/O adaptor 1009.
For example, in FIG. 29, "10" and "2" are registered as the storage port 2901 and the server port 2902 corresponding to the value "0" of the channel adaptor 1008 and the value "0" of the I/O adaptor 1009. This means that the channel adaptor (0) 129A is connected to the port (10) 2606, the I/O adaptor (0) 106A is connected to the port (2) 2606, and communication between the ports (10) 2606 and (2) 2606 is permitted.
FIG. 30 is a flowchart of boot processing of the virtual machine 602 executed according to the fourth embodiment of this invention.
In FIG. 30, Steps 3001 and 3002 are respectively similar to Steps 1401 and 1402 of FIG. 14, and thus description thereof will be omitted.
After the end of Step 3001 or 3002, the user determines whether the system power switch 2702 of the I/O channel switch 2501 has been turned on (3003).
If it is determined in Step 3003 that the system power switch 2702 has been turned on, the process proceeds to Step 3005. On the other hand, if it is determined in Step 3003 that the system power switch 2702 has not been turned on, the user turns on the system power switch 2702 (3004). Then, the process proceeds to Step 3005.
Steps 3005 to 3017 of FIG. 30 are respectively similar to Steps 1403 to 1415 of FIG. 14, and thus description thereof will be omitted.
FIG. 31 is a flowchart of processing executed at the time of cable connection according to the fourth embodiment of this invention.
First, the server system 100, the storage system 120, and the I/O channel switch 2501 detect connection of a cable (i.e., an I/O channel 160) (3101).
Then, the server system 100, the storage system 120, and the I/O channel switch 2501 exchange physical addresses with the apparatus communicable via the detected cable (i.e., the apparatus connected via the detected cable) (3102). In the example of FIG. 25, the server system 100 obtains a physical address of the port 2606 connected to the I/O adaptor 106. The I/O channel switch 2501 obtains physical addresses of the I/O adaptor 106 and the channel adaptor 129 that are connected to its ports 2606. The storage system 120 obtains a physical address of the port 2606 connected to the channel adaptor 129.
Any physical addresses may be used for the exchange in Step 3102 as long as the ports to which the cable is connected are uniquely specified, as described above referring to FIG. 15 (e.g., a WWN or a MAC address).
Next, the server system 100, the storage system 120, and the I/O channel switch 2501 transmit the cable connection states obtained in Step 3102 to the control terminal 150 via the network 170 (3103). A cable connection state means a set of physical addresses of an I/O adaptor 106 and a port 2606 connected to each other, or a set of physical addresses of a port 2606 and a channel adaptor 129 connected to each other.
Thus, the processing executed at the time of cable connection according to the fourth embodiment is finished. By this processing, the correlation between the channel adaptor 1008 and the storage port 2901 and the correlation between the server port 2902 and the I/O adaptor 1009 in the storage resources control table 621 of FIG. 29 are discovered.
FIG. 32 is a flowchart of processing executed to create the routing table 2603 according to the fourth embodiment of this invention.
The switch control unit 2602 creates the routing table 2603 based on the storage resources control table 621 of FIG. 29, which is created by the processing of FIG. 31 (3202). Specifically, the user may refer to the storage resources control table 621 and input, to the control terminal 150, the combinations of ports 2606 that are permitted to communicate. The control terminal 150 transmits the information input by the user to the I/O channel switch 2501 via the network 170.
As described above referring to FIG. 28, the routing table 2603 permits communication among all the ports 2606 in the initial state. In Step 3202, the switch control unit 2602 inhibits communication for the sets of ports 2606 other than those permitted to communicate by the user.
Thus, the creation of the routing table 2603 is finished.
FIG. 33 is a flowchart of shutdown processing of the virtual machine 602 executed according to the fourth embodiment of this invention.
According to the fourth embodiment, when the virtual machine 602 shuts down, the same processing as that of the first embodiment shown in FIG. 16 is executed. Furthermore, according to the fourth embodiment, to cut power of the ports 2606 of the I/O channel switch 2501, the processing of FIG. 33 is executed. As in the case of FIG. 16, the virtual machine 602 which is to be powered off by the user is described as the relevant virtual machine 602.
First, the control terminal 150 notifies the I/O channel switch 2501 of the shutdown of the relevant virtual machine 602 (3301).
Then, the switch control unit 2602 specifies the ports 2606 used by the relevant virtual machine 602 (i.e., the ports 2606 allocated to the relevant virtual machine 602) based on the storage resources control table 621 and the routing table 2603 (3302).
Next, the switch control unit 2602 executes loop processing (3303 to 3306) for each port specified in Step 3302. In this case, each port specified in Step 3302 is described as the relevant port 2606.
In Step 3304, the switch control unit 2602 determines whether the relevant port 2606 is used by another virtual machine 602. In other words, the switch control unit 2602 determines whether the relevant port 2606 has been allocated to a virtual machine 602 other than the relevant virtual machine 602. In this case, the virtual machine number 1001, the storage port 2901, and the server port 2902 of the storage resources control table 621 are referred to.
In the example of FIG. 29, the port (10) 2606 is allocated to the virtual machines (0) 602A and (1) 602B. When the virtual machine (0) 602A is the relevant virtual machine 602 and the port (10) 2606 is the relevant port 2606, it is therefore determined that the relevant port 2606 has also been allocated to a virtual machine 602 other than the relevant virtual machine 602.
If it is determined inStep3304 that the relevant port2606 has also been allocated to the virtual machine602 other than the relevant virtual machine602, the relevant port2606 is still used by one of the virtual machines602 even after the relevant virtual machine602 shuts down. Accordingly, the power of the relevant port2606 cannot be cut off. In this case, the process proceeds to Step3306 without cutting off the power of the relevant port2606.
On the other hand, if it is determined inStep3304 that the relevant port2606 has not been allocated to the virtual machine other than the relevant virtual machine602, after the relevant virtual machine602 shuts down, the relevant port2606 is not used by any one of the virtual machines602. Thus, theswitch control unit2602 cuts off power of the relevant port2606 (3305).
If the loop processing has not been finished for all the relevant ports2606, the process returns to Step3304 to execute processing for remaining relevant ports2606 (3306).
After an end of the loop processing for all the relevant ports2606, the processing ofFIG. 33 is finished.
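A sketch of the loop of Steps 3303 to 3306 follows, assuming the storage resources control table is available as a mapping from virtual machine numbers to allocated port numbers; power_off_port and the table layout are illustrative assumptions.

```python
def shutdown_ports(resources_table, relevant_vm, power_off_port):
    """FIG. 33: cut power of the I/O channel switch ports used only by
    the shut-down virtual machine.
    resources_table: dict mapping virtual machine number -> set of port numbers.
    """
    relevant_ports = resources_table.get(relevant_vm, set())
    for port in relevant_ports:                      # Steps 3303 to 3306
        shared = any(port in ports
                     for vm, ports in resources_table.items()
                     if vm != relevant_vm)           # Step 3304
        if not shared:
            power_off_port(port)                     # Step 3305

# Example from FIG. 29: port 10 is shared by virtual machines 0 and 1,
# so shutting down virtual machine 0 leaves port 10 powered on.
shutdown_ports({0: {10, 11}, 1: {10}}, relevant_vm=0,
               power_off_port=lambda p: print(f"port {p} powered off"))
```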
Next, a fifth embodiment of this invention will be described. Differences of the fifth embodiment from the first embodiment will mainly be described below. Thus, points of the fifth embodiment not described below are similar to those of the first embodiment.
According to the first embodiment, the server system 100 executes the application program which uses the storage system 120. On the other hand, according to the fifth embodiment, a client connected to a network 170 executes an application program which uses a storage system. In this case, a server system operates as a file server which provides files to the client.
FIG. 34 is a functional block diagram of a computer system according to the fifth embodiment of this invention.
The computer system of the fifth embodiment includes clients (0) 3401A and (1) 3401B, file server systems (0) 3403A and (1) 3403B, a storage system 3405, and a control terminal 150. The configuration of the computer system is similar to that of the first embodiment except for the connection of the clients (0) 3401A and (1) 3401B to the network 170. The file server system 3403 corresponds to the server system 100 of the first embodiment. The fifth embodiment will be described below in detail.
Each of the clients (0) 3401A and (1) 3401B is a computer which includes a CPU (not shown), a memory (not shown), and a LAN adaptor (not shown). The memory of each client 3401 stores a program (not shown) for implementing a hypervisor in addition to an application program (not shown). By executing this program, virtual clients (0) 3402A and (1) 3402B are generated in the client (0) 3401A, and virtual clients (2) 3402C and (3) 3402D are generated in the client (1) 3401B.
The file server systems (0) 3403A and (1) 3403B are computers which provide files to the clients 3401 via the network 170. Hardware configurations of the file server systems (0) 3403A and (1) 3403B are respectively similar to those of the server systems (0) 100A and (1) 100B of the first embodiment, and thus description thereof will be omitted.
Functional block diagrams of the file server systems (0) 3403A and (1) 3403B are also similar to those of the server systems (0) 100A and (1) 100B of the first embodiment as shown in FIG. 6B, and thus detailed description thereof will be omitted. Each file server system 3403 includes a hypervisor 103 for implementing a virtual file server system 3404. The virtual file server systems (0) 3404A to (3) 3404D of FIG. 34 respectively correspond to the virtual machines (0) 602A to (3) 602D of FIG. 6B. An OS 603 operates in each virtual file server system 3404.
However, a server resources control table 653 of the fifth embodiment is different from that of the first embodiment, as shown in FIG. 35. Shutdown processing of the virtual file server system executed according to the fifth embodiment is different from the shutdown processing of the virtual machine executed according to the first embodiment, as shown in FIG. 36. Additionally, the virtual file server system 3404 of the fifth embodiment includes a virtual LAN adaptor (not shown).
Referring to FIGS. 1C to 1E, a hardware configuration of the storage system 3405 is similar to that of the storage system 120 of the first embodiment, and thus description thereof will be omitted. A functional block diagram of the storage system 3405 is also similar to that of the storage system 120 of the first embodiment as shown in FIG. 6C, and thus description thereof will be omitted. However, the storage system 3405 of the fifth embodiment includes four virtual storage systems, i.e., virtual storage systems (0) 3406A to (3) 3406D.
In the example of FIG. 34, the virtual file server system (0) 3404A is allocated to the virtual client (0) 3402A, and the virtual storage system (0) 3406A is allocated to the virtual file server system (0) 3404A. In other words, the virtual client (0) 3402A issues a file writing or reading request to the virtual file server system (0) 3404A. The virtual file server system (0) 3404A executes data writing/reading to/from the virtual storage system (0) 3406A in response to the request from the virtual client (0) 3402A. Then, the virtual file server system (0) 3404A returns a result of the executed writing/reading to the virtual client (0) 3402A.
Similarly, in the example of FIG. 34, the virtual file server system (1) 3404B is allocated to the virtual client (1) 3402B, and the virtual storage system (1) 3406B is allocated to the virtual file server system (1) 3404B. The virtual file server system (2) 3404C is allocated to the virtual client (2) 3402C, and the virtual storage system (2) 3406C is allocated to the virtual file server system (2) 3404C. The virtual file server system (3) 3404D is allocated to the virtual client (3) 3402D, and the virtual storage system (3) 3406D is allocated to the virtual file server system (3) 3404D.
The control terminal 150 is similar to that of the first embodiment, and thus description thereof will be omitted. However, the server resources control table 623 included in the virtual machine control program 151 is different from that of the first embodiment, as shown in FIG. 35.
FIG. 35 is an explanatory diagram of the server resources control table 623 according to the fifth embodiment of this invention.
A virtual machine number 701, a CPU utilization rate 702, a memory capacity 703, a virtual I/O adaptor number 704, and an I/O adaptor number 705 in the server resources control table 623 of the fifth embodiment are similar to those of the first embodiment as shown in FIG. 7, and thus description thereof will be omitted. However, an identifier of the virtual file server system 3404 is registered in the virtual machine number 701.
The server resources control table 623 of the fifth embodiment has a configuration in which two columns, a virtual LAN adaptor number 3501 and a LAN adaptor number 3502, are added to the server resources control table 623 of the first embodiment. An identifier of the virtual LAN adaptor included in each virtual file server system 3404 is registered in the virtual LAN adaptor number 3501. An identifier of the LAN adaptor 105 allocated to each virtual LAN adaptor is registered in the LAN adaptor number 3502.
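A rough sketch of the extended table as a record type follows; the field names and example values are illustrative assumptions, not taken from FIG. 35.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ServerResourcesEntry:
    """One row of the server resources control table 623 (FIG. 35)."""
    virtual_machine_number: int            # identifier of a virtual file server system 3404
    cpu_utilization_rate: float            # column 702
    memory_capacity_gb: int                # column 703
    virtual_io_adaptor_numbers: List[int]  # column 704
    io_adaptor_numbers: List[int]          # column 705
    virtual_lan_adaptor_numbers: List[int] # column 3501, added in the fifth embodiment
    lan_adaptor_numbers: List[int]         # column 3502, added in the fifth embodiment

# Example row: virtual file server system 0 uses LAN adaptor 0
# through its virtual LAN adaptor 0.
row = ServerResourcesEntry(0, 0.30, 2, [0], [0], [0], [0])
```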
FIG. 36 is a flowchart of shutdown processing of the virtual file server system 3404 executed according to the fifth embodiment of this invention.
The processing of FIG. 36 is executed when power of one of the virtual file server systems 3404 is cut off. In the description of FIG. 36, the virtual file server system 3404 which is to be powered off by the user is described as the relevant virtual file server system 3404.
First, the user operates the control terminal 150 to instruct the shutting-down to the OS 603 operating in the relevant virtual file server system 3404 (3601).
Next, the relevant virtual file server system 3404 notifies the shutting-down to the virtual client 3402 which uses the relevant virtual file server system 3404 (3602).
Steps 3603 to 3609 are respectively similar to Steps 1602 to 1608 of FIG. 16, and thus description thereof will be omitted. “Virtual machine 602” in the description of FIG. 16 corresponds to “virtual file server system 3404” in FIG. 36.
Next, the hypervisor 103 updates the server power control table 651 to reflect the executed cutting of the power (3610).
Next, the hypervisor 103 reports the cutting of the power to the control terminal 150 (3611). The control terminal 150 updates the server power control table 624 to reflect the reported cutting of the power.
Next, Steps 3612 to 3617 are respectively similar to Steps 1611 to 1616 of FIG. 16, and thus description thereof will be omitted.
Then, the storage hypervisor 612 updates the storage power control table 664 to reflect the executed cutting of the power (3618).
Thus, the shutdown processing of the virtual file server system 3404 is finished.
Next, a sixth embodiment of this invention will be described.
In a computer system which must have high access performance or high fault tolerance, redundancy may be given to resources constituting the system. Depending on the situation in which the computer system is used, reduction of power consumption may take precedence over ensuring of performance or fault tolerance. In such a case, maintenance of redundant resources impedes the reduction of power consumption. A system which reduces power consumption by controlling the redundancy of resources based on required performance or fault tolerance according to the sixth embodiment will be described below.
The configuration of the first embodiment described above referring to FIGS. 1 to 16 is applied to the sixth embodiment. Differences of the sixth embodiment from the first embodiment will be described below.
FIG. 37 is an explanatory diagram of a case where two virtual machines 602 operate in the computer system according to the sixth embodiment of this invention.
In FIG. 37, sections unnecessary for the explanation (e.g., the control terminal 150) are omitted.
In the server system (0) 100A of FIG. 37, the virtual machines (0) 602A and (1) 602B are fully running (full on).
The virtual machine (0) 602A uses a virtual storage system (0) 613A. On the other hand, the virtual machine (1) 602B uses a virtual storage system (1) 613B.
There are two access paths from the virtual machine (0) 602A to the virtual storage system (0) 613A, i.e., a path from an I/O adaptor (0) 106A through an I/O channel 160A to reach a channel adaptor (0) 129A of a channel board (0) 121A, and a path from an I/O adaptor (1) 106B through an I/O channel 160B to reach a channel adaptor (2) 129C of a channel board (1) 121B. Those two paths are also used for reaching the virtual storage system (1) 613B from the virtual machine (1) 602B.
In FIG. 37, an area surrounded with a solid-line curve indicates resources allocated to the virtual machine (0) 602A. An area surrounded with a dotted-line curve indicates resources allocated to the virtual machine (1) 602B.
The sixth embodiment will be described by way of an example in which redundancy is given to the resources constituting the access paths.
For example, when one of the virtual machines 602 uses one of the access paths, the other virtual machine 602 uses the other access path, so that concentration of loads is prevented. When a fault occurs in one of the two access paths, the virtual machines 602 use the other access path, so that a system down can be prevented. Accordingly, high performance and high fault tolerance are realized by giving redundancy to the resources constituting the access paths.
FIG. 38 is an explanatory diagram of a case where one of the virtual machines 602 shuts down in the computer system according to the sixth embodiment of this invention.
For example, when the virtual machine (1) 602B shuts down, the shutdown processing shown in FIG. 16 is executed for the virtual machine (1) 602B. As a result, power of the physical resources used only by the virtual machine (1) 602B (i.e., the physical resources allocated only to the virtual machine (1) 602B) is cut off.
However, the two access paths are also used by the virtual machine (0) 602A. Thus, power of the physical resources constituting the two access paths is not cut off. To ensure performance and fault tolerance, the two access paths should preferably be maintained. In this case, however, the physical resources constituting the access paths continuously consume power.
FIG. 39 is an explanatory diagram of a case where the redundancy of the resources is released in the computer system according to the sixth embodiment of this invention.
FIG. 39 shows a state where the allocation to the virtual machine (0) 602A of one of the two access paths shown in FIG. 38 (the path from the I/O adaptor (1) 106B through the I/O channel 160B to the channel adaptor (2) 129C of the channel board (1) 121B) is released. As a result, power of at least the I/O adaptor (1) 106B and the channel board (1) 121B can be cut off (as long as they are not allocated to other virtual resources). Thus, power consumption of the computer system is reduced.
FIG. 40 is an explanatory diagram of processing executed to cut off power of redundant resources according to the sixth embodiment of this invention.
First, the control terminal 150 determines whether a virtual resource of a processing target (e.g., a virtual machine 602 or a virtual storage system 613) has been set to a “power saving priority mode” (4001). This determination is made by referring to the tables described below with reference to FIGS. 41 and 42.
If it is determined in Step 4001 that the virtual resource has not been set to the power saving priority mode, the resource redundancy cannot be released. In this case, the process is finished without cutting off power of the resources.
On the other hand, if it is determined in Step 4001 that the virtual resource has been set to the power saving priority mode, the control terminal 150 determines whether redundant resources are present (4002). The redundant resources mean multiple physical resources allocated to a single virtual resource (e.g., a virtual machine 602), such as the physical resources constituting the two access paths shown in FIG. 38.
If it is determined in Step 4002 that no redundant resource is present, the resource redundancy cannot be released. Thus, the process is finished without cutting off power of the resources.
On the other hand, if it is determined in Step 4002 that redundant resources are present, the control terminal 150 updates the server resources control table 623 shown in FIG. 7 and the storage resources control table 622 shown in FIG. 10 to release the allocation of the redundant resources to the virtual resource of the processing target (4003).
Next, the control terminal 150 cuts off power of the redundant resources according to the tables updated in Step 4003 (4004).
For example, a case of releasing the allocation of the access path from the I/O adaptor (1) 106B through the I/O channel 160B to the channel adaptor (2) 129C of the channel board (1) 121B shown in FIGS. 38 and 39 will be described. In this case, in Step 4003, “1” is deleted from the I/O adaptor number 705 corresponding to a value “0” of the virtual machine number 701. Additionally, “2” is deleted from the channel adaptor 1008 corresponding to a value “0” of the virtual storage system number 1002. Then, in Step 4004, power of the I/O adaptor (1) 106B and the channel adaptor (2) 129C is cut off.
In the description of FIG. 40, the process is executed by the control terminal 150. However, similar processing may be executed by a hypervisor 103 or a storage hypervisor 612.
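A simplified sketch of the FIG. 40 flow follows. It assumes the mode and allocation tables are plain dictionaries, and it omits the check that a released physical resource is not allocated to another virtual resource; all names are illustrative.

```python
def release_redundant_resources(mode_table, alloc_table, power_off):
    """FIG. 40: release redundant resource allocations for virtual
    resources set to the power saving priority mode.
    mode_table: dict mapping virtual resource id -> "on"/"off" (FIGS. 41, 42).
    alloc_table: dict mapping virtual resource id -> list of physical resources.
    """
    for vres, mode in mode_table.items():
        if mode != "on":                     # Step 4001
            continue
        resources = alloc_table.get(vres, [])
        while len(resources) > 1:            # Step 4002: redundancy present
            redundant = resources.pop()      # Step 4003: release the allocation
            power_off(redundant)             # Step 4004

# Example from FIGS. 38 and 39: virtual machine 0 keeps one of its two
# access paths; the redundant path through I/O adaptor 1 is powered off.
release_redundant_resources(
    {"vm0": "on"},
    {"vm0": [("io_adaptor", 0), ("io_adaptor", 1)]},
    power_off=lambda r: print("power off", r))
```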
FIG. 41 is an explanatory diagram of a server system power saving mode table according to the sixth embodiment of this invention.
For example, the server system power saving mode table may be held by the control terminal 150. When the processing shown in FIG. 40 is executed by the hypervisor 103 or the storage hypervisor 612, the server system power saving mode table may be held by the hypervisor 103 or the storage hypervisor 612.
The server system power saving mode table includes at least two columns, a virtual machine number 4101 and a power saving priority mode 4102.
An identifier of a virtual machine 602 is registered in the virtual machine number 4101.
Information indicating a level of power consumption reduction of each virtual machine 602 is registered in the power saving priority mode 4102. Based on the level registered in the power saving priority mode 4102, it is determined whether each virtual machine 602 has been set to the power saving priority mode. For example, when the level registered in the power saving priority mode 4102 is equal to or higher than a predetermined value, it may be determined that the virtual machine 602 has been set to the power saving priority mode. If the virtual machine 602 has been set to the power saving priority mode, reduction of power consumption takes precedence over ensuring of resource redundancy.
In the example of FIG. 41, one of the two values “on” and “off” is registered as the power saving priority mode 4102. “On” indicates that the virtual machine 602 has been set to the power saving priority mode, and “off” indicates that the virtual machine 602 has not been set to the power saving priority mode. In the example of FIG. 41, the virtual machine (0) 602A has been set to the power saving priority mode.
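A small sketch of this determination follows, covering both the threshold form and the “on”/“off” form described above; the threshold value and the function name are illustrative assumptions.

```python
POWER_SAVING_THRESHOLD = 3  # illustrative "predetermined value"

def is_power_saving_priority(level) -> bool:
    """Decide whether a virtual resource is in the power saving priority
    mode from the value registered in column 4102 or 4202. The table may
    store either a numeric level or the two values "on"/"off"."""
    if level in ("on", "off"):
        return level == "on"
    return level >= POWER_SAVING_THRESHOLD

assert is_power_saving_priority("on")
assert not is_power_saving_priority(2)
```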
FIG. 42 is an explanatory diagram of a storage system power saving mode table according to the sixth embodiment of this invention.
For example, the storage system power saving mode table may be held by the control terminal 150. When the processing shown in FIG. 40 is executed by the hypervisor 103 or the storage hypervisor 612, the storage system power saving mode table may be held by the hypervisor 103 or the storage hypervisor 612.
The storage system power saving mode table includes at least two columns, a virtual storage system number 4201 and a power saving priority mode 4202.
An identifier of a virtual storage system 613 is registered in the virtual storage system number 4201.
Information indicating a level of power consumption reduction of each virtual storage system 613 is registered in the power saving priority mode 4202. As in the case of the power saving priority mode 4102, it is determined, based on the level registered in the power saving priority mode 4202, whether each virtual storage system 613 has been set to the power saving priority mode.
In the example of FIG. 42, one of the two values “on” and “off” is registered as the power saving priority mode 4202. “On” indicates that the virtual storage system 613 has been set to the power saving priority mode, and “off” indicates that the virtual storage system 613 has not been set to the power saving priority mode. In the example of FIG. 42, the virtual storage system (0) 613A has been set to the power saving priority mode.
FIG. 43 is an explanatory diagram of an input screen used for allocating resources according to the sixth embodiment of this invention.
FIG. 43 shows an input screen for setting the virtual machine (0) 602A as an example. For example, this input screen is displayed on a screen display (not shown).
In the example of FIG. 43, three CPUs 101, a disk cache 144 of 2 GB, a 20% internal bandwidth 1006, and three virtual disks 665 are allocated to the virtual machine (0) 602A. The virtual machine (0) 602A is set to the power saving priority mode. A user of the control terminal 150 can enter an arbitrary value in the input screen of FIG. 43.
The sixth embodiment has been described by taking the example of giving redundancy to the access paths. However, even when other resources are redundant, processing similar to that described above can be applied to the resources.
Next, a seventh embodiment of this invention will be described.
The configuration of the first embodiment described above referring to FIGS. 1 to 16 is applied to the seventh embodiment. Differences of the seventh embodiment from the first embodiment will be described below.
FIG. 44 is an explanatory diagram of processing executed to cut off power of a disk cache 144 according to the seventh embodiment of this invention.
First, a storage hypervisor 612 calculates a capacity necessary for the disk caches 144 (4401). Specifically, the storage hypervisor 612 refers to the storage resources control table 663 as shown in FIG. 10 to sum up the values of the disk cache capacities 1004 allocated to all the virtual storage systems 613.
Next, the storage hypervisor 612 determines whether power of a disk cache 144 can be cut off (4402). In other words, the storage hypervisor 612 determines whether there is a disk cache 144 which does not hinder the running of the virtual storage systems 613 even when its power is cut off. Specifically, when the capacity calculated in Step 4401 is smaller than the total capacity of all the disk caches 144 disposed in the storage system 120, the capacity difference is not allocated to any virtual storage system 613. Thus, even when power of the disk caches 144 equivalent to the capacity difference is cut off, the running of the virtual storage systems 613 is not hindered.
For example, when the capacity calculated in Step 4401 is 10 gigabytes (GB) and the total capacity of the disk caches 144 actually mounted is 16 GB, the running of the virtual storage systems 613 is not hindered even when power of 6 GB of the disk caches 144 is cut off. Accordingly, it is possible to cut off power of a maximum of 6 GB of the disk caches 144.
If it is determined in Step 4402 that power of the disk caches 144 cannot be cut off, the process is finished without cutting off the power of the disk caches 144.
On the other hand, if it is determined in Step 4402 that power of a disk cache 144 can be cut off, the storage hypervisor 612 cuts off the power of the disk cache 144 which does not hinder the running of the virtual storage systems 613 even when its power is cut off (4403).
For example, the power of the disk cache 144 may be cut off for each physical memory module. Alternatively, cutting of power may be permitted for each memory bank, or for each memory page. When power of 6 GB of memory is to be cut off as in the aforementioned example and power can only be cut off for each 4 GB memory module, power of only one memory module can be cut off, because the cache capacity of 10 GB cannot be ensured when power of two memory modules is cut off. Thus, when the seventh embodiment is applied, the power of the disk cache 144 should preferably be cut off in as small a unit as possible (e.g., each page).
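The following sketch reproduces this computation under the assumption that power can be controlled per memory module of a known size; the function name and table layout are illustrative.

```python
def modules_to_power_off(required_gb, module_sizes_gb):
    """FIG. 44: given the cache capacity required by all virtual storage
    systems (Step 4401) and the sizes of individually switchable memory
    modules, return how many modules can be powered off (Steps 4402, 4403)."""
    total = sum(module_sizes_gb)
    releasable = total - required_gb      # capacity not allocated to any system
    if releasable <= 0:
        return 0                          # Step 4402: nothing can be cut off
    count = 0
    for size in sorted(module_sizes_gb):
        if releasable >= size:
            releasable -= size
            count += 1
    return count

# Worked example from the text: 10 GB required, four 4 GB modules mounted
# (16 GB total). Only one module can be cut; cutting two would leave 8 GB.
assert modules_to_power_off(10, [4, 4, 4, 4]) == 1
```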
In the example of FIG. 1E, when power of two disk caches 144 is cut off, it is not preferable to cut off power of two disk caches 144 (e.g., the disk caches (0) 144A and (1) 144B) in a single disk cache board 142. This is because the remaining disk caches (e.g., the disk caches (2) 144C and (3) 144D) would then be in a single disk cache board 142 (e.g., the disk cache board (1) 142B), and all cache data would be lost by a fault of that disk cache board 142.
Accordingly, the power of the disk caches 144 should preferably be cut off so that a plurality of disk cache boards 142 remain in use at any time. In the above-mentioned case, for example, it is advisable to cut off power of the disk caches (0) 144A and (2) 144C.
According to the seventh embodiment, it is possible to reduce power consumption by cutting off the power of a disk cache 144 which does not hinder the running of the virtual storage systems 613 even when its power is cut off.
Next, an eighth embodiment of this invention will be described.
The configuration of the first embodiment described above referring to FIGS. 1 to 16 is applied to the eighth embodiment except for the differences described below. Differences of the eighth embodiment from the first embodiment will be described below.
The computer system of the first embodiment includes one storage system 120, and two server systems 100 are connected to the one storage system 120. On the other hand, a computer system of the eighth embodiment includes two storage systems 120, and one server system 100 is connected to each storage system 120. Remote copying is carried out between the two storage systems 120.
FIG. 45 is a functional block diagram of the computer system according to the eighth embodiment of this invention.
The computer system of the eighth embodiment includes server systems 100A and 100B, and storage systems 120A and 120B.
Hardware configurations and functional block diagrams of the server systems 100A and 100B are similar to those of the server system 100 of the first embodiment, and thus detailed description thereof will be omitted. The server system 100A includes virtual machines (0) 602A and (1) 602B. The server system 100B includes virtual machines (2) 602C and (3) 602D.
Hardware configurations and functional block diagrams of the storage systems 120A and 120B are similar to those of the storage system 120 of the first embodiment, and thus detailed description thereof will be omitted.
The storage system 120A includes virtual storage systems (0) 613A and (1) 613B. The virtual storage system (0) 613A includes virtual disks (0) 665A and (1) 665B. The virtual storage system (1) 613B includes virtual disks (16) 665C and (17) 665D.
The storage system 120B includes a virtual storage system (1′) 613C. The storage system 120B may include more virtual storage systems 613. The virtual storage system (1′) 613C includes virtual disks (16′) 665E and (17′) 665F.
According to the eighth embodiment, the virtual storage system (0) 613A is allocated to the virtual machine (0) 602A. In other words, the virtual machine (0) 602A uses the virtual storage system (0) 613A. Similarly, the virtual storage system (1) 613B is allocated to the virtual machine (1) 602B. The virtual storage system (1′) 613C is allocated to the virtual machine (2) 602C.
The storage systems 120A and 120B are installed in places geographically separate from each other. The virtual storage systems (1) 613B and (1′) 613C are interconnected via a remote network 4501. For example, the remote network 4501 is a wide-area communication network to which the Internet protocol (IP) is applied.
Remote copying is carried out between the virtual storage systems (1) 613B and (1′) 613C. The remote copying is a technology for copying data stored in a storage system 120 to another storage system to prevent a loss of data caused by a disaster or a system fault. More specifically, a copy of the data stored in the storage system 120 is transmitted to another storage system to be stored in the storage system 120 of the transmission destination.
In the example of FIG. 45, the virtual machine (1) 602B requests writing of data to the virtual disk (16) 665C or (17) 665D. The virtual storage system (1) 613B writes the data in response to the request, and transmits a copy of the data written in the virtual disks (16) 665C and (17) 665D to the virtual storage system (1′) 613C via the remote network 4501.
The virtual storage system (1′) 613C writes the data written in the virtual disk (16) 665C to the virtual disk (16′) 665E, and the data written in the virtual disk (17) 665D to the virtual disk (17′) 665F. As a result, the same data is stored in the virtual disks (16) 665C and (16′) 665E, and the same data is stored in the virtual disks (17) 665D and (17′) 665F.
A set of virtual disks 665 which store the same data as a result of the remote copying is described as a “pair”. In the description below, of the virtual disks 665 belonging to a pair, a virtual disk 665 which becomes a copying source of data is described as a “primary virtual disk 665”. A virtual storage system 613 which includes the primary virtual disk 665 is described as a “primary virtual storage system 613”. A virtual disk 665 which becomes a copying destination of data is described as a “secondary virtual disk 665”. A virtual storage system 613 which includes the secondary virtual disk 665 is described as a “secondary virtual storage system 613”.
In the example of FIG. 45, the virtual storage system (1) 613B is a primary virtual storage system 613. The virtual storage system (1′) 613C is a secondary virtual storage system 613. The virtual disks (16) 665C and (17) 665D are primary virtual disks 665. The virtual disks (16′) 665E and (17′) 665F are secondary virtual disks 665.
When data is written in a primary virtual disk 665, the written data may not be immediately copied to the secondary virtual disk 665. For example, when the remote network 4501 is congested, the data transmission for the copying may be executed after the congestion is resolved.
FIG. 46 is an explanatory diagram of a virtual disk control table 622 according to the eighth embodiment of this invention.
The virtual disk control table 622 holds information for managing the allocation of the virtual disks 665 to the virtual machines 602. Additionally, the virtual disk control table 622 of the eighth embodiment holds information for managing pairs of virtual disks 665.
Specifically, the virtual disk control table 622 includes seven columns: a virtual machine number 801, a virtual storage number 802, a logical unit number 803, a virtual disk number 804, a pair state 4601, a secondary virtual storage number 4602, and a secondary virtual disk number 4603.
Referring to FIG. 8, the virtual machine number 801, the virtual storage number 802, the logical unit number 803, and the virtual disk number 804 are similar to those of the first embodiment, and thus description thereof will be omitted.
Information regarding a pair to which the virtual disk 665 indicated by the virtual disk number 804 belongs is registered in the pair state 4601. Specifically, if the virtual disk 665 does not belong to a pair, “0” is registered in the pair state 4601 corresponding to the virtual disk 665. If the virtual disk 665 belongs to a pair and the virtual disk 665 is a primary virtual disk 665, “1” is registered in the pair state 4601 corresponding to the virtual disk 665. If the virtual disk 665 belongs to a pair and the virtual disk 665 is a secondary virtual disk 665, “2” is registered in the pair state 4601 corresponding to the virtual disk 665.
Information indicating the secondary virtual disk 665 belonging to the same pair as the virtual disk 665 indicated by the virtual disk number 804 is registered in the secondary virtual storage number 4602 and the secondary virtual disk number 4603. An identifier of the virtual storage system 613 including the secondary virtual disk 665 is registered in the secondary virtual storage number 4602, and an identifier of the secondary virtual disk 665 is registered in the secondary virtual disk number 4603. In FIG. 46, “none (n/a)” is shown in the secondary virtual storage number 4602 and the secondary virtual disk number 4603 corresponding to a virtual disk 665 not belonging to a pair.
In the example of FIG. 46, “0”, “n/a”, and “n/a” are registered in the pair state 4601, the secondary virtual storage number 4602, and the secondary virtual disk number 4603 corresponding to a value “121” of the virtual disk number 804. Those indicate that the virtual disk 665 having the identifier “121” does not belong to a pair.
“1”, “1′”, and “16′” are registered in the pair state 4601, the secondary virtual storage number 4602, and the secondary virtual disk number 4603 corresponding to a value “16” of the virtual disk number 804. Those indicate that the virtual disk (16) 665C belongs to a pair as a primary virtual disk 665, and that the secondary virtual disk 665 of the pair is the virtual disk (16′) 665E of the virtual storage system (1′) 613C.
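A sketch of one row of this table as a record type follows; the field names are illustrative, and the two example rows correspond to the entries just described.

```python
from enum import IntEnum
from dataclasses import dataclass
from typing import Optional

class PairState(IntEnum):
    """Values of the pair state column 4601 (FIG. 46)."""
    NONE = 0       # the virtual disk does not belong to a pair
    PRIMARY = 1    # copying source
    SECONDARY = 2  # copying destination

@dataclass
class VirtualDiskRow:
    """One row of the virtual disk control table 622 (columns 801 to 4603)."""
    virtual_machine_number: int
    virtual_storage_number: str
    logical_unit_number: int
    virtual_disk_number: str
    pair_state: PairState
    secondary_virtual_storage_number: Optional[str]  # None when "n/a"
    secondary_virtual_disk_number: Optional[str]

# The two example rows described in the text.
rows = [
    VirtualDiskRow(0, "0", 0, "121", PairState.NONE, None, None),
    VirtualDiskRow(1, "1", 0, "16", PairState.PRIMARY, "1'", "16'"),
]
```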
As in the case of the first embodiment, each storage system 120 holds a virtual disk control table 661 including information similar to that of the virtual disk control table 622. Accordingly, in the description below, FIG. 46 is referred to also as an explanatory diagram of the virtual disk control table 661. However, the virtual disk control table 661 may include only information regarding the storage system 120 which holds the virtual disk control table 661.
FIG. 47 is a flowchart of processing executed by the storage system 120 when the virtual machine 602 shuts down according to the eighth embodiment of this invention.
As an example, processing executed by the virtual storage system (1) 613B when the virtual machine (1) 602B shown in FIG. 45 shuts down will be described referring to FIG. 47.
First, the virtual machine (1) 602B shuts down (4701).
Then, the virtual storage system (1) 613B receives a shutting-down instruction (4702). As in the case of Step 1611 of FIG. 16, this is an instruction to power off the virtual storage system (1) 613B. For example, the instruction is transmitted from the control terminal 150.
The virtual storage system (1) 613B refers to the virtual disk control table 661 shown in FIG. 46 (4703).
The virtual storage system (1) 613B determines whether the virtual storage system (1) 613B includes a virtual disk 665 belonging to a pair by referring to the virtual disk control table 661 (4704). Specifically, when at least one of the values of the pair state 4601 corresponding to a value “1” of the virtual storage number 802 is “1” or “2”, it is determined that the virtual storage system (1) 613B includes a virtual disk 665 belonging to a pair.
If it is determined in Step 4704 that the virtual storage system (1) 613B does not include a virtual disk 665 belonging to a pair, the virtual storage system (1) 613B shuts down (4705) and the process is finished. In this case, Steps 1612 to 1618 of FIG. 16 are executed in Step 4705.
On the other hand, if it is determined in Step 4704 that the virtual storage system (1) 613B includes a virtual disk 665 belonging to a pair, the virtual storage system (1) 613B determines whether the virtual disk 665 belonging to the pair is a primary virtual disk 665 (4706).
If it is determined in Step 4706 that the virtual disk 665 belonging to the pair is not a primary virtual disk 665, the virtual storage system (1) 613B includes a secondary virtual disk 665. In this case, as the virtual storage system (1) 613B cannot be shut down, the process is finished without executing the shutting-down. The reason is as follows.
When the virtual storage system 613 including the secondary virtual disk 665 shuts down before a copy of data written in the primary virtual disk 665 is transmitted to the secondary virtual disk 665, inconsistency of data occurs between the primary virtual disk 665 and the secondary virtual disk 665. Use of such inconsistent data may be inhibited. The secondary virtual storage system 613 cannot know whether there still remains data not copied to the secondary virtual disk 665 until it obtains information from the primary virtual storage system 613. Accordingly, the secondary virtual storage system 613 cannot be shut down according to a shutting-down instruction from the control terminal 150.
However, as described below referring to FIG. 48, the secondary virtual storage system 613 can be shut down upon reception of a shutting-down instruction from the primary virtual storage system 613.
On the other hand, if it is determined in Step 4706 that the virtual disk 665 belonging to the pair is a primary virtual disk 665, the virtual storage system (1) 613B determines whether there still remains data not transmitted to the secondary virtual storage system 613 (4707). The untransmitted data is a copy of data written in the primary virtual disk 665 which has not been transmitted to the secondary virtual storage system 613 yet.
If it is determined in Step 4707 that there still remains untransmitted data, the virtual storage system (1) 613B transmits the untransmitted data (4708). The secondary virtual storage system 613 which has received the data writes the received data in the secondary virtual disk 665.
Next, the virtual storage system (1) 613B transmits a shutting-down instruction to the secondary virtual storage system 613 (in the example of FIG. 45, the virtual storage system (1′) 613C) (4709).
On the other hand, if it is determined in Step 4707 that no untransmitted data remains, the virtual storage system (1) 613B executes Step 4709 without executing Step 4708.
Next, the virtual storage system (1) 613B determines whether the virtual storage system (1) 613B is allocated to another virtual machine 602 (4710). The other virtual machine 602 means a virtual machine 602 other than the virtual machine (1) 602B shut down in Step 4701. The determination of Step 4710 is executed by referring to the virtual machine number 801 and the virtual storage number 802 of the virtual disk control table 622 shown in FIG. 46.
If it is determined in Step 4710 that the virtual storage system (1) 613B has been allocated to another virtual machine 602, the virtual storage system (1) 613B is used by the other virtual machine 602 even after the virtual machine (1) 602B shuts down. Accordingly, the process is finished without shutting down the virtual storage system (1) 613B.
On the other hand, if it is determined in Step 4710 that the virtual storage system (1) 613B has not been allocated to any other virtual machine 602, the virtual storage system (1) 613B is not used by any virtual machine 602 after the virtual machine (1) 602B shuts down. In this case, the shutting-down of the virtual storage system (1) 613B is executed (4711) and the process is finished. In this case, Steps 1612 to 1618 of FIG. 16 are executed in Step 4711.
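A condensed sketch of the FIG. 47 decision flow follows, reusing the PairState values sketched above. The vstorage object and its methods (has_untransmitted_data, transmit_remaining_copies, send_shutdown_to_secondary, allocated_to_other_vm, shutdown) are hypothetical hooks, not interfaces defined in the specification.

```python
def on_virtual_machine_shutdown(vstorage, table):
    """FIG. 47: shutdown decision of a virtual storage system whose
    virtual machine has shut down. `table` is a list of VirtualDiskRow."""
    rows = [r for r in table if r.virtual_storage_number == vstorage.number]
    paired = [r for r in rows if r.pair_state != PairState.NONE]   # Step 4704
    if not paired:
        vstorage.shutdown()                          # Step 4705
        return
    if any(r.pair_state == PairState.SECONDARY for r in paired):
        return        # Step 4706: a secondary cannot shut down on its own
    if vstorage.has_untransmitted_data():            # Step 4707
        vstorage.transmit_remaining_copies()         # Step 4708
    vstorage.send_shutdown_to_secondary()            # Step 4709
    if not vstorage.allocated_to_other_vm(table):    # Step 4710
        vstorage.shutdown()                          # Step 4711
```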
FIG. 48 is a flowchart of processing executed by the secondary virtual storage system 613 which has received a shutting-down instruction from the primary virtual storage system 613 according to the eighth embodiment of this invention.
The processing of FIG. 48 is executed by the secondary virtual storage system 613 which has received the instruction transmitted in Step 4709 of FIG. 47. A case where the secondary virtual storage system 613 is the virtual storage system (1′) 613C as in the case of FIG. 45 will be described below.
First, the virtual storage system (1′) 613C receives a shutting-down instruction from the primary virtual storage system (1) 613B (4801). This instruction is transmitted in Step 4709 of FIG. 47.
Next, the virtual storage system (1′) 613C determines whether the virtual storage system (1′) 613C is allocated to another virtual machine 602 (4802). Here, among the virtual machines 602 to which the virtual storage system (1′) 613C has been allocated, the other virtual machine 602 means a virtual machine 602 to which a virtual disk 665 has been allocated other than the virtual disks 665 belonging to the same pairs as the virtual disks 665 allocated to the virtual machine 602 shut down in Step 4701.
In the examples of FIGS. 45 to 47, the virtual machine (1) 602B shuts down in Step 4701, and the virtual storage system (1) 613B is then shut down. The virtual disks (16) 665C and (17) 665D have been allocated to the virtual storage system (1) 613B. The virtual disks (16′) 665E and (17′) 665F belong to the same pairs as the virtual disks (16) 665C and (17) 665D. Accordingly, a virtual machine 602 to which a virtual disk 665 other than the virtual disks (16′) 665E and (17′) 665F has been allocated is the other virtual machine 602 in Step 4802.
As shown in FIG. 45, when the virtual storage system (1′) 613C does not include virtual disks 665 other than the virtual disks (16′) 665E and (17′) 665F, it is determined in Step 4802 that the virtual storage system (1′) 613C has not been allocated to another virtual machine 602.
If it is determined in Step 4802 that the virtual storage system (1′) 613C has been allocated to another virtual machine 602, the processing is finished without shutting down the virtual storage system (1′) 613C.
On the other hand, if it is determined in Step 4802 that the virtual storage system (1′) 613C has not been allocated to another virtual machine 602, the shutting-down of the virtual storage system (1′) 613C is executed (4803) and the processing is finished. The shutting-down of Step 4803 is executed as in the case of Step 4711 of FIG. 47.
In Step 4803, the control terminal 150 may transmit a shutting-down instruction to the virtual machine (2) 602C to which the virtual storage system (1′) 613C to be shut down has been allocated.
According to the eighth embodiment, when a pair of virtual disks 665 has been generated and the virtual machine 602 to which the primary virtual storage system 613 has been allocated shuts down, the secondary virtual storage system 613 belonging to the same pair as the primary virtual storage system 613 (i.e., the secondary virtual storage system 613 including the secondary virtual disk 665 which is the copying destination of the data written in the virtual disk 665 of the primary virtual storage system 613) is also shut down. As a result, it is possible to reduce power consumption of the entire computer system.
Next, a ninth embodiment of this invention will be described.
The configuration of the first embodiment described above referring to FIGS. 1 to 16 is applied to the ninth embodiment except for the differences described below. Herein, the differences of the ninth embodiment from the first embodiment will be described.
The computer system of the first embodiment includes one storage system 120, and the two server systems 100 are connected to the one storage system 120. On the other hand, a computer system of the ninth embodiment includes two storage systems 120. One storage system 120 is externally connected to the other storage system 120.
FIG. 49 is a functional block diagram of the computer system according to the ninth embodiment of this invention.
The computer system of the ninth embodiment includes a server system (0) 100A, storage systems (0) 120A and (1) 120B, and a control terminal 150.
A hardware configuration and a functional block diagram of the server system (0) 100A are similar to those of the server system (0) 100A of the first embodiment, and thus detailed description thereof will be omitted. The server system (0) 100A includes virtual machines (0) 602A and (1) 602B.
Hardware configurations and functional block diagrams of the storage systems (0) 120A and (1) 120B are similar to those of the storage system 120 of the first embodiment, and thus detailed description thereof will be omitted.
The storage system (0) 120A includes a virtual storage system (n) 613E. The virtual storage system (n) 613E includes virtual disks (121) 665G, (122) 665H, (300) 665J, and (301) 665K.
The storage system (1) 120B includes a virtual storage system (n+1) 613F. The virtual storage system (n+1) 613F includes virtual disks (500) 665L and (501) 665M.
According to the ninth embodiment, the virtual storage system (n) 613E is externally connected to the virtual storage system (n+1) 613F. In the example of FIG. 49, the virtual disk (500) 665L is externally connected to the virtual disk (300) 665J, and the virtual disk (501) 665M is externally connected to the virtual disk (301) 665K. In this case, no physical disk drive 148 in the storage system (0) 120A is allocated to the virtual disk (300) 665J or (301) 665K.
For example, the virtual machine 602 issues an access request (i.e., a writing or reading request) targeting the virtual disk (300) 665J. The virtual storage system (n) 613E which has received the access request converts the access request into an access request to the externally connected virtual disk (500) 665L and transmits the converted access request to the virtual storage system (n+1) 613F. The virtual storage system (n+1) 613F which has received the access request executes the access according to the request and transmits a response to the request to the virtual storage system (n) 613E. The virtual storage system (n) 613E which has received the response converts the response into a response from the virtual storage system (n) 613E and transmits the converted response to the virtual machine 602.
FIG. 50 is an explanatory diagram of a disk address translation table 662 according to the ninth embodiment of this invention.
The disk address translation table 662 holds information for managing a correlation between a virtual disk 665 in the virtual storage system 613 and the physical disk drive 148 allocated to the virtual disk 665. Additionally, the disk address translation table 662 holds information for managing a correlation between a virtual disk 665 in the virtual storage system 613 and a virtual disk 665 in another virtual storage system 613 externally connected to the virtual disk 665.
Specifically, the disk address translation table 662 of the ninth embodiment includes six columns: a virtual storage system number 5001, a virtual disk number 901, a virtual block address 902, a physical disk number 903, a physical block address 904, and an external disk flag 5002.
The virtual disk number 901 and the virtual block address 902 are similar to those of the first embodiment, and thus description thereof will be omitted.
An identifier of the physical disk drive 148 allocated to the virtual disk 665 is registered in the physical disk number 903. However, when a virtual disk 665 in another virtual storage system 613 is externally connected to the virtual disk 665, an identifier of the externally connected virtual disk 665 is registered in the physical disk number 903.
A physical block address for uniquely identifying, in each physical disk drive, a logical block of the physical disk drive 148 allocated to the virtual disk 665 is registered in the physical block address 904. However, when a virtual disk 665 in another virtual storage system 613 is externally connected to the virtual disk 665, a virtual block address of the externally connected virtual disk 665 is registered in the physical block address 904.
An identifier of the virtual storage system 613 including the virtual disk 665 indicated by the virtual disk number 901 is registered in the virtual storage system number 5001.
Information indicating external connection of another virtual disk 665 to the virtual disk 665 indicated by the virtual disk number 901 is registered in the external disk flag 5002. In the example of FIG. 50, when the virtual disk 665 is externally connected, “1” is registered in the external disk flag 5002. In this case, an identifier of the externally connected virtual disk 665 and a virtual block address in the externally connected virtual disk 665 are registered in the physical disk number 903 and the physical block address 904, respectively.
As an example, each column corresponding to a value “300” of the virtual disk number 901 will be described referring to FIGS. 49 and 50. In this example, “n” is registered in the virtual storage system number 5001. This indicates that the virtual disk (300) 665J is included in the virtual storage system (n) 613E.
“1” is registered in the external disk flag 5002 corresponding to the virtual disk (300) 665J. This indicates that another virtual disk 665 is externally connected to the virtual disk (300) 665J. In this case, an identifier of the externally connected virtual disk 665 and a virtual block address are registered in the physical disk number 903 and the physical block address 904.
“0x00000000”, “500”, and “0x00000000” are registered in the virtual block address 902, the physical disk number 903, and the physical block address 904 corresponding to the virtual disk (300) 665J, respectively. This indicates that the virtual disk (500) 665L is externally connected to the virtual disk (300) 665J, and that the address “0x00000000” in the virtual disk (300) 665J corresponds to the address “0x00000000” in the virtual disk (500) 665L.
In this case, upon reception of an access request targeting the address “0x00000000” of the virtual disk (300) 665J, the virtual storage system (n) 613E transmits, to the virtual storage system (n+1) 613F, an access request obtained by converting the target of the received access request into the address “0x00000000” of the virtual disk (500) 665L.
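The following sketch shows this translation step, assuming each table entry is a dictionary keyed by the six column names; the layout and function name are illustrative.

```python
def translate_access(table, vdisk, vblock):
    """FIG. 50: translate a (virtual disk, virtual block address) target
    using the disk address translation table 662."""
    for row in table:
        if row["virtual_disk_number"] == vdisk and \
           row["virtual_block_address"] == vblock:
            if row["external_disk_flag"] == 1:
                # Forward to the externally connected virtual disk.
                return ("external", row["physical_disk_number"],
                        row["physical_block_address"])
            return ("local", row["physical_disk_number"],
                    row["physical_block_address"])
    raise KeyError("no translation entry")

# Worked example from the text: address 0x00000000 of virtual disk 300
# maps to address 0x00000000 of the externally connected virtual disk 500.
table_662 = [{"virtual_storage_system_number": "n",
              "virtual_disk_number": 300,
              "virtual_block_address": 0x00000000,
              "physical_disk_number": 500,
              "physical_block_address": 0x00000000,
              "external_disk_flag": 1}]
assert translate_access(table_662, 300, 0x00000000) == ("external", 500, 0x00000000)
```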
FIG. 51 is an explanatory diagram of a storage resources control table 621 according to the ninth embodiment of this invention.
The storage resources control table 621 of the ninth embodiment includes ten columns: a virtual machine number 1001, a virtual storage system number 1002, a virtual disk number 1003, a disk cache capacity 1004, a CPU 1005 in charge, an internal bandwidth 1006, a virtual channel adaptor 1007, a channel adaptor 1008, an I/O adaptor 1009, and a virtual I/O adaptor 1010. Description of those columns will be omitted, as they are similar to those of the first embodiment shown in FIG. 10.
By assigning, to each resource in the entire computer system including one or more virtual machines 602 and one or more virtual storage systems 613, an identifier which is unique in the computer system, it is possible to manage the resources in a unified manner (no matter which apparatus the resources belong to).
FIG. 52 is a flowchart of shutdown processing of the virtual machine 602 executed according to the ninth embodiment of this invention.
The processing of FIG. 52 is executed when the user cuts off power of one of the virtual machines 602. The shutdown processing of the virtual machine 602 of the ninth embodiment shown in FIG. 52 is similar to that of the first embodiment shown in FIG. 16 except for the differences below. The differences of the shutdown processing of the virtual machine 602 of the ninth embodiment from that of the first embodiment will be described below. The meanings of "relevant resource", "relevant virtual storage system 613", and the like are as described above referring to FIG. 16.
First, the shutting-down of the virtual machine is executed (5201). This processing corresponds to Steps 1601 to 1610 of FIG. 16, and thus description thereof will be omitted.
Steps 5202 to 5205 are respectively similar to Steps 1611 to 1614 of FIG. 16, and thus description thereof will be omitted.
If it is determined in Step 5205 that the relevant resource has not been allocated to a virtual storage system 613 other than the relevant virtual storage system 613, a storage hypervisor 612 determines whether the relevant resource is an external disk (5206). "The relevant resource is an external disk" means that the relevant virtual storage system 613 converts an access request to the relevant resource into an access request to the externally connected virtual storage system 613 and transmits the converted access request. When the external disk flag 5002 corresponding to the relevant resource is "1", it is determined in Step 5206 that the relevant resource is an external disk.
If it is determined in Step 5206 that the relevant resource is an external disk, the relevant resource is included in a storage system 120 different from the storage system 120 including the relevant virtual storage system 613. Accordingly, the storage hypervisor 612 of the storage system 120 including the relevant virtual storage system 613 cannot directly control power of the relevant resource. In this case, the storage hypervisor 612 determines whether the relevant resource is managed by one of the virtual storage systems 613 (5207).
If it is determined in Step 5207 that the relevant resource is managed by one of the virtual storage systems 613, the storage hypervisor 612 transmits an instruction to power off the relevant resource to the virtual storage system 613 which manages the relevant resource (5208).
On the other hand, if it is determined in Step 5207 that the relevant resource is not managed by any virtual storage system 613, the process proceeds to Step 5210 without cutting off the power of the relevant resource.
If it is determined in Step 5206 that the relevant resource is not an external disk, the relevant resource is included in the storage system 120 which includes the relevant virtual storage system 613. In this case, the process proceeds to Step 5209. Steps 5209 to 5212 are respectively similar to Steps 1615 to 1618 of FIG. 16, and thus description thereof will be omitted.
Referring to FIGS. 49 to 52, a case of shutting down the virtual machine (0) 602A will be described. In this case, the shutting-down of the virtual machine (0) 602A is finished in Step 5201. The virtual disks (121) 665G, (122) 665H, (300) 665J, and (301) 665K have been allocated to the virtual machine (0) 602A as shown in FIG. 51.
The physical disk drives (8) 148 and (9) 148 have been allocated to the virtual disk (121) 665G as shown in FIG. 50. A physical disk drive (10) 148 has been allocated to the virtual disk (122) 665H. The externally connected virtual disk (500) 665L has been allocated to the virtual disk (300) 665J. The externally connected virtual disk (501) 665M has been allocated to the virtual disk (301) 665K.
In this case, for the physical disk drives (8) 148, (9) 148, and (10) 148, it is determined in Step 5206 that the relevant resource is not an external disk. On the other hand, for the virtual disks (500) 665L and (501) 665M, it is determined in Step 5206 that the relevant resource is an external disk.
According to the ninth embodiment of this invention, the externally connected virtual storage system 613 is allocated to the virtual machine 602. When the virtual machine 602 shuts down, power of the physical resources allocated to the externally connected virtual storage system 613 is cut off. As a result, it is possible to reduce power consumption of the entire computer system.
Next, a tenth embodiment of this invention will be described.
The configuration of the first embodiment described above referring to FIGS. 1 to 16 is applied to the tenth embodiment except for the differences described below. The differences of the tenth embodiment from the first embodiment will be described below.
According to the first to ninth embodiments, when the virtual storage system 613 shuts down, it is determined whether the physical resources allocated to the virtual storage system 613 have been allocated to another device. Then, the power of the physical resources not allocated to another device is cut off, as shown in FIG. 16 or the like. On the other hand, when a resource allocated to the shut-down virtual storage system 613 has also been allocated to another virtual storage system 613, power of the resource cannot be cut off.
However, the load imposed on the resources allocated to the virtual storage system 613 is reduced as a result of shutting down the virtual storage system 613. Thus, it is possible to cut off power of the resources which become unnecessary according to the reduced load. In other words, only the resources necessary for covering the loads imposed by the running virtual storage systems 613 are left, and power of the other resources is cut off. Accordingly, it is possible to reduce power consumption without lowering performance of the virtual storage systems 613.
Thus, according to the tenth embodiment, it is determined whether physical resources are to be powered off based on a utilization rate of the physical resources. The tenth embodiment will be described below in detail referring to FIGS. 53 and 54.
FIG. 53 is a functional block diagram of a computer system according to the tenth embodiment.
The computer system of the tenth embodiment includes a server system (0) 100A, a storage system (0) 120, and a control terminal 150.
A hardware configuration and a functional block diagram of the server system (0) 100A are similar to those of the server system (0) 100A of the first embodiment, and thus detailed description thereof will be omitted. The server system (0) 100A includes virtual machines (0) 602A and (1) 602B.
A hardware configuration and a functional block diagram of the storage system (0) 120 are similar to those of the storage system 120 of the first embodiment, and thus detailed description thereof will be omitted.
The storage system (0) 120 includes virtual storage systems (n) 613E and (n+1) 613F. CPUs (4) 122A, (6) 122C, (8) 133A, and (10) 133C have been allocated to the virtual storage system (n) 613E. Similarly, CPUs (4) 122A, (6) 122C, (8) 133A, and (10) 133C have been allocated to the virtual storage system (n+1) 613F. In other words, according to the tenth embodiment, each CPU 122 or the like has been allocated to the two virtual storage systems 613.
Presuming that the first embodiment is applied to the example of FIG. 53, when the virtual machine (0) 602A shuts down, the virtual storage system (n) 613E allocated to the virtual machine (0) 602A also shuts down. However, the CPUs (4) 122A, (6) 122C, (8) 133A, and (10) 133C have all been allocated to the virtual storage system (n+1) 613F as well. Accordingly, power of these CPUs cannot be cut off.
However, when the load imposed on the CPUs 122 or the like is reduced because of the shutting-down of the virtual storage system (n) 613E, the number of CPUs 122 or the like necessary for covering the load is also reduced.
For example, it is presumed in FIG. 53 that the average utilization rate of the four CPUs 122 or the like is 100% as a result of the virtual machines (0) 602A and (1) 602B imposing the same amount of load on the storage system 120. In this case, when the virtual machine (0) 602A and the virtual storage system (n) 613E shut down, the average utilization rate of the four CPUs 122 or the like is reduced to 50%.
When power of two of the four CPUs 122 or the like is cut off, the average utilization rate of the remaining two CPUs 122 or the like is expected to be 100%. In other words, the load imposed on the CPUs 122 or the like can be covered by the remaining two CPUs 122 or the like, and the virtual machine (1) 602B needs only those two CPUs 122. Accordingly, by cutting off power of two of the CPUs 122 or the like, it is possible to reduce power consumption without lowering performance of the virtual storage system (n+1) 613F. The processing thus executed will be described referring to FIG. 54.
FIG. 54 is a flowchart of processing to power off physical resources based on a utilization rate executed according to the tenth embodiment of this invention.
Referring to FIG. 54, a case where the virtual storage system (n) 613E of FIG. 53 shuts down will be described as an example.
First, shutting-down of the virtual storage system (n) 613E is completed (5401). The completion of this shutting-down is equivalent to an end of Step 1618 of FIG. 16.
Then, the storage hypervisor 612 specifies a resource allocated to the shut-down virtual storage system (n) 613E (5402). This specification is executed as in the case of Step 1612 of FIG. 16.
Next, the storage hypervisor 612 determines whether the resource specified in Step 5402 (i.e., the relevant resource) has also been allocated to a virtual storage system 613 other than the shut-down virtual storage system (n) 613E (5403). This determination is executed as in the case of Step 1614 of FIG. 16.
If it is determined in Step 5403 that the relevant resource has not been allocated to any virtual storage system 613 other than the shut-down virtual storage system (n) 613E, power of the relevant resource can be cut off by the processing of FIG. 16. In this case, the process is finished without executing the power cut-off of Step 5406 described below.
On the other hand, if it is determined in Step 5403 that the relevant resource has also been allocated to a virtual storage system 613 other than the shut-down virtual storage system (n) 613E, the storage hypervisor 612 determines whether there are a plurality of resources which are similar in kind to the relevant resource and whose power can be individually controlled (5404). In other words, it is determined whether the resource group to which the relevant resource belongs includes a plurality of devices whose power can be individually controlled.
In the example of FIG. 53, the four CPUs 122 or the like are allocated to the virtual storage system (n+1) 613F. The CPU group constituted of these four CPUs 122 or the like includes a plurality of devices (i.e., the CPUs 122 or the like) whose power can be individually controlled. Accordingly, in Step 5404, it is determined that a plurality of resources similar in kind to the relevant resource are present.
If it is determined in Step 5404 that a plurality of resources similar in kind to the relevant resource are not present, the power of the relevant resource cannot be cut off. In this case, the process is finished without executing the power cut-off of Step 5406 described below.
On the other hand, if it is determined in Step 5404 that a plurality of resources similar in kind to the relevant resource are present, the storage hypervisor 612 determines whether an average utilization rate u of the plurality of resources is equal to or less than 1−1/x (5405). Here, x is the total number of the resources detected in Step 5404, that is, the relevant resource and the resources similar in kind to it.
If it is determined in Step 5405 that the average utilization rate u is larger than 1−1/x, the remaining resources cannot cover the imposed loads when power of the relevant resource (or one of the resources of the similar kind) is cut off. Thus, the processing is finished without executing the power cut-off of Step 5406 described below.
On the other hand, if it is determined in Step 5405 that the average utilization rate u is equal to or less than 1−1/x, the remaining resources can cover the imposed loads even when power of at least one resource is cut off. Thus, the storage hypervisor 612 cuts off power of x(1−u) resources among the relevant resource and the resources of the similar kind (5406). However, when x(1−u) is not an integer, the values after the decimal point are discarded. The integer part of x(1−u) indicates the number of resources unnecessary for the virtual storage system 613.
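The thresholds of Steps 5405 and 5406 follow from a simple capacity argument, supplied here for clarity. Measured in units of one resource's capacity, the x resources together carry a total load of x·u. After power of k resources is cut off, the remaining x−k resources can cover this load only when x·u is equal to or less than x−k, that is, when k is equal to or less than x(1−u). Cutting off even one resource (k=1) is therefore possible only when u is equal to or less than 1−1/x, which is the condition of Step 5405, and the largest permissible k is the integer part of x(1−u), which is the count used in Step 5406.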
For example, when the average utilization rate of the four CPUs 122 or the like is 60%, u is "0.6" and x is "4". In this case, 1−1/x is 0.75, and thus "yes" is determined in Step 5405. Because x(1−u) is 1.6, power of one CPU 122 or the like is cut off in Step 5406.
Thus, the process is finished.
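The decision procedure of FIG. 54 can be summarized by the following minimal sketch. The function and parameter names are hypothetical; Step 5402 (specifying the relevant resource) and Step 5403 (determining whether it is shared) are represented as plain inputs, and the storage hypervisor 612 itself is not modeled.

```python
import math

def count_resources_to_power_off(shared: bool, x: int, u: float) -> int:
    """Sketch of Steps 5403 to 5406 of FIG. 54.

    shared -- whether the relevant resource is also allocated to another
              virtual storage system (the determination of Step 5403)
    x      -- total number of same-kind, individually power-controllable
              resources, including the relevant one (Step 5404)
    u      -- average utilization rate of those resources, where 1.0
              means 100% (Step 5405)
    Returns the number of resources whose power is cut off in Step 5406.
    """
    if not shared:
        # An exclusively allocated resource is handled by the processing
        # of FIG. 16, so Step 5406 is not executed here.
        return 0
    if x < 2:
        # No plurality of individually power-controllable devices (Step 5404).
        return 0
    if u > 1.0 - 1.0 / x:
        # The remaining resources could not cover the loads (Step 5405).
        return 0
    # Step 5406: cut off power of floor(x * (1 - u)) resources.
    return math.floor(x * (1.0 - u))

# Worked example from the text: four CPUs at a 60% average utilization rate.
print(count_resources_to_power_off(shared=True, x=4, u=0.6))  # 1
```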
According to the tenth embodiment of this invention, the processing of FIG. 54 is executed after the processing of FIG. 16 is finished. As a result, among the resources whose power cannot be cut off by the processing of FIG. 16, it is possible to cut off power of those which are no longer necessary as the loads are reduced.
Next, an eleventh embodiment of this invention will be described.
FIG. 55 is a functional block diagram of a computer system according to the eleventh embodiment of this invention.
According to the first to tenth embodiments described above, the hypervisor 103 implements the plurality of virtual machines 602 in one computer by setting the logical partitions in each server system 100. Similarly, the storage hypervisor 612 implements the plurality of virtual storage systems 613 in one storage system 120. However, these hypervisors can also allocate physical resources of a plurality of devices to one virtual device (i.e., a virtual machine 602 or a virtual storage system 613) by setting logical partitions over the plurality of devices.
The computer system of the eleventh embodiment shown in FIG. 55 includes n server systems 100 constituted of server systems (0) 100A to (n−1) 100C, and m storage systems 120 constituted of storage systems (0) 120A to (m−1) 120C. The server systems 100 and the storage systems 120 are connected to each other via a storage area network (SAN) 5501. Additionally, the server systems 100 and the storage systems 120 are connected to a control terminal 150 via a network 170.
In the example of FIG. 55, a hypervisor 103 that manages all the server systems 100 realizes a plurality of virtual machines 602. Specifically, virtual machines (0) 602A, (1) 602B, and (2) 602C are set over the server systems (0) 100A and (1) 100B. Physical resources of the server systems (0) 100A and (1) 100B are allocated to these virtual machines 602. Additionally, a virtual machine (3) 602D to which physical resources of the server system (n−1) 100C alone are allocated is set.
On the other hand, a storage hypervisor 612 that manages all the storage systems 120 realizes a plurality of virtual storage systems 613. Specifically, virtual storage systems (0) 613A, (1) 613B, and (2) 613C are set over the storage systems (0) 120A and (1) 120B. Physical resources of the storage systems (0) 120A and (1) 120B are allocated to these virtual storage systems 613. Additionally, a virtual storage system (3) 613D to which physical resources of the storage system (m−1) 120C alone are allocated is set.
Thus, the first to tenth embodiments can be applied to a computer system where the physical resources of a plurality of devices are allocated to the virtual machines 602 and the virtual storage systems 613. In this case, processing similar to the aforementioned processing is executed.
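Such a configuration implies bookkeeping that records which physical devices back each virtual device. The following is a hypothetical sketch of that mapping for the example of FIG. 55; the data structure and names are illustrative assumptions and are not taken from the embodiment.

```python
# Hypothetical allocation map for FIG. 55: logical partitions may span
# several physical devices, so each virtual device is backed by a list
# of server systems 100 or storage systems 120.

virtual_machine_allocation = {
    "virtual machine (0) 602A": ["server system (0) 100A", "server system (1) 100B"],
    "virtual machine (1) 602B": ["server system (0) 100A", "server system (1) 100B"],
    "virtual machine (2) 602C": ["server system (0) 100A", "server system (1) 100B"],
    "virtual machine (3) 602D": ["server system (n-1) 100C"],
}

virtual_storage_allocation = {
    "virtual storage system (0) 613A": ["storage system (0) 120A", "storage system (1) 120B"],
    "virtual storage system (1) 613B": ["storage system (0) 120A", "storage system (1) 120B"],
    "virtual storage system (2) 613C": ["storage system (0) 120A", "storage system (1) 120B"],
    "virtual storage system (3) 613D": ["storage system (m-1) 120C"],
}

def devices_backing(virtual_device: str) -> list:
    """Return the physical devices whose resources back a virtual device."""
    allocation = {**virtual_machine_allocation, **virtual_storage_allocation}
    return allocation.get(virtual_device, [])

print(devices_backing("virtual storage system (3) 613D"))
# ['storage system (m-1) 120C']
```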