Disclosure of Invention
The main objective of the invention is to provide a data center aggregation core switch and a backplane thereof, so as to solve the problem of insufficient hardware resources on the control plane of the data center aggregation core switch and thereby meet users' resource requirements on the management plane.
To achieve the above object, the present invention provides a backplane of a data center aggregation core switch, comprising a control plane that includes an active main control and a standby main control, wherein each main control is connected to the service boards through GE channels, and each main control is configured with one or more CPU processors.
Preferably, the number of CPU processors on each main control can be configured in three forms: one, two, or four.
Preferably, when one CPU processor is configured, it is the main CPU and communicates with the Patburg south bridge chip through a DMI bus. When two CPU processors are configured, one is the main CPU and the other is a slave CPU; the main CPU communicates with the Patburg south bridge chip through a DMI bus and with the slave CPU through a QPI bus. When four CPU processors are configured, one is the main CPU and the other three are slave CPUs; the main CPU communicates with the Patburg south bridge chip through a DMI bus and with the slave CPUs through QPI buses.
Preferably, each main control is further configured with an Ethernet controller and a switching chip, wherein:
the Ethernet controller is communicatively connected to the main CPU through a PCIe interface and is managed by the main CPU; the Ethernet controller provides two Ethernet channels, one for communicating with the switching chip of the local main control and the other for communicating with the switching chip of the peer main control; and the switching chip is connected to the service boards through GE channels.
Preferably, the switching chip provides two HIGIG links for communication between the switching chips of the active and standby main controls.
Preferably, the switching chip provides up to 28 GE channels.
Preferably, the switching chip of the active main control and the switching chip of the standby main control are stacked through the two HIGIG links.
Preferably, after stacking, the switching chip of the active main control and the switching chip of the standby main control perform link aggregation on GE channel 1 of the service board, then on GE channel 2 of the service board, and so on, up to GE channel 28 of the service board.
Preferably, the highest rate of each HIGIG link is 42 Gbps.
An embodiment of the invention also provides a data center aggregation core switch, which comprises a backplane and service boards communicatively connected to the backplane, wherein the backplane is the backplane described above.
According to the data center aggregation core switch and the backplane thereof provided by the embodiments of the invention, applying the multiprocessor management plane architecture to the data center aggregation core switch improves the hardware performance of the control plane, solves the problem of insufficient hardware resources on the control plane, meets users' resource requirements on the management plane, and allows users to customize the management plane according to their own needs. In addition, the switching chips of the active and standby main controls can be seamlessly stacked and combined, which expands the number of channels, raises the channel transmission rate, and further improves system performance.
Detailed Description
The solution of the embodiments of the invention is mainly as follows: the multiprocessor management plane architecture is applied to the data center aggregation core switch, so that the hardware performance of the control plane is improved, the problem of insufficient hardware resources on the control plane of the data center aggregation core switch is solved, users' resource requirements on the management plane are met, and users can customize the management plane according to their own needs. In addition, the switching chips of the active and standby main controls can be seamlessly stacked and combined, which expands the number of channels, raises the channel transmission rate, and improves system performance.
With the rise of cloud computing, the performance requirements on the data center aggregation core switch are ever higher. In a data center switch, in addition to improving data-plane packet forwarding and service processing, the data processing and computing capability of the control plane must also be improved accordingly. Existing data center switches emphasize only the expansion of data-plane switching capacity and the improvement of service processing capability of the aggregation core switch, while neglecting the enhancement of control-plane hardware performance, and therefore cannot meet users' demanding customization requirements.
To solve the above problem, the present invention applies a multiprocessor architecture to a data center aggregation core switch. The invention takes the control and management plane of the data center aggregation core switch as its object and improves overall hardware performance from the perspective of the hardware processor; meanwhile, the number of CPUs can be customized according to user requirements, i.e., in the same device, either a single-CPU hardware configuration or a multi-CPU hardware configuration may be selected. Therefore, a manufacturer of data center aggregation core switches only needs to develop one control-plane system in hardware, which can then be configured into high-end, mid-range, and low-end products, reducing development cost. Regarding the out-of-band management channels between the control plane and the service plane, the number of out-of-band channels is expanded in hardware so that the control plane can manage up to 28 service boards simultaneously, which facilitates service-board expansion and better meets the needs of data centers in the era of big data and cloud computing. On the management data link, a 10G Ethernet channel provides a standby link for the management channel. Using the HIGIG stacking technique and the IEEE 802.3ad specification, the links from control plane A (the active control plane) and control plane B (the standby control plane) to the same service board are aggregated, which increases the management channel bandwidth and enhances system reliability.
Specifically, refer to fig. 1, which is a schematic diagram of the management plane of a preferred embodiment of the data center aggregation core switch of the present invention.
As shown in fig. 1, the data center aggregation core switch includes a backplane and a service board communicatively connected to the backplane, where:
the embodiment of the invention provides a backboard of a data center convergence core switch, which comprises: a control plane comprising two identical masters, master a and master B. One of the master control is the master control (corresponding to the master control plane), and the other is the standby master control (corresponding to the standby control plane), and the master control and the standby control are in a hot standby state.
Each main control is connected to the service boards through GE channels provided by a switching chip (such as a BCM56546), and can support up to 28 service boards. Each main control may be configured with one or more CPU processors; that is, the main control adopts a configurable multiprocessor architecture.
In this embodiment, the system may use the Intel Sandy Bridge-EN platform from Intel Corporation, which comprises Sandy Bridge-EN family CPUs and the Patburg south bridge chip.
In the control plane of the data center aggregation core switch, the number of CPU processors of each main control can be configured to 1, 2, or 4.
The specific configurations are as follows:
(1) when the number of CPU processors is 1, that CPU processor is the main CPU and communicates with the Patburg south bridge chip through a DMI (Direct Media Interface) bus;
(2) when the number of CPU processors is 2, one is the main CPU and the other is a slave CPU; the main CPU communicates with the Patburg south bridge chip through a DMI bus and with the slave CPU through a QPI (QuickPath Interconnect) bus;
(3) when the number of CPU processors is 4, one is the main CPU and the other 3 are slave CPUs; the main CPU communicates with the Patburg south bridge chip through a DMI bus and with the slave CPUs through QPI buses.
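As an illustrative aid only, and not part of the original disclosure, the following C sketch models the three configurable topologies described above; all names and the data structure are hypothetical assumptions.

```c
/* Illustrative sketch of the configurable control-plane CPU topology.
 * All names are hypothetical; only 1, 2 or 4 CPUs are valid counts. */
#include <stdio.h>

struct cpu_link {
    int cpu_id;            /* 0 is always the main CPU                     */
    const char *uplink;    /* bus used to reach the south bridge or CPU0   */
};

/* Fill in the topology for a 1-, 2- or 4-CPU control plane. */
int describe_topology(int cpu_count, struct cpu_link links[4])
{
    if (cpu_count != 1 && cpu_count != 2 && cpu_count != 4)
        return -1;                                   /* unsupported count  */
    for (int i = 0; i < cpu_count; i++) {
        links[i].cpu_id = i;
        /* The main CPU talks to the Patburg south bridge over DMI;
         * slave CPUs talk to the main CPU over QPI.                       */
        links[i].uplink = (i == 0) ? "DMI to south bridge" : "QPI to CPU0";
    }
    return cpu_count;
}

int main(void)
{
    struct cpu_link links[4];
    int n = describe_topology(2, links);             /* 2-CPU configuration */
    for (int i = 0; i < n; i++)
        printf("CPU%d: %s\n", links[i].cpu_id, links[i].uplink);
    return 0;
}
```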
In the CPU configuration, each CPU has two pins, SOCKET_ID[0] and SOCKET_ID[1], that set the CPU's socket ID number, so that each CPU can determine its role in the system at start-up.
Each of SOCKET_ID[0] and SOCKET_ID[1] can be pulled up to the power supply VTT to be read as 1, or left floating (NC) to be read as 0, giving two possible states per pin.
The SOCKET_ID[1:0] of a CPU can therefore be set to one of four IDs: 00, 01, 10, or 11.
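A minimal sketch of how the two strap pins map to a socket ID, assuming the pin states are simply packed into a 2-bit value (hypothetical names, for illustration only):

```c
/* Each strap pin reads 1 when pulled up to VTT and 0 when left floating (NC).
 * SOCKET_ID[1:0] is the two pin states packed into a 2-bit value. */
enum pin_state { PIN_NC = 0, PIN_PULLUP_VTT = 1 };

unsigned socket_id(enum pin_state id1, enum pin_state id0)
{
    return ((unsigned)id1 << 1) | (unsigned)id0;   /* yields 0, 1, 2 or 3 */
}
```

For example, pulling SOCKET_ID[0] up to VTT while leaving SOCKET_ID[1] floating corresponds to socket_id(PIN_NC, PIN_PULLUP_VTT), i.e. ID 01.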
In the control plane of the same data center aggregation core switch, the circuit only needs to be designed once, in the 4-CPU form. In practical application, depending on the configuration required, 1 CPU with its peripheral circuits, 2 CPUs with their peripheral circuits, or 4 CPUs with their peripheral circuits can be selectively soldered. That is, only one control plane needs to be developed, yet 3 product forms are possible. This enables flexible configuration, reduces development cost, and increases economic benefit.
More specifically, fig. 2 shows the application of 4 CPUs in the control plane, wherein:
CPU0, CPU1, CPU2, and CPU3 of the control plane of the data center aggregation core switch use Intel Sandy Bridge platform processors, with CPU0 as the master processor and CPU1, CPU2, and CPU3 as the slave processors.
CPU0 of the control plane of the data center aggregation core switch communicates with the Patburg south bridge chip via the DMI interface.
In the control plane of the data center aggregation core switch, CPU0 and CPU1 communicate through a QPI bus; CPU0 and CPU3 communicate through a QPI bus; CPU1 and CPU2 communicate through a QPI bus; and CPU2 and CPU3 communicate through a QPI bus.
For CPU0 of the control plane of the data center aggregation core switch, the SOCKET_ID[0] pin is left floating (NC) to read 0 and the SOCKET_ID[1] pin is left floating (NC) to read 0, so that SOCKET_ID[1:0] of CPU0 is configured to 00.
For CPU1, the SOCKET_ID[0] pin is connected to VTT through resistor R1 (pulled up to 1) and the SOCKET_ID[1] pin is left floating (NC, 0), so that SOCKET_ID[1:0] of CPU1 is configured to 01.
For CPU2, the SOCKET_ID[0] pin is left floating (NC, 0) and the SOCKET_ID[1] pin is connected to VTT through resistor R2 (pulled up to 1), so that SOCKET_ID[1:0] of CPU2 is configured to 10.
For CPU3, the SOCKET_ID[0] and SOCKET_ID[1] pins are each connected to VTT through a pull-up resistor (R3 and R4), so that both read 1 and SOCKET_ID[1:0] of CPU3 is configured to 11.
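The four strap settings above can be summarized in a small, self-contained C sketch (a hypothetical representation, not part of the original disclosure; the exact pin-to-resistor wiring of CPU3 is as assumed above):

```c
/* Strap settings of the four CPUs in the 4-CPU configuration of fig. 2.
 * 1 = pin pulled up to VTT through a resistor, 0 = pin left floating (NC). */
#include <stdio.h>

struct cpu_strap { int id1; int id0; };   /* SOCKET_ID[1], SOCKET_ID[0] */

int main(void)
{
    const struct cpu_strap strap[4] = {
        { 0, 0 },   /* CPU0 -> SOCKET_ID[1:0] = 00 (both pins NC)        */
        { 0, 1 },   /* CPU1 -> SOCKET_ID[1:0] = 01 (SOCKET_ID[0] via R1) */
        { 1, 0 },   /* CPU2 -> SOCKET_ID[1:0] = 10 (SOCKET_ID[1] via R2) */
        { 1, 1 },   /* CPU3 -> SOCKET_ID[1:0] = 11 (both pins via R3/R4) */
    };

    for (int cpu = 0; cpu < 4; cpu++)
        printf("CPU%d: SOCKET_ID[1:0] = %d%d\n",
               cpu, strap[cpu].id1, strap[cpu].id0);
    return 0;
}
```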
According to user requirements, the 4-CPU architecture of the control plane of the data center aggregation core switch can be configured into a 1-CPU mode through selective soldering, as shown in fig. 3. Solid lines in fig. 3 indicate components that are soldered, and dashed lines indicate components that are not. In fig. 3 only CPU0 and the Patburg south bridge chip are soldered; CPU1, CPU2, CPU3, R1, R2, R3, and R4 are not soldered. The 1-CPU mode is thus configured.
Likewise, according to user requirements, the 4-CPU architecture of the control plane of the data center aggregation core switch can be configured into a 2-CPU mode through selective soldering, as shown in fig. 4. In fig. 4, solid lines indicate components that are soldered and dashed lines indicate components that are not. In fig. 4 only CPU0, CPU1, R1, and the Patburg south bridge chip are soldered; CPU2, CPU3, R2, R3, and R4 are not soldered. The 2-CPU mode is thus configured.
Further, the control plane of the data center aggregation core switch in this embodiment adopts a high-speed out-of-band management channel that is highly reliable and easy to expand. The details are as follows:
the main control system uses 10GbE dual-port Ethernet controllers, and each main control system is provided with an Ethernet controller and a switching chip; wherein:
the Ethernet controller is communicatively connected to the main CPU through a PCIe interface and is managed by the main CPU, and the switching chip is connected to the service boards through GE channels.
The 10GbE dual-port Ethernet controller provides 2 10GbE Ethernet channels: one is used to communicate with the switching chip of the local main control, and the other is used to communicate with the switching chip of the peer main control. The advantage is that a data channel of up to 10 Gbps is provided between the CPU and the local switching chip, improving the management efficiency of the whole device, and a 10 Gbps data channel is also provided between the local CPU and the switching chip of the peer main control, giving the management link between a main control and the service boards a standby path.
For example, when the 10G Ethernet channel between the 10GbE dual-port Ethernet controller of main control A and the switching chip of main control A fails, the CPU of main control A can no longer manage the service boards over the link: CPU of main control A → 10GbE dual-port Ethernet controller of main control A → switching chip of main control A → 28 service boards. In this case, the link: CPU of main control A → 10GbE dual-port Ethernet controller of main control A → switching chip of main control B → 28 service boards can be selected to manage the service boards instead. The advantage is that no active/standby switchover between the main controls is required and services are not affected, which improves the reliability of the whole system.
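A simplified sketch of this path-selection logic, under the assumption that it can be reduced to choosing between the local and peer switching-chip ports (the actual implementation is not specified in the disclosure; all names are hypothetical):

```c
/* Simplified out-of-band management path selection (hypothetical sketch).
 * The dual-port controller of main control A has one port toward the local
 * switching chip and one port toward the peer (main control B) switching chip. */
#include <stdbool.h>
#include <stdio.h>

enum mgmt_path { PATH_LOCAL_SWITCH, PATH_PEER_SWITCH, PATH_NONE };

enum mgmt_path select_path(bool local_link_up, bool peer_link_up)
{
    if (local_link_up)
        return PATH_LOCAL_SWITCH;   /* normal case: use local switching chip */
    if (peer_link_up)
        return PATH_PEER_SWITCH;    /* fall back to the peer's switching chip;
                                       no active/standby switchover needed   */
    return PATH_NONE;
}

int main(void)
{
    /* Local 10G channel failed, peer channel still up. */
    enum mgmt_path p = select_path(false, true);
    printf("selected path: %s\n",
           p == PATH_LOCAL_SWITCH ? "local switching chip" :
           p == PATH_PEER_SWITCH  ? "peer switching chip"  : "none");
    return 0;
}
```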
In this embodiment, the main control uses the switching chip as the out-of-band management switching chip. The switching chip provides 28 GE channels, so each main control can manage up to 28 service boards.
The switching chip is provided with two 10GbE Ethernet channels: one communicates with the 10GbE Ethernet channel on the local board, and the other, a 10GBASE-KR channel, communicates with the switching chip of the peer main control. The chip also provides two HIGIG links (each with a maximum rate of 42 Gbps) for communication between the switching chips of the active and standby main controls.
Furthermore, the two HIGIG links between main control A and main control B allow the switching chip of main control A and the switching chip of main control B to be stacked. Each HIGIG link runs at up to 42 Gbps, giving a combined rate of 84 Gbps, so the two switching chips can be seamlessly stacked and combined.
After stacking, the IEEE 802.3ad specification is used to aggregate the link from the switching chip of main control A to the 1st of the 28 GE channels of a service board with the link from the switching chip of main control B to that same 1st GE channel; likewise, the links from the two switching chips to the 2nd GE channel are aggregated, and so on, until the links from the two switching chips to the 28th GE channel are aggregated.
The advantage is that, after link aggregation, the bandwidth of the management channel between a main control and a service board is doubled. Moreover, when the management channel from one main control to a service board fails, the service board can still be managed through the management channel from the other main control, without an active/standby switchover; the time such a switchover would take is saved, services continue uninterrupted, and system reliability is improved.
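As an illustration only, the following hypothetical C sketch enumerates the 28 per-channel aggregation groups, one IEEE 802.3ad group per GE channel with one member port from each main control's switching chip (the real configuration interface of the switching chip is not described in the disclosure, and the 1:1 port-to-channel mapping is an assumption):

```c
/* Hypothetical sketch: pair the GE channel from main control A's switching
 * chip with the same-numbered GE channel from main control B's switching
 * chip into one IEEE 802.3ad link aggregation group per service-board channel. */
#include <stdio.h>

#define GE_CHANNELS 28

struct lag_group {
    int ge_channel;        /* 1..28, GE channel index toward the service board */
    int member_port_a;     /* port on main control A's switching chip          */
    int member_port_b;     /* port on main control B's switching chip          */
};

int main(void)
{
    struct lag_group lag[GE_CHANNELS];

    for (int ch = 0; ch < GE_CHANNELS; ch++) {
        lag[ch].ge_channel    = ch + 1;
        lag[ch].member_port_a = ch;    /* assumed 1:1 port-to-channel mapping */
        lag[ch].member_port_b = ch;
    }

    printf("configured %d link aggregation groups, 2 members each\n",
           GE_CHANNELS);
    printf("group %d pairs port %d on A with port %d on B\n",
           lag[0].ge_channel, lag[0].member_port_a, lag[0].member_port_b);
    return 0;
}
```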
According to the embodiments of the invention, applying the multiprocessor management plane architecture to the data center aggregation core switch improves the hardware performance of the control plane, solves the problem of insufficient hardware resources on the control plane, meets users' resource requirements on the management plane, and allows users to customize the management plane according to their own needs. In addition, the switching chips of the active and standby main controls can be seamlessly stacked and combined, expanding the number of channels, raising the channel transmission rate, supporting up to 28 service boards, facilitating service-board expansion, better meeting the needs of data centers in the era of big data and cloud computing, and improving system performance.
In addition, as shown in fig. 1, an embodiment of the present invention further provides a data center aggregation core switch, including a backplane and service boards communicatively connected to the backplane, where the backplane is the backplane with the control plane described above; its architecture and implementation principle are described in the above embodiment and are not repeated here.
According to the embodiments of the invention, applying the multiprocessor management plane architecture to the data center aggregation core switch improves the hardware performance of the control plane, solves the problem of insufficient hardware resources on the control plane, meets users' resource requirements on the management plane, and allows users to customize the management plane according to their own needs. In addition, the switching chips of the active and standby main controls can be seamlessly stacked and combined, expanding the number of channels, raising the channel transmission rate, supporting up to 28 service boards, facilitating service-board expansion, better meeting the needs of data centers in the era of big data and cloud computing, and improving system performance.
The above description is only for the preferred embodiment of the present invention and is not intended to limit the scope of the present invention, and all equivalent structures or flow transformations made by the present specification and drawings, or applied directly or indirectly to other related arts, are included in the scope of the present invention.