CN105763488B - Data center aggregation core switch and backboard thereof - Google Patents

Data center aggregation core switch and backboard thereof

Info

Publication number
CN105763488B
CN105763488B (application CN201410788505.9A; published as CN105763488A)
Authority
CN
China
Prior art keywords
cpu
data center
chip
channel
core switch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410788505.9A
Other languages
Chinese (zh)
Other versions
CN105763488A (en)
Inventor
班屹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing ZTE New Software Co Ltd
Original Assignee
Nanjing ZTE New Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing ZTE New Software Co Ltd
Priority to CN201410788505.9A
Publication of CN105763488A
Application granted
Publication of CN105763488B
Legal status: Active
Anticipated expiration

Abstract

The invention relates to a backplane for a data center aggregation core switch, comprising a control plane with an active master control and a standby master control. Each master control is connected to the service boards through GE channels, and each master control is configured with one or more CPU processors. The invention improves the hardware performance of the control plane, solves the problem of insufficient hardware resources on the control plane of the data center aggregation core switch, better meets users' resource requirements on the management plane, and allows users to customize the management plane according to their own needs. In addition, the switching chips of the active and standby master controls can be seamlessly stacked and combined, which expands the number of channels, raises the channel transmission rate, and supports up to 28 service boards, facilitating service-board expansion and meeting the requirements of data centers in the era of big data and cloud computing.

Description

Data center aggregation core switch and backboard thereof
Technical Field
The present invention relates to the field of communications technologies, and in particular to a data center aggregation core switch using a multiprocessor management-plane architecture, and a backplane thereof.
Background
Currently, cloud computing is widely used in the IT industry. In a cloud-computing environment, computing, network, and storage resources can all be delivered to users as services, making resource sharing more flexible and far-reaching, reducing users' hardware purchase and upgrade/maintenance costs, and allowing end users to access cloud-hosted applications from any location using a variety of clients. In theory, cloud-computing users can achieve the same or even better experience than the traditional approach of running applications on the user's local terminal. Achieving this places ever more new requirements on the data centers that support cloud computing.
The data center aggregation core switch is a switch product designed for cloud computing. It converges the heterogeneous networks of the data center onto a single fabric, carrying IP switching, storage, and high-performance computing traffic uniformly over a new lossless Ethernet, thereby simplifying the network structure and its management. At the same time, the data center aggregation core switch can also provide virtual switching capacity, meeting the requirements of cloud computing and network virtualization.
A common data center aggregation core switch consists of a control plane, a data plane, and a monitoring plane. The control plane is responsible for routing-protocol interaction and computation, system management, control of packet reception, transmission, and processing, and communication with a host computer. Most existing switches use a single-CPU control plane, whose hardware resources are limited in terms of both running speed and computing capacity. In addition, the out-of-band management channels between the control plane and the service plane are limited in both channel count and transmission rate, which constrains the switching capacity and scalability of the service plane.
Therefore, to better meet user requirements and provide more management resources to users, a multi-CPU management-plane architecture needs to be applied to the data center aggregation core switch.
Disclosure of Invention
The main purpose of the invention is to provide a data center aggregation core switch and a backplane thereof, so as to solve the problem of insufficient hardware resources on the control plane of the data center aggregation core switch and to better meet users' resource requirements on the management plane.
To achieve the above object, the present invention provides a backplane for a data center aggregation core switch, including: a control plane comprising an active master control and a standby master control; each master control is connected to the service boards through GE channels; and each master control is configured with one or more CPU processors.
Preferably, the number of CPU processors on each master control can be configured as one, two, or four.
Preferably, when one CPU processor is configured, it is the main CPU and communicates with the Patburg south-bridge chip over the DMI bus. When two CPU processors are configured, one is the main CPU and the other is a slave CPU; the main CPU communicates with the Patburg south-bridge chip over the DMI bus, and the main CPU and the slave CPU communicate over the QPI bus. When four CPU processors are configured, one is the main CPU and the other three are slave CPUs; the main CPU communicates with the Patburg south-bridge chip over the DMI bus, and the main CPU and the slave CPUs communicate over the QPI bus.
Preferably, each master control is further configured with an Ethernet controller and a switching chip, wherein:
the Ethernet controller is communicatively connected to the main CPU through a PCIE interface and is managed by the main CPU; the Ethernet controller provides two Ethernet channels, one for communicating with the switching chip of the local master control and the other for communicating with the switching chip of the peer master control; and the switching chip is connected to the service boards through the GE channels.
Preferably, the switching chip provides two HIGIG links for communication between the switching chips of the active and standby master controls.
Preferably, the switching chip provides up to 28 GE channels.
Preferably, the switching chip of the active main control and the switching chip of the standby main control are stacked through the two HIGIG links.
Preferably, after stacking, the link from the switching chip of the active master control to GE channel 1 of the service board and the link from the switching chip of the standby master control to GE channel 1 are aggregated; likewise for GE channel 2; and so on, up to GE channel 28 of the service board.
Preferably, the highest rate of each HIGIG link is 42 Gbps.
An embodiment of the invention also provides a data center aggregation core switch, comprising a backplane and service boards communicatively connected to the backplane, where the backplane is the backplane described above.
By applying a multiprocessor management-plane architecture to the data center aggregation core switch, the switch and backplane provided by the embodiments of the invention improve the hardware performance of the control plane, solve the problem of insufficient hardware resources on the control plane, better meet users' resource requirements on the management plane, and allow users to customize the management plane according to their own needs. In addition, the switching chips of the active and standby master controls can be seamlessly stacked and combined, expanding the number of channels, raising the channel transmission rate, and further improving system performance.
Drawings
Fig. 1 is a schematic diagram of the control plane of a preferred embodiment of a data center aggregation core switch of the present invention;
fig. 2 is a schematic diagram of a system in which a data center aggregation core switch has 1 CPU according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a system in which a data center aggregation core switch has 2 CPUs according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a system in which a data center aggregation core switch has 4 CPUs according to an embodiment of the present invention.
To make the technical solution of the present invention clearer, a detailed description is given below with reference to the accompanying drawings.
Detailed Description
The solution of the embodiments of the invention is mainly as follows: a multiprocessor management-plane architecture is applied to the data center aggregation core switch, which improves the hardware performance of the control plane, solves the problem of insufficient hardware resources on the control plane, better meets users' resource requirements on the management plane, and allows users to customize the management plane according to their own needs. In addition, the switching chips of the active and standby master controls can be seamlessly stacked and combined, expanding the number of channels, raising the channel transmission rate, and improving system performance.
With the rise of cloud computing, the performance requirements on the data center aggregation core switch keep increasing. In a data center switch, besides improving data-plane packet forwarding and service processing, the data-processing and computing capability of the control plane must also be improved accordingly. Existing data center switches emphasize only the expansion of data-plane switching capacity and the improvement of service-processing capability, while neglecting the hardware performance of the control plane, and therefore cannot meet users' demanding customization requirements.
To solve the above problem, the invention applies a multiprocessor architecture to the data center aggregation core switch. The invention takes the control/management plane of the switch as its object and improves overall hardware performance at the level of the hardware processors; meanwhile, the number of CPUs can be customized according to user requirements, i.e., on the same device either a single-CPU or a multi-CPU hardware configuration may be selected. Thus, a manufacturer of data center aggregation core switches needs to develop only one control-plane system in hardware, yet can configure it into high-end, mid-range, and low-end products, which reduces research and development cost. As for the out-of-band management channels between the control plane and the service plane, the number of out-of-band channels is expanded in hardware so that the control plane can manage up to 28 service boards simultaneously, which facilitates service-board expansion and meets the requirements of data centers in the era of big data and cloud computing. On the management data link, a 10G Ethernet channel provides a standby link for the management channel. Using HIGIG stacking and the IEEE 802.3ad specification, the links from control plane A (the active control plane) and control plane B (the standby control plane) to the same service board are aggregated, which raises the management-channel bandwidth and enhances system reliability.
Specifically, referring to fig. 1, fig. 1 is a schematic diagram of a management plane of a preferred embodiment of a data center aggregation core switch of the present invention.
As shown in fig. 1, the data center aggregation core switch includes a backplane and a service board communicatively connected to the backplane, where:
the embodiment of the invention provides a backboard of a data center convergence core switch, which comprises: a control plane comprising two identical masters, master a and master B. One of the master control is the master control (corresponding to the master control plane), and the other is the standby master control (corresponding to the standby control plane), and the master control and the standby control are in a hot standby state.
Each master controller is connected with the service board card through a GE channel provided by a switching chip (such as a BCM56546 chip), and can support 28 service board cards to the maximum; wherein: each master may be configured with one or more CPU processors, i.e., the master employs a multi-processor configurable architecture.
In this embodiment, the system may use the Intel Sandy Bridge-EN platform from Intel Corporation, which comprises an Intel Sandy Bridge-EN family CPU and the Patburg south-bridge chip.
In the control plane of the data center aggregation core switch, the number of CPU processors of each master control can be configured as 1, 2, or 4.
The concrete configuration is as follows:
(1) When the number of CPU processors is 1, the CPU is the main CPU and communicates with the Patburg south-bridge chip over the DMI (Direct Media Interface) bus;
(2) when the number of CPU processors is 2, one is the main CPU and the other is a slave CPU; the main CPU communicates with the Patburg south-bridge chip over the DMI bus, and the main and slave CPUs communicate over the QPI (QuickPath Interconnect) bus;
(3) when the number of CPU processors is 4, one is the main CPU and the other 3 are slave CPUs; the main CPU communicates with the Patburg south-bridge chip over the DMI bus, and the main CPU and the slave CPUs communicate over the QPI bus.
For CPU configuration, each CPU has two pins, SOCKET_ID[0] and SOCKET_ID[1], that set the CPU's socket ID so that each CPU can determine its role in the system at startup.
Each of SOCKET_ID[0] and SOCKET_ID[1] can be configured to 1 by pulling it up to the VTT supply, or to 0 by leaving it floating (NC), giving two states per pin.
SOCKET_ID[1:0] of a CPU can therefore encode four IDs: 00, 01, 10, and 11.
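The two-pin strapping just described can be summarized in a short sketch. This is an illustrative model only; the constant names and the `socket_id` helper are not from the patent, which specifies only the pull-up/floating encoding.

```python
# Hypothetical sketch of the SOCKET_ID strapping scheme described above.
PULL_UP_TO_VTT = 1   # pin tied to the VTT supply through a resistor -> reads 1
FLOATING_NC = 0      # pin left unconnected (NC) -> reads 0

def socket_id(pin1: int, pin0: int) -> int:
    """Combine SOCKET_ID[1] and SOCKET_ID[0] into a 2-bit socket ID."""
    return (pin1 << 1) | pin0

# The four CPU roles in the 4-CPU configuration (CPU0 is the master):
cpu_ids = {
    "CPU0": socket_id(FLOATING_NC, FLOATING_NC),        # 0b00
    "CPU1": socket_id(FLOATING_NC, PULL_UP_TO_VTT),     # 0b01 (R1 pulls SOCKET_ID[0] up)
    "CPU2": socket_id(PULL_UP_TO_VTT, FLOATING_NC),     # 0b10 (R2 pulls SOCKET_ID[1] up)
    "CPU3": socket_id(PULL_UP_TO_VTT, PULL_UP_TO_VTT),  # 0b11 (R3 and R4 pull both up)
}
```

Each CPU reads its two pins at startup and derives a unique ID, which is how it distinguishes its role in the system.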
In the control plane of a given data center aggregation core switch, the circuit needs to be laid out only once, in the 4-CPU form. In practice, depending on configuration requirements, either 1 CPU, 2 CPUs, or 4 CPUs together with their peripheral circuits are selectively soldered. That is, only one control plane needs to be developed, yet 3 product forms are possible, achieving flexible configuration, reducing research and development cost, and increasing economic benefit.
More specifically, fig. 2 shows the application of 4 CPUs in the control plane, wherein:
CPU0, CPU1, CPU2, and CPU3 of the control plane of the data center aggregation core switch use Intel Sandy Bridge platform processors, with CPU0 as the master processor and CPU1, CPU2, and CPU3 as the slave processors.
CPU0 of the control plane communicates with the Patburg south-bridge chip via the DMI interface.
CPU0 and CPU1 communicate over the QPI bus; CPU0 and CPU3 communicate over the QPI bus; CPU1 and CPU2 communicate over the QPI bus; and CPU2 and CPU3 communicate over the QPI bus.
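The QPI wiring just listed (0-1, 1-2, 2-3, 3-0) forms a four-node ring, so any two CPUs are at most two QPI hops apart. A small sketch, with an illustrative `hops` helper not taken from the patent:

```python
# QPI links between the four CPUs, as listed in the description above.
qpi_links = {(0, 1), (1, 2), (2, 3), (0, 3)}

def hops(src: int, dst: int) -> int:
    """Minimum number of QPI links between two CPUs on the 4-node ring."""
    d = abs(src - dst) % 4
    return min(d, 4 - d)

# Worst-case distance between any pair of CPUs on this topology.
worst = max(hops(a, b) for a in range(4) for b in range(4))
```

This is why the master CPU0 can reach every slave either directly (CPU1, CPU3) or through one intermediate hop (CPU2).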
CPU0 of the control plane of the data center aggregation core switch leaves its SOCKET_ID[0] pin floating (NC, reading 0) and its SOCKET_ID[1] pin floating (NC, reading 0), so SOCKET_ID[1:0] of CPU0 is 00.
CPU1 pulls its SOCKET_ID[0] pin up to VTT through resistor R1 (reading 1) and leaves its SOCKET_ID[1] pin floating (NC, reading 0), so SOCKET_ID[1:0] of CPU1 is 01.
CPU2 leaves its SOCKET_ID[0] pin floating (NC, reading 0) and pulls its SOCKET_ID[1] pin up to VTT through resistor R2 (reading 1), so SOCKET_ID[1:0] of CPU2 is 10.
CPU3 pulls its SOCKET_ID[0] pin up to VTT through resistor R3 (reading 1) and its SOCKET_ID[1] pin up to VTT through resistor R4 (reading 1), so SOCKET_ID[1:0] of CPU3 is 11.
According to user requirements, the 4-CPU architecture of the control plane of the data center aggregation core switch can be configured into a 1-CPU mode by selective soldering, as shown in fig. 3. Solid lines in fig. 3 indicate components that are soldered and dashed lines indicate components that are not. In fig. 3, only CPU0 and the Patburg south-bridge chip are soldered; CPU1, CPU2, CPU3, R1, R2, R3, and R4 are not. This yields the 1-CPU mode.
Likewise, according to user requirements, the 4-CPU architecture can be configured into a 2-CPU mode by selective soldering, as shown in fig. 4. In fig. 4, solid lines indicate soldered components and dashed lines unsoldered ones. Only CPU0, CPU1, R1, and the Patburg south-bridge chip are soldered; CPU2, CPU3, R2, R3, and R4 are not. This yields the 2-CPU mode.
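The three populated variants of the single 4-CPU layout can be captured as a simple bill-of-materials sketch. The part names mirror the description (CPUs, pull-up resistors R1-R4, south-bridge chip); the `populated_parts` helper is illustrative, not from the patent.

```python
# Illustrative sketch of the selective-soldering scheme described above:
# one 4-CPU circuit layout, three populated build variants.
ALL_PARTS = {"CPU0", "CPU1", "CPU2", "CPU3", "R1", "R2", "R3", "R4", "southbridge"}

def populated_parts(num_cpus: int) -> set:
    """Return which parts are soldered for a 1-, 2-, or 4-CPU build."""
    if num_cpus == 1:
        return {"CPU0", "southbridge"}                  # fig. 3 variant
    if num_cpus == 2:
        return {"CPU0", "CPU1", "R1", "southbridge"}    # fig. 4 variant
    if num_cpus == 4:
        return set(ALL_PARTS)                           # fig. 2 variant
    raise ValueError("supported configurations are 1, 2, or 4 CPUs")
```

One layout with three population options is what lets a single control-plane design ship as low-end, mid-range, and high-end products.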
Further, the control plane of the data center aggregation core switch in this embodiment adopts a highly reliable, easily expandable, high-speed out-of-band management channel, as follows:
the main control system uses 10GbE dual-port Ethernet controllers, and each main control system is provided with an Ethernet controller and a switching chip; wherein:
the Ethernet controller is in communication connection with the main CPU through the PCIE interface, the main CPU manages the Ethernet controller, and the exchange chip is connected with the service board card through the GE channel.
The 10GbE dual-port Ethernet controller is provided with 2 paths of 10GbE Ethernet channels, wherein 1 path of 10GbE Ethernet channel is used for communicating with a switching chip of a local terminal main control, and the other 1 path of 10GbE Ethernet channel is used for communicating with a switching chip of an opposite terminal main control. The advantage of this is that a data channel with a rate of up to 10Gbps is provided between the CPU and the switch chip, which improves the management efficiency of the whole device; a data channel with the speed of 10Gbps is provided between the CPU of the local terminal and the switching chip of the main control of the opposite terminal, so that the management link channel between the main control and the service board card is a standby choice.
For example, when the 10G Ethernet channel between the 10GbE dual-port Ethernet controller of master control A and the switching chip of master control A fails, the CPU of master control A can no longer manage the service boards over the link: CPU of master control A -> 10GbE dual-port Ethernet controller of master control A -> switching chip of master control A -> 28 service boards. In that case it can instead manage them over the link: CPU of master control A -> 10GbE dual-port Ethernet controller of master control A -> switching chip of master control B -> 28 service boards. The advantage is that no active/standby switchover between master controls is needed and services are unaffected, improving the reliability of the whole system.
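The standby-path selection in that example can be sketched as follows. The two candidate management paths from master control A's CPU differ only in which switching chip they traverse; the names and the `select_path` helper are illustrative, not from the patent.

```python
# Sketch of the standby management path described above.
PRIMARY_PATH = ("cpu_A", "eth_ctrl_A", "switch_chip_A", "service_boards")
STANDBY_PATH = ("cpu_A", "eth_ctrl_A", "switch_chip_B", "service_boards")

def select_path(local_channel_ok: bool):
    """Use the local switching chip if its 10G channel is up, else the peer's."""
    return PRIMARY_PATH if local_channel_ok else STANDBY_PATH
```

Note that both paths start at master control A's CPU: only the traversed switching chip changes, so no active/standby master switchover is required.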
In this embodiment, each master control uses the switching chip as the out-of-band management switching chip; the chip provides 28 GE channels, so each master control can manage up to 28 service boards.
The switching chip also provides 2 10GbE Ethernet channels: one communicates with the 10GbE Ethernet channel of the local board, and the other, a 10GBASE-KR link, communicates with the switching chip of the peer master control. The chip further provides two HIGIG links (each with a rate of up to 42 Gbps) for communication between the switching chips of the active and standby master controls.
Furthermore, over the two HIGIG links between master control A and master control B, the switching chip of master control A and the switching chip of master control B can be stacked; each HIGIG link runs at up to 42 Gbps, for a total of 84 Gbps, so the two switching chips can be seamlessly stacked and combined.
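The bandwidth figures above are easy to check: two HIGIG links at up to 42 Gbps each give an 84 Gbps stacking trunk. The comparison against the aggregate GE management bandwidth is a derived observation, not stated in the patent.

```python
# Quick check of the stacking bandwidth quoted above.
HIGIG_LINKS = 2
HIGIG_RATE_GBPS = 42
GE_CHANNELS = 28

stack_bw = HIGIG_LINKS * HIGIG_RATE_GBPS  # total stacking bandwidth: 84 Gbps
mgmt_bw = GE_CHANNELS * 1                 # 28 GE channels x 1 Gbps = 28 Gbps
headroom = stack_bw - mgmt_bw             # trunk headroom over all GE channels
```

The 84 Gbps trunk thus comfortably exceeds the combined 28 Gbps of all GE management channels, which is what makes the seamless stacking viable.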
After stacking, using the IEEE 802.3ad specification, the link from the switching chip of master control A to the 1st of the service board's 28 GE channels and the link from the switching chip of master control B to the same 1st GE channel are aggregated; likewise for the 2nd GE channel; and so on, up to the 28th GE channel.
The advantage is that after link aggregation the bandwidth of the management channel between the master controls and the service boards is doubled; and when the management channel from one master control to a service board fails, the board can still be managed through the other master control's channel without an active/standby switchover, saving the switchover time, so services continue uninterrupted and system reliability is improved.
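The per-channel aggregation scheme above can be sketched as one IEEE 802.3ad group per GE channel, each containing the link from master control A's switching chip and the link from master control B's. The data structures and helper names are illustrative, not from the patent.

```python
# Sketch of per-GE-channel link aggregation between the two master controls.
def build_lag_groups(num_channels: int = 28):
    """One aggregation group per GE channel, with both masters' member links."""
    return {
        ch: [f"A->GE{ch}", f"B->GE{ch}"]  # links from each master's switching chip
        for ch in range(1, num_channels + 1)
    }

def usable_bandwidth_gbps(group, failed_links=()):
    """Each GE member contributes 1 Gbps; failed members contribute nothing."""
    return sum(1 for link in group if link not in failed_links)

groups = build_lag_groups()
```

A healthy group carries 2 Gbps; if master control A's member link fails, the group degrades to 1 Gbps but stays up, which is the no-switchover behavior the description claims.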
By applying the multiprocessor management-plane architecture to the data center aggregation core switch, the embodiment of the invention improves the hardware performance of the control plane, solves the problem of insufficient hardware resources on the control plane, better meets users' resource requirements on the management plane, and allows users to customize the management plane according to their own needs. In addition, the switching chips of the active and standby master controls can be seamlessly stacked and combined, expanding the number of channels and raising the channel transmission rate; up to 28 service boards are supported, facilitating service-board expansion, meeting the requirements of data centers in the era of big data and cloud computing, and improving system performance.
In addition, as shown in fig. 1, an embodiment of the invention further provides a data center aggregation core switch, comprising a backplane and service boards communicatively connected to the backplane, where the backplane is a backplane with a control plane as described above; its architecture and implementation principle follow the above embodiment and are not repeated here.
This switch achieves the benefits described above: improved control-plane hardware performance, a management plane customizable to user needs, seamless stacking of the active and standby switching chips with expanded channel count and transmission rate, and support for up to 28 service boards, meeting the requirements of data centers in the era of big data and cloud computing.
The above description covers only preferred embodiments of the invention and does not limit its scope; all equivalent structures or process transformations derived from this specification and the drawings, whether applied directly or indirectly in other related fields, fall within the scope of the invention.

Claims (9)

7. The backplane of the data center aggregation core switch of claim 6, wherein after stacking, the link from the switching chip of the primary master control to the 1st GE channel of the service board and the link from the switching chip of the backup master control to the 1st GE channel are aggregated; likewise for the 2nd GE channel; and so on, up to the 28th GE channel of the service board.
CN201410788505.9A | 2014-12-17 | Data center aggregation core switch and backboard thereof | Active | CN105763488B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410788505.9A (CN105763488B (en)) | 2014-12-17 | 2014-12-17 | Data center aggregation core switch and backboard thereof

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201410788505.9A (CN105763488B (en)) | 2014-12-17 | 2014-12-17 | Data center aggregation core switch and backboard thereof

Publications (2)

Publication Number | Publication Date
CN105763488A (en) | 2016-07-13
CN105763488B (en) | 2020-08-25

Family

ID=56340193

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201410788505.9A (Active, CN105763488B (en)) | Data center aggregation core switch and backboard thereof | 2014-12-17 | 2014-12-17

Country Status (1)

Country | Link
CN | CN105763488B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106453153A (en) * | 2016-08-29 | 2017-02-22 | 河南铭视科技股份有限公司 | 24-channel optical switch circuit
CN108234308B (en) * | 2016-12-14 | 2022-04-01 | 迈普通信技术股份有限公司 | Distributed equipment internal communication system and method
CN108092780A (en) * | 2018-01-04 | 2018-05-29 | 河南铭视科技股份有限公司 | A data center convergence switch
CN111030950B (en) * | 2019-11-30 | 2021-07-27 | 苏州浪潮智能科技有限公司 | Stack switch topology construction method and device
CN112332942A (en) * | 2020-12-02 | 2021-02-05 | 天津光电通信技术有限公司 | Master control backup equipment and method in optical signal convergence processing equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1725693A (en) * | 2005-05-08 | 2006-01-25 | 杭州华为三康技术有限公司 | Management information interaction system and dedicated interface card and main control card in network equipment
CN1859022A (en) * | 2006-03-24 | 2006-11-08 | 华为技术有限公司 | Communication device and method for realizing master control board and service board master and slave conversion
CN1916849A (en) * | 2006-09-04 | 2007-02-21 | 华为技术有限公司 | Method for initializing a multiprocessor system, and multiprocessor system
CN101052013A (en) * | 2007-05-22 | 2007-10-10 | 杭州华三通信技术有限公司 | Method and system for realizing network equipment internal managing path
CN101197851A (en) * | 2008-01-08 | 2008-06-11 | 杭州华三通信技术有限公司 | A method and system for realizing centralized control plane and distributed data plane
CN102185753A (en) * | 2011-01-30 | 2011-09-14 | 广东佳和通信技术有限公司 | Device for realizing dual-backup switching of Ethernet link inside communication equipment
CN103188173A (en) * | 2011-12-28 | 2013-07-03 | 迈普通信技术股份有限公司 | Switch equipment
CN103763219A (en) * | 2013-12-30 | 2014-04-30 | 上海斐讯数据通信技术有限公司 | Method for stacking chips of a switch
CN104038359A (en) * | 2013-03-06 | 2014-09-10 | 中兴通讯股份有限公司 | Virtual exchange stack system managing method and virtual exchange stack system managing device
CN104065499A (en) * | 2013-03-19 | 2014-09-24 | 鼎点视讯科技有限公司 | Main control board, master-standby system, information backup method and device


Also Published As

Publication number | Publication date
CN105763488A (en) | 2016-07-13

Similar Documents

Publication | Title
US9300574B2 (en) | Link aggregation emulation for virtual NICs in a cluster server
US9264346B2 (en) | Resilient duplicate link aggregation emulation
US8843688B2 (en) | Concurrent repair of PCIE switch units in a tightly-coupled, multi-switch, multi-adapter, multi-host distributed system
US12223358B2 (en) | Connecting accelerator resources using a switch
CN101710314B (en) | High-speed peripheral component interconnection switching controller and realizing method thereof
CN105763488B (en) | Data center aggregation core switch and backboard thereof
EP3590046A1 (en) | Dynamic partition of PCIe disk arrays based on software configuration/policy distribution
EP2605451B1 (en) | Node controller link switching method, processor system and node
US20160292115A1 (en) | Methods and apparatus for IO, processing and memory bandwidth optimization for analytics systems
US11232006B2 (en) | Server system
US20240357010A1 (en) | Server system
WO2015131516A1 (en) | Distributed intelligent platform management bus connection method and ATCA frame
CN105099776A (en) | Cloud server management system
CN119883988B (en) | A server and control method
CN102103471A (en) | Data transmission method and system
CN102763087A (en) | Method and system for realizing interconnection fault-tolerance between CPUs
CN105824374A (en) | Framework of binary star type server
CN1964286B (en) | Master device with dual CPU
EP3355525B1 (en) | Computing apparatus, node device, and server
CN102880583A (en) | Device and method for configuring dynamic link of multi-way server
CN207022032U (en) | A business line card and communication system based on PCIE bus backplanes
CN107122268B (en) | NUMA-based multi-physical-layer partition processing system
CN119109890A (en) | A network device and dual master control switching method
CN117092902A (en) | Multi-data channel backboard, multi-data channel management method and system
CN105259979A (en) | Mixed insertion blade server

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right

Effective date of registration: 2020-07-15

Address after: 210000, No. 68 Bauhinia Road, Yuhuatai District, Nanjing, Jiangsu

Applicant after:Nanjing Zhongxing Software Co.,Ltd.

Address before: 518057, Legal Affairs Department, ZTE Building, Keji Road South, Hi-tech Industrial Park, Nanshan District, Guangdong

Applicant before:ZTE Corp.

GR01 | Patent grant
