
Communication method, device, equipment, system and readable storage medium

Info

Publication number
CN119011386A
Authority
CN
China
Prior art keywords
clusters
application
computing node
instance
configuration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310980555.6A
Other languages
Chinese (zh)
Inventor
付萌
杨旭炜
高家睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd
Priority to PCT/CN2024/093508 (published as WO2024235272A1)
Publication of CN119011386A
Legal status: Pending (current)


Abstract


The present application provides a communication method, apparatus, device, system, and readable storage medium, relating to the field of communications. The method includes: when issuing configuration table entries to computing nodes, clustering the multiple instances contained in at least two computing nodes to obtain at least two clusters; determining a configuration-entry forwarding relationship based on the traffic interaction information between the at least two clusters, the forwarding relationship indicating that the configuration table entries of other clusters that have traffic interaction with the cluster of a first computing node are to be issued to that first computing node; and sending the configuration table entries to the at least two computing nodes according to the forwarding relationship. In this way, there is no need to issue the configuration table entries of all instances, nor to issue configuration table entries indirectly through a forwarding node, which reduces communication delay and improves overall communication efficiency.

Description

Communication method, device, equipment, system and readable storage medium
The present application claims priority from the Chinese patent application filed on May 15, 2023, with application number 202310551806.9 and entitled "Communication method, apparatus, device, system, and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of communications, and in particular, to a communication method, apparatus, device, system, and readable storage medium.
Background
In a cloud platform built on a software-defined network (SDN), configuration must be issued to the data-plane devices through the control plane whenever the cloud platform is used to create a network, apply for resources, or change the network. Conventional configuration issuing includes full-volume issuing and on-demand issuing. In full-volume issuing, the controller issues the configuration table entries of all instances in the cloud platform to the virtual switch of a computing node for forwarding. In on-demand issuing, a forwarding node records the configuration table entries of all instances in the cloud platform, and matches and issues the configuration table entries of individual instances as traffic arrives.
However, with full-volume issuing, when the number of configuration table entries is large it takes a long time before the entries of the instances to be accessed first take effect; with on-demand issuing, traffic must be forwarded through the forwarding node, so the first-packet delay is large. The conventional configuration issuing methods therefore suffer from low communication efficiency.
Disclosure of Invention
The embodiments of the present application provide a communication method, apparatus, device, system, and readable storage medium, which can address the low efficiency of issuing configuration table entries during configuration delivery.
In a first aspect, a communication method is provided. The communication method includes the following steps. First, the multiple instances contained in at least two computing nodes are clustered to obtain at least two class clusters. Then, a configuration-entry forwarding relationship is determined according to the traffic interaction information between the at least two class clusters; that is, the configuration table entries of other class clusters that have traffic interaction with the class cluster of a first computing node are issued to the first computing node, the first computing node being one of the at least two computing nodes. Finally, the configuration table entries are sent to the at least two computing nodes according to the configuration-entry forwarding relationship.
In a possible implementation of the present application, each of the at least two class clusters includes at least one instance. The configuration entry forwarding relationship is used to indicate to which compute node the configuration entry for each instance is issued. Issuing a configuration entry refers to issuing the configuration entry of an instance to a data plane device (e.g., virtual switch) of a computing node to which the instance corresponds.
Optionally, an instance is a container (for example, a Docker container), and a class cluster is an application to which multiple containers jointly belong.
Based on this communication method, the traffic interaction between the class clusters to which the instances belong determines which class clusters each class cluster needs to access, and for each class cluster the configuration table entries are issued to the computing nodes hosting the class clusters it needs to communicate with, so that the instances contained in the class clusters can access each other. In this way, unlike full-volume issuing, which in a large-scale virtual private cloud must push a huge number of configuration table entries to a computing node, the method does not need to issue the configuration table entries of all instances when a computing node is cold-started. Nor does the method need to issue configuration table entries indirectly through a forwarding node, whereas on-demand issuing requires the entries to be learned from a forwarding node that records all configuration table entries. Therefore, compared with full-volume issuing and on-demand issuing, the communication method reduces the network delay caused by the long time needed for configuration table entries to take effect or by detouring inter-instance traffic through a forwarding node, and improves overall communication efficiency.
As one possible implementation, the configuration-entry forwarding relationship includes: issuing to the first computing node the configuration table entries of the instances contained in its corresponding class clusters, where the corresponding class clusters are the class clusters that have traffic interaction with the class clusters to which the instances contained in the first computing node belong.
As one possible implementation, the configuration-entry forwarding relationship further includes: issuing to the first computing node the configuration table entries of the instances contained in the first computing node.
In this way, the traffic interaction information between class clusters indicates which class clusters' instances the computing node hosting a class cluster needs to communicate with, which determines the inter-instance access each computing node must support; accordingly, the configuration table entries of the instances in the class clusters that the computing node's instances may later access are issued to that computing node.
As a possible implementation, before the configuration-entry forwarding relationship is determined from the traffic interaction information between class clusters, the traffic interaction information between the at least two class clusters needs to be determined from the traffic interaction information between the multiple instances.
Optionally, the traffic interaction information between the multiple instances is obtained first, and then any two instances that exchange traffic are determined to belong to class clusters that interact, thereby obtaining the traffic interaction information between the at least two class clusters.
For example, traffic interaction information between multiple instances may be obtained from a flow table of a data plane device of a computing node.
As another example, traffic interaction information between multiple instances may be obtained from the connection tracking (CT) entries of a computing node.
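As a minimal illustration (not the patented implementation), the lifting of instance-level traffic observations to class-cluster-level interaction information could look as follows in Python; the instance-to-cluster map and the set of observed traffic pairs are assumed inputs.

def cluster_traffic(instance_to_cluster, instance_pairs):
    """Lift observed instance-level traffic pairs to class-cluster-level pairs."""
    interactions = set()
    for src, dst in instance_pairs:
        # Two class clusters interact if any of their instances exchanged traffic.
        interactions.add(frozenset((instance_to_cluster[src], instance_to_cluster[dst])))
    return interactions

# Assumed example data: instances 2 and 3 (application 1) talk to instance 1 (application 2).
mapping = {"inst1": "app2", "inst2": "app1", "inst3": "app1"}
pairs = {("inst2", "inst1"), ("inst3", "inst1")}
print(cluster_traffic(mapping, pairs))   # -> {frozenset({'app1', 'app2'})}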
The present application does not limit the clustering criterion for the multiple instances, as long as the criterion groups instances that implement the same function into the same class cluster; for example, multiple instances belonging to the same application are grouped into the same class cluster (also referred to as an application).
As a first possible implementation, instances of the same application or of similar applications are typically placed in the same security group, so the clustering criterion includes: the instances contained in the same security group are grouped into one class cluster.
As a second possible implementation, in a cloud-native scenario an application is deployed in only one subnet and applications cannot access each other across subnets, so the clustering criterion includes: the instances contained in the same subnet are grouped into one class cluster.
As a third possible implementation, the instances in the same auto scaling group or load balancing cluster are typically the same or similar applications, so the clustering criterion includes: the instances of the same auto scaling group or load balancing cluster are grouped into one class cluster.
As a fourth possible implementation, the multiple instances may be preconfigured with an application identifier or a network prefix that distinguishes the applications to which they belong, so the clustering criterion includes: the instances carrying the same application identifier or network prefix are grouped into one class cluster.
The application identifier includes an identifier in the network card that represents the application attribute, an identifier used by the container orchestration engine to indicate the application to which a container belongs, and the like.
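As an illustration of these clustering criteria, the grouping can be sketched as a simple key function over each instance's network configuration; the attribute names used below (security_group, subnet, app_id) are assumptions made for the sketch, not names defined by this application.

from collections import defaultdict

def cluster_instances(instances, key):
    """Group instance IDs by the chosen network-configuration attribute."""
    clusters = defaultdict(list)
    for inst in instances:
        clusters[inst[key]].append(inst["id"])
    return dict(clusters)

instances = [
    {"id": "inst1", "security_group": "sg-web", "subnet": "subnet-a", "app_id": "blue"},
    {"id": "inst2", "security_group": "sg-web", "subnet": "subnet-a", "app_id": "blue"},
    {"id": "inst3", "security_group": "sg-db",  "subnet": "subnet-b", "app_id": "green"},
]
print(cluster_instances(instances, "security_group"))
# {'sg-web': ['inst1', 'inst2'], 'sg-db': ['inst3']}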
In a second aspect, a communication apparatus is provided. The communication apparatus includes a clustering module, a matching module, and a transceiver module. The clustering module is configured to cluster the multiple instances contained in at least two computing nodes to obtain at least two class clusters, each of which contains at least one instance. The matching module is configured to determine a configuration-entry forwarding relationship according to the traffic interaction information between the at least two class clusters, the configuration-entry forwarding relationship being used to indicate that the configuration table entries of other class clusters having traffic interaction with the class cluster of a first computing node are issued to the first computing node, the first computing node being one of the at least two computing nodes. The transceiver module is configured to send the configuration table entries to the at least two computing nodes according to the configuration-entry forwarding relationship.
As one possible implementation, the configuration-entry forwarding relationship includes: issuing to the first computing node the configuration table entries of the instances contained in its corresponding class clusters, where the corresponding class clusters are the class clusters that have traffic interaction with the class clusters to which the instances contained in the first computing node belong.
As a possible implementation manner, the corresponding class cluster further includes a class cluster to which the instance included in the first computing node belongs.
As a possible implementation, the communication apparatus further includes a traffic processing module configured to determine the traffic interaction information between the at least two class clusters according to the traffic interaction information between the multiple instances.
Optionally, the traffic processing module is specifically configured to: obtain the traffic interaction information between the multiple instances; and determine that any two instances that exchange traffic belong to class clusters that interact, thereby obtaining the traffic interaction information between the at least two class clusters.
As a possible implementation manner, the clustering module is specifically configured to: and clustering the multiple instances according to network configuration of the multiple instances contained in the at least two computing nodes to obtain at least two class clusters.
Optionally, the clustering module is specifically configured to: and clustering the instances belonging to the same security group, subnet, elastic expansion group or load balancing cluster in the multiple instances to obtain at least two class clusters.
Optionally, the clustering module is specifically configured to: and clustering the instances carrying the same network prefix or application identifier in the multiple instances to obtain at least two class clusters, wherein the application identifier comprises an identifier used for representing the application attribute in the network card and an identifier used for dividing the application to which the container belongs by the container arrangement engine.
The communication apparatus according to the second aspect may be a communication device, for example a terminal device or a network device, or may be a chip (or chip system) or another component or assembly that can be disposed in a terminal device or network device, or may be an apparatus including the terminal device or network device; this is not limited in the present application.
In a third aspect, a communication device is provided, including a memory and a processor. The memory is configured to store a set of computer instructions which, when executed by the processor, cause the communication device to perform the operational steps of the communication method in any one of the possible designs of the first aspect.
In a fourth aspect, a communication system is provided. The communication system includes a controller and at least two computing nodes, the at least two computing nodes being deployed with a plurality of instances. The controller is configured to perform the operational steps of the communication method in any one of the possible designs of the first aspect, and the at least two computing nodes are configured to receive configuration entries sent by the controller.
In addition, the technical effects of the communication apparatus according to the second aspect, the technical effects of the communication device according to the third aspect, and the technical effects of the communication system according to the fourth aspect may refer to the technical effects of the communication method according to the first aspect, and will not be described herein.
In a fifth aspect, a readable storage medium is provided. The readable storage medium includes: computer programs or instructions; the computer program or instructions, when run on a computer, cause the computer to perform the communication method according to any one of the possible implementations of the first aspect.
In a sixth aspect, a computer program product is provided. The computer program product comprises a computer program or instructions which, when run on a computer, cause the computer to perform the communication method according to any one of the possible implementations of the first aspect.
Drawings
Fig. 1 is a schematic diagram of the configuration entry issuing flow of a conventional cloud platform;
Fig. 2 is a flow chart of full-volume issuing of configuration entries;
Fig. 3 is a flow chart of on-demand issuing of configuration entries;
Fig. 4 is a schematic diagram of a communication system according to the present application;
Fig. 5 is a schematic diagram of the layered structure of a cloud platform according to the present application;
Fig. 6 is a schematic flow chart of a communication method according to the present application;
Fig. 7 is a schematic diagram of a clustering result provided by the present application;
Fig. 8 is a schematic diagram of traffic interaction information between class clusters provided by the present application;
Fig. 9 is a schematic structural diagram of a communication system in a cloud-native application scenario provided by the present application;
Fig. 10 is a schematic structural diagram of a communication system in a conventional application scenario provided by the present application;
Fig. 11 is a schematic structural diagram of a communication apparatus according to the present application;
Fig. 12 is a schematic structural diagram of a communication device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. In the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or similar expressions refers to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The communication method provided by the embodiments of the present application can be applied to cloud platform management scenarios in the field of cloud computing. Techniques that may be involved in the present application are briefly described below.
(1) Software defined network
A software-defined network separates the control plane of the network from the data forwarding plane, so that the underlying hardware is controlled programmatically through a software platform in a centralized controller, enabling flexible on-demand allocation of network resources. In an SDN network, the network devices are only responsible for simple data forwarding and can use general-purpose hardware; the operating system originally responsible for control is extracted into an independent network operating system that adapts to different service characteristics, and the communication between the network operating system, the service characteristics, and the hardware devices can be implemented through programming.
(2) Cloud platform
A cloud platform, also referred to as a cloud computing platform, provides computing, networking, and storage capabilities based on hardware and software resources; that is, the platform provider combines the cloud (remote hardware resources) and computing (remote software resources) into a platform that provides various services to users.
Services provided by the cloud platform may include software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS), and the like. With SaaS, the service provider deploys application software on the servers of the cloud platform, and users subscribe to the application software service from the service provider over the Internet as needed. With PaaS, the development environment is provided as a service: the service provider offers the development environment, server platform, hardware resources, and other services, and users develop applications in the development environment provided by the cloud platform and share them with other users through the cloud platform and the Internet. With IaaS, the service provider offers a cloud infrastructure composed of multiple servers as a service, integrating memory, input/output devices, storage, and computing resources into a virtual resource pool that provides users with storage resources, virtualized servers, and the like.
In a cloud platform built on SDN technology, the control plane of the cloud platform acts like a brain and is responsible for orchestrating and scheduling the entire network. The cloud implements full-lifecycle management of the network, including device onboarding, configuration issuing, operation and maintenance monitoring, and network optimization, with unified management and unified operation and maintenance; intelligent analysis and diagnosis based on the running state of the whole network shortens the time during which services are affected and improves service stability.
(3) Configuration table entry
In the cloud platform, when a tenant creates a network, applies for resources, or changes the network, configuration needs to be issued to the data-plane devices through the control plane. For example, as shown in Fig. 1, when a tenant chooses to deploy a service on the cloud platform, the tenant first operates in the user interface to create a virtual private cloud (VPC) through an application program interface (API), creates one or more subnets, and purchases or leases one or more virtual machines (VMs), containers, or other instances configured in those subnets. The controller 110 of the control plane manages different computing nodes (such as computing node 120 and computing node 130 shown in Fig. 1), and the instances purchased or leased by the user may be distributed across these computing nodes. The multiple instances are configured in the same virtual private cloud through virtualization technology, and the configuration table entries of all instances in the virtual private cloud are stored in the controller. The controller needs to issue the configuration table entries of the network cards corresponding to all instances in the virtual private cloud to the virtual switches of the different computing nodes, so that the instances on different computing nodes can communicate with each other based on these configuration table entries.
The configuration table entries include a location table, access control lists (ACLs), security groups, and the like. After the configuration table entries are issued to the data plane, the data-plane devices enable one or more instances to communicate and perform the functions intended by the tenant.
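For illustration only, one possible in-memory shape of such a per-instance configuration table entry is sketched below in Python; the field names are assumptions drawn from the location/ACL/security-group description above and the network card information mentioned later, not a format defined by this application.

from dataclasses import dataclass

@dataclass
class ConfigEntry:
    instance_id: str      # instance (e.g., VM or container) the entry describes
    vpc_id: str           # virtual private cloud the instance belongs to
    ip: str               # IP address of the instance's network card
    mac: str              # MAC address of the network card
    host_node: str        # computing node where the instance runs (location information)
    security_group: str   # security-group / ACL rule set applied to the instance

entry = ConfigEntry("inst1", "vpc-1", "10.0.0.11", "fa:16:3e:00:00:01", "node-430", "sg-web")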
In a cloud-native scenario, to meet the needs of a large number of customers, a large number of containers are typically deployed in each virtual private cloud, and as customer business grows the number of containers may reach tens of millions or even hundreds of millions. For virtual private clouds at such a scale, the corresponding configuration table entries to be issued are massive, the containers scale elastically with changes in business volume, and the speed at which the related configuration table entries take effect directly affects the customer's services.
Existing solutions for issuing configuration table entries, such as full-volume issuing and on-demand issuing, suffer from long configuration effective times, increased first-packet latency, and similar problems, so the efficiency of issuing configuration table entries is low.
The full-scale delivery of configuration entries is described below in conjunction with FIG. 2.
As shown in fig. 2, the controller 210 maintains all configuration entries required for the cloud network data plane, and when a user performs operations such as creating and configuration changing on the network of the virtual private cloud, the controller issues the configuration entries to the data plane (computing node 220, computing node 230, and computing node n are shown in fig. 2), that is, all relevant virtual switches. Wherein the compute node 220 is provided with a virtual switch 1 and runs instance 1. The computing node 230 is provided with a virtual switch 2, and runs instance 2 and instance 3. The computing node n is provided with a virtual switch n and runs an instance n-1 and an instance n.
Instance 1 created by the user is the first instance of the corresponding virtual private cloud deployed on computing node 220, so the controller 210 needs to issue the full set of configuration entries to computing node 220 so that virtual switch 1 can forward the traffic sent by instance 1 to its destination instance, such as instance 2. The controller 210 also needs to issue the configuration entry corresponding to instance 1 to virtual switch 2 in computing node 230, so that virtual switch 2 can forward the messages sent by instance 2 to instance 1 on computing node 220.
The full volume down step of the configuration table entry in fig. 2 may include the following steps 201-204.
In step 201, a user creates an instance 1 through a user interface, and the controller 210 executes creation of a network card in the virtual private cloud and binds the network card to the instance 1.
Step 202, the controller 210 calculates configuration entries that need to be issued to virtual switches of different computing nodes of the data plane by the current request.
In step 203, the controller 210 issues all configuration entries in the virtual private cloud to the virtual switch of computing node 220, and issues the configuration entry of instance 1 to the virtual switches of the other relevant computing nodes.
In step 204, after the configuration entries in the virtual switch of other relevant computing nodes take effect, the traffic interaction between the instance 1 and other instances (e.g., instance 2, instance n, etc.) is implemented.
In step 203, all configuration entries are issued to the virtual switch of computing node 220. When the number of configuration entries is large, the effective time can reach tens of seconds or even minutes, so the entries corresponding to the instances that instance 1 needs to access take effect relatively late, and access between instance 1 and those instances succeeds only after a long time, which affects the user experience.
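The full-volume flow of steps 201-204 can be summarized by the following sketch; the controller, node, and entry objects are placeholders assumed for illustration, not APIs defined by this application.

def full_volume_issue(controller, new_instance, new_node, all_nodes):
    # Step 201: create a network card for the new instance and bind it.
    entry = controller.create_network_card(new_instance)            # placeholder call
    # Step 202: compute all entries of the virtual private cloud.
    vpc_entries = controller.entries_in_vpc(new_instance.vpc_id)    # placeholder call
    # Step 203: push the full set to the cold-started node's virtual switch,
    # and the new instance's entry to every other relevant node.
    new_node.virtual_switch.install(vpc_entries)
    for node in all_nodes:
        if node is not new_node:
            node.virtual_switch.install([entry])
    # Step 204: traffic between the new instance and the others works only after
    # the installed entries take effect, which scales with the size of the VPC.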
In a cloud platform created based on SDN technology, the main reason that configuration entries take a long time to take effect is that the higher-order services orchestrating the virtual private cloud cannot distinguish whether the scenario is a cold start, that is, whether configuration entries are being issued to a computing node for the first time. As a result, the configuration entries of the virtual private cloud are issued to the computing node in full, and the orchestration must explicitly wait for them to take effect, so the overall communication efficiency is low.
The on-demand manner of issuing configuration entries is described next in connection with fig. 3.
As shown in fig. 3, the controller 310 maintains all configuration entries required by the cloud network data plane (computing nodes 320 and 330 are shown in fig. 3), and when a user performs operations such as creating and configuring changes on the network of the virtual private cloud through the user interface, the controller 310 issues the configuration entries to the forwarding node 340, and the forwarding node 340 records all configuration entries. The functionality of forwarding node 340 to issue configuration entries as needed here may be implemented by a control plane or a data plane. Wherein the compute node 320 is provided with virtual switch 1 and runs instance 1. The compute node 330 is provided with virtual switch 2 and runs instance 2.
When instances access each other, the virtual switch matches a default entry and sends the traffic to the forwarding node 340; the forwarding node 340 forwards the traffic to the destination instance on the source's behalf and issues the corresponding configuration entry to the virtual switch. Subsequent access traffic between the instances is forwarded directly by the virtual switch of the computing node without passing through the forwarding node 340.
The on-demand down step of the configuration entry in fig. 3 may include the following steps 301-305.
In step 301, a user creates an instance 1 through a user interface, and the controller 310 executes creation of a network card in the virtual private cloud and binds the network card to the instance 1.
Step 302, the controller 310 sends the configuration table entry of instance 1 to the forwarding node 340; all configuration table entries are recorded in the forwarding node 340.
Step 303, when the instance 1 accesses the instance 2 for the first time, the virtual switch of the computing node 320 matches the default entry, and sends the traffic to the forwarding node 340.
Step 304, the forwarding node 340 sends the traffic to the instance 2 after matching the configuration table entry according to the traffic, and sends the instance 2 configuration table entry to the virtual switch of the computing node 320.
Step 305, when instance 1 accesses instance 2 again, computing node 320 matches the configuration entry of instance 2 on the virtual switch, sending traffic directly to instance 2 of computing node 330.
In steps 303-304 above, the traffic with which instance 1 accesses instance 2 does not go directly from computing node 320 to computing node 330 but must detour through the forwarding node, which results in a larger first-packet delay and affects overall communication efficiency.
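The on-demand path of steps 303-305 can likewise be sketched with placeholder objects; it is only meant to illustrate why the first packet detours through the forwarding node.

def forward_packet(vswitch, forwarding_node, packet):
    entry = vswitch.lookup(packet.dst_ip)                 # placeholder calls throughout
    if entry is None:
        # Step 303: only the default entry matches on first access.
        # Step 304: the forwarding node proxies the traffic and returns the matching
        # entry, which is what adds the first-packet delay.
        forwarding_node.proxy_forward(packet)
        vswitch.install(forwarding_node.entry_for(packet.dst_ip))
    else:
        # Step 305: subsequent packets are forwarded directly by the virtual switch.
        vswitch.send_direct(packet, entry)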
The present application provides a communication method, specifically a communication method that issues to a computing node the configuration table entries of the instances with which the instances deployed on that node need to exchange traffic. First, the controller clusters the multiple instances contained in at least two computing nodes to obtain at least two class clusters, so that the at least one instance contained in each class cluster belongs to the same or similar application. Second, the controller determines a configuration-entry forwarding relationship according to the traffic interaction information between the at least two class clusters, the forwarding relationship being used to indicate that the configuration table entries of other class clusters having traffic interaction with the class cluster of a first computing node are sent to the first computing node, so that the forwarding relationship is associated with the traffic interaction between instances. Then, the controller sends the configuration table entries to the at least two computing nodes according to the configuration-entry forwarding relationship.
Based on this communication method, the traffic interaction between the class clusters to which the instances belong determines which class clusters each class cluster needs to access, and for each class cluster the configuration table entries are issued to the computing nodes hosting the class clusters it needs to communicate with, so that the instances contained in the class clusters can access each other. In this way, unlike full-volume issuing, which in a large-scale virtual private cloud must push a huge number of configuration table entries to a computing node, the method does not need to issue the configuration table entries of all instances when a computing node is cold-started. Nor does the method need to issue configuration table entries indirectly through a forwarding node, whereas on-demand issuing requires the entries to be learned from a forwarding node that records all configuration table entries. Therefore, compared with full-volume issuing and on-demand issuing, the communication method reduces the network delay caused by the long time needed for configuration table entries to take effect or by detouring inter-instance traffic through a forwarding node, and improves overall communication efficiency.
The following describes in detail the implementation of the embodiment of the present application with reference to the drawings.
Fig. 4 is a schematic diagram of a communication system according to the present application. Communication system 400 may be a cloud platform based on an SDN technology architecture, communication system 400 including a controller 410 and a cluster of computing nodes. Wherein the cluster of computing nodes is in communication with the controller 410 via a network.
The controller 410 may be an SDN controller, such as an OpenDaylight controller or an Open Network Operating System (ONOS) controller.
The controller 410 is configured to allocate resources and issue configuration for tenants according to tenant requirements, thereby building the cloud platform. For example, the controller 410 performs full-lifecycle management of the devices included in the computing node cluster in the network, including device onboarding, configuration issuing, operation and maintenance monitoring, and network optimization, to achieve unified management and unified operation and maintenance. The tenant requirement may be input to the controller 410 by the user performing a corresponding operation through the user interface.
As a possible implementation manner, the controller 410 clusters a plurality of instances included in at least two computing nodes to obtain at least two class clusters, then determines a configuration table entry forwarding relationship according to flow interaction information between the at least two class clusters, where the configuration table entry forwarding relationship is used to instruct to issue, to the first computing node, configuration table entries of other class clusters having flow interaction with the class cluster of the first computing node, and then sends the configuration table entries to the at least two computing nodes according to the configuration table entry forwarding relationship.
The computing node cluster includes one or more computing nodes (four computing nodes, namely computing node 420, computing node 430, computing node 440, and computing node 450 are shown in fig. 4).
Each computing node contains one or more instances for implementing the functionality required by the cloud platform tenant. The instance may be an executable unit of an application such as a container.
As one possible implementation, each computing node includes one or more virtual machines, and virtual switches for implementing instance communications between the computing nodes, the virtual switches of each computing node being interconnected (connection lines not shown in fig. 4) and respectively connected to the controller 410.
Optionally, each virtual machine runs one or more containers to implement the functionality of the instance. Each container communicates with the virtual switch through a network card (represented by circles in fig. 4).
The computing nodes in the computing node cluster are configured to receive the configuration table entry issued by the controller 410, and deploy the configuration table entry to the virtual switch.
It should be noted that fig. 4 is only a schematic diagram, and should not be construed as limiting the present application, and other devices may be included in the communication system 400, which are not shown in fig. 4.
The computing nodes in the communication system 400 provided by the present application may be nodes in a cloud platform. The communication system 400 implements the functionality of the computing nodes based on infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), and provides services (e.g., computing services) to tenants through the computing nodes. A computing node may be a service node of the cloud that is virtualized using resources of the communication system 400 (e.g., computing resources and storage resources).
The layered structure of communication system 400 as a cloud platform is described next in connection with fig. 5.
As shown in fig. 5, iaaS platform 510 is configured to perform a virtualization of all infrastructure resources in communication system 400 to provide virtual resources (e.g., computing resources, network resources, and storage resources) to users in a software-defined manner.
PaaS platform 520 is used to implement the runtime environment and application support functions of communication system 400 so that users can apply for computing units within the quota to run their services instead of virtual resources. Alternatively, the computing unit may be an instance, such as a container, and the communication system 400 deploys and runs the user's code by scheduling the container. It should be noted that the number of containers in PaaS platform 520 may be one or more, only one container being taken as an example in fig. 5.
As one possible implementation, communication system 400 may inject one or more components into a container to enable deployment and execution of code.
The SaaS application 530 is configured to provide a service to a user by composing an application program deployed by the user in an application program interface response manner based on the IaaS platform 510 and the PaaS platform 520, and the application program and the container of the SaaS application 530 may communicate through a web server (web server).
It should be noted that fig. 5 is only a schematic diagram, and should not be construed as limiting the present application, and other modules may be further included in the cloud platform hierarchy of the communication system 400, which is not shown in fig. 5.
The communication method provided by the application is specifically described below with reference to the accompanying drawings.
The steps of the communication method provided by the present application are performed by the controller 410 and the computing nodes in the communication system 400, and steps 610-630 of the communication method provided by the present application are described next in connection with fig. 6.
In step 610, the controller 410 clusters a plurality of instances included in at least two computing nodes to obtain at least two class clusters.
In some possible embodiments, the controller 410 clusters the plurality of instances according to a network configuration of the plurality of instances contained by the at least two computing nodes to obtain at least two class clusters.
Wherein a class cluster may be regarded as an application, i.e. a plurality of instances belonging to one class cluster are a set of instances comprised by one application, the result of the clustering is to divide the plurality of instances comprised by at least two computing nodes into different applications.
For example, as shown in FIG. 7, computing node 420 contains instances 2 and 3, computing node 430 contains instances 1,4, and 7, computing node 440 contains instances 8 and 9, and computing node 450 contains instances 5 and 6. Application 1 consists of instance 2 and instance 3 of compute node 420, application 2 consists of instance 1, instance 4 and instance 7 of compute node 430 and instance 8 of compute node 440, and application 3 consists of instance 5, instance 6 of compute node 450 and instance 9 of compute node 440.
Optionally, the controller 410 clusters the plurality of instances according to a security group, a subnet, a resiliency telescoping group, or a load balancing cluster to which the plurality of instances belong.
For example, instances belonging to one security group, subnet, elastic telescoping group, or load balancing cluster are typically the same or similar applications. The controller 410 divides the instances belonging to the same security group, subnet, elastic expansion group or load balancing cluster into the same class cluster, and the specific steps are described with reference to fig. 10, and are not repeated here.
The load balancing clusters may be application load balancing (application load balancer, ALB) clusters, legacy load balancing (classic load balancer, CLB) clusters, network load balancing (network load balancer, NLB) clusters, and the like, among others.
Optionally, the controller 410 clusters the plurality of instances according to network prefixes or application identifications carried by the plurality of instances.
For example, in the context of access control based on tags (tags), different tags are typically used to distinguish between different access rights, and instances having the same access rights are typically the same or similar applications. The controller 410 divides instances of multiple instances that carry the same access control tag into the same class cluster.
As another example, when classless inter-domain routing (CIDR) is used for address assignment and categorization with Internet Protocol version 6 (IPv6), instances belonging to the same CIDR address block typically belong to the same or similar applications. The controller 410 groups the instances among the multiple instances that carry the network prefix of the same classless inter-domain route into the same class cluster.
For another example, in a scenario where an application identifier is recorded in an attribute of a network card, the application identifier in the network card is used to represent an application attribute of an application to which an instance belongs, and the instance with the same application identifier is usually the same or a similar application. The controller 410 divides instances of multiple instances that carry application identifications representing the same application attribute into the same class cluster.
Similarly, in a scenario where instances are hosted using a container orchestration engine such as Kubernetes, Kubernetes uses an application identifier to indicate which pods make up an application, and instances with the same application identifier are usually the same or similar applications. The controller 410 groups the instances carrying the application identifier of the same Kubernetes application into the same class cluster; the specific steps are described with reference to Fig. 9 and are not repeated here.
The application is not limited to the particular manner in which the controller 410 clusters the instances, which is just an example of the possible embodiments listed herein, and in other possible embodiments, additional clustering schemes may be employed. For example, the controller 410 may also divide the traffic-like instances into a cluster by collecting traffic information and machine learning, and the clustering is not exhaustive.
Step 610 determines, through the clustering operation, which instances each application contains, and thus the set of network card information (such as Internet Protocol (IP) address, media access control (MAC) address, and virtual private cloud) corresponding to the instances of each application. In step 620, the controller 410 can then determine which network cards exchange traffic, i.e., the traffic interaction information between the multiple instances, by analyzing the traffic information between instances, and from it determine which pairs of applications have traffic interaction, i.e., the traffic interaction information between the at least two class clusters, so as to determine the configuration-entry forwarding relationship.
Step 620, the controller 410 determines a configuration table forwarding relationship according to the traffic interaction information between at least two class clusters.
In some possible embodiments, the controller 410 determines traffic interaction information between at least two clusters according to the traffic interaction information between the plurality of instances, and determines a configuration entry forwarding relationship according to the traffic interaction information between at least two clusters.
The configuration-entry forwarding relationship is used to indicate that the configuration table entries of the instances contained in the corresponding class clusters are issued to a computing node, where the corresponding class clusters are the class clusters that have traffic interaction with the class clusters to which the instances contained in that computing node belong.
Taking the clustering result shown in Fig. 7 as an example, and referring to Fig. 8, the controller 410 determines that instance 2 and instance 3 in application 1 exchange traffic with instance 1 in application 2, and that instance 4 in application 2 exchanges traffic with instance 5 and instance 6 in application 3 (indicated by double-arrow connection lines). The controller therefore determines that traffic interaction exists between application 1 and application 2, and between application 2 and application 3.
As a possible implementation manner, the controller 410 first obtains flow interaction information between multiple instances, and determines that each two instances where flow interaction exists respectively belong to a cluster where flow interaction exists, so as to obtain flow interaction information between at least two clusters.
Alternatively, the controller 410 may determine traffic interaction information between multiple instances through a flow table (e.g., br-int flow table, br-tun flow table, etc.) of the virtual switch of each computing node.
Optionally, the controller 410 may acquire information (such as triplets, quintuples, etc.) of the service traffic through connection tracking entries of each computing node, so as to determine traffic information between network cards of multiple instances, that is, traffic interaction information between multiple instances.
The present application is not limited to the manner in which the controller 410 collects the traffic interaction information between multiple instances, the manner in which the traffic interaction information of an instance is determined based on the flow table and the connection tracking table entry is merely an example of the present application, and the controller 410 may determine the traffic interaction information of an instance according to the traffic related data such as the flow log.
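A hedged sketch of deriving instance-level traffic pairs from connection-tracking style records follows; the record format (5-tuples as plain Python tuples) and the IP-to-instance map are assumptions made for illustration.

def instance_pairs_from_conntrack(records, ip_to_instance):
    """Map observed 5-tuples to pairs of instances that exchanged traffic."""
    pairs = set()
    for src_ip, dst_ip, _sport, _dport, _proto in records:
        src = ip_to_instance.get(src_ip)
        dst = ip_to_instance.get(dst_ip)
        if src and dst and src != dst:
            pairs.add((src, dst))
    return pairs

records = [("10.0.0.2", "10.0.0.1", 51514, 80, "tcp")]
ip_map = {"10.0.0.1": "inst1", "10.0.0.2": "inst2"}
print(instance_pairs_from_conntrack(records, ip_map))   # {('inst2', 'inst1')}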
As a possible implementation manner, the controller 410 determines a configuration table entry forwarding relationship according to traffic interaction information between at least two class clusters, so as to issue, to at least two computing nodes in the computing node cluster, a configuration table entry of an instance that each computing node needs to access itself according to the configuration table entry forwarding relationship.
Optionally, the configuration table entry forwarding relationship is used to indicate to issue to the first computing node configuration table entries of other class clusters having traffic interactions with the class cluster of the first computing node. Wherein the first computing node is one of at least two computing nodes in a cluster of computing nodes (e.g., computing node 420, computing node 430, computing node 440, or computing node 450).
Optionally, configuring the table entry forwarding relationship includes: and issuing a configuration table item of an instance contained in a class cluster corresponding to the first computing node, wherein the corresponding class cluster comprises class clusters with traffic interaction with the class cluster to which the instance contained in the first computing node belongs.
Optionally, the corresponding class cluster further includes a class cluster to which the instance included in the first computing node belongs.
Continuing taking the traffic interaction between applications in fig. 8 as an example, according to the traffic interaction information of application 1, application 2, and application 3, and the applications and the computing nodes to which each of examples 1-9 belongs, the partitioning result of the configuration table entry forwarding relationship may be as follows:
Illustratively, the computing node 420 includes an instance 2 and an instance 3, where the application 1 to which the instance 2 and the instance 3 belong has traffic interaction with the application 2, and the class cluster corresponding to the computing node 420 is the application 2. The configuration list item forwarding relation comprises the following steps: the controller 410 issues configuration entries for the instances contained in application 2, i.e., configuration entries for instance 1, instance 4, instance 7, and instance 8, to the compute node 420.
Illustratively, computing node 430 contains instance 1, instance 4, and instance 7; application 2, to which these instances belong, has traffic interaction with application 1 and application 3, so the class clusters corresponding to computing node 430 are application 1 and application 3. The configuration-entry forwarding relationship includes: the controller 410 issues to computing node 430 the configuration entries of the instances contained in application 1 and application 3, i.e., the configuration entries of instance 2, instance 3, instance 5, instance 6, and instance 9.
Illustratively, the computing node 440 includes an instance 8 and an instance 9, where the application 2 to which the instance 8 belongs has traffic interaction with the application 1 and the application 3, and the application 3 to which the instance 9 belongs has traffic interaction with the application 2, and the class clusters corresponding to the computing node 440 are the application 1, the application 2 and the application 3. The configuration list item forwarding relation comprises the following steps: the controller 410 issues configuration entries for the instances contained by application 1, application 2, and application 3, i.e., configuration entries for instance 1-instance 9, to the compute node 440.
Illustratively, the computing node 450 includes an instance 5 and an instance 6, where the application 3 to which the instance 5 and the instance 6 belong has traffic interaction with the application 2, and the class cluster corresponding to the computing node 450 is the application 2. The configuration list item forwarding relation comprises the following steps: the controller 410 issues configuration entries for the instances contained in application 2, i.e., configuration entries for instance 1, instance 4, instance 7, and instance 8, to the compute node 450.
Illustratively, configuring the table entry forwarding relationship may further include: the controller 410 issues to the first computing node a configuration entry for an instance contained by the first computing node. For example, controller 410 issues configuration entries for instance 2 and instance 3 to compute node 420, and controller 410 issues configuration entries for instance 1, instance 4, instance 7, and instance 8 to compute node 430.
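The per-node result above can be reproduced with a short sketch; the cluster membership and traffic graph are taken from the example of Figs. 7 and 8, while the identifiers and helper function are purely illustrative.

node_instances = {
    "node420": ["inst2", "inst3"],
    "node430": ["inst1", "inst4", "inst7"],
    "node440": ["inst8", "inst9"],
    "node450": ["inst5", "inst6"],
}
instance_app = {
    "inst2": "app1", "inst3": "app1",
    "inst1": "app2", "inst4": "app2", "inst7": "app2", "inst8": "app2",
    "inst5": "app3", "inst6": "app3", "inst9": "app3",
}
app_interactions = {"app1": {"app2"}, "app2": {"app1", "app3"}, "app3": {"app2"}}

def clusters_to_issue(node):
    """Class clusters whose instances' entries are issued to `node` (peers of its own clusters)."""
    own = {instance_app[i] for i in node_instances[node]}
    return set().union(*(app_interactions[a] for a in own))

for node in node_instances:
    print(node, sorted(clusters_to_issue(node)))
# node420 ['app2']
# node430 ['app1', 'app3']
# node440 ['app1', 'app2', 'app3']
# node450 ['app2']

Under the optional design described above, each node's own class clusters (and hence the entries of its own instances) would be added to this set as well.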
In step 620, the controller 410 determines the configuration table to be issued according to the real flow between the instances, where the configuration table of each instance is determined by the controller 410 according to the configuration information of the network card corresponding to each instance stored in the controller.
Step 630, the controller 410 sends configuration entries to at least two computing nodes according to the configuration entry forwarding relationship.
In some possible embodiments, the controller 410 issues configuration entries of instances included in class clusters corresponding to at least two computing nodes in the computing node cluster to the at least two computing nodes according to a configuration entry forwarding relationship.
Optionally, the controller 410 issues the configuration entry to the computing node, which means that the controller 410 issues the configuration entry to a virtual switch of the computing node, so that the computing node can communicate with an instance corresponding to the configuration entry through the virtual switch based on the configuration entry.
Based on steps 610-630 of the communication method described above, the controller 410 determines the configuration entries of the instances that need to be issued to each computing node based on the traffic interaction information between different class clusters. In this way, the controller 410 only needs to issue to each computing node the configuration entries of the instances that the computing node needs to communicate with. Compared with full-volume issuing, this reduces the total number of configuration entries issued, and there is no need to wait for all configuration entries of the cloud platform to take effect in the virtual switch of a computing node before traffic can flow between instances, which reduces the network delay of communication between cloud platform instances. Meanwhile, the configuration entries sent by the controller 410 to a computing node include the configuration entries of all instances contained in the other class clusters with which the class clusters of that computing node's instances need to communicate, so deployment of the configuration entries is completed before those instances interact with the instances of the other class clusters. Compared with on-demand issuing, access traffic between instances does not need to be forwarded by a forwarding node, which avoids the first-packet delay, reduces the network delay of communication between cloud platform instances, and improves the overall communication efficiency between instances.
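Step 630 then amounts to pushing each node's selected entries to its virtual switch; the sketch below uses placeholder objects and is not a concrete controller API.

def issue_entries(controller, forwarding_relation, cluster_entries):
    """forwarding_relation: node -> set of class clusters whose entries that node needs."""
    for node, clusters in forwarding_relation.items():
        entries = [e for c in clusters for e in cluster_entries[c]]
        controller.send_to_virtual_switch(node, entries)   # placeholder call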
The communication method of the present application is described above as a whole with reference to figs. 6-8. The specific clustering step and the specific configuration entry issuing manner of the communication method in a cloud native application scenario are described below with reference to fig. 9.
As shown in fig. 9, the cloud native application scenario takes a cluster-hosting container product as an example: the cloud vendor provides highly scalable, high-performance enterprise-level Kubernetes clusters that support running containers (also referred to as instances) and make it easy to deploy, manage and scale containerized applications.
The communication system of the cloud native application scenario includes a VPC controller 910, a Kubernetes controller 920, and a computing node cluster including one or more computing nodes (computing nodes 930, 940, 950, and 960 are shown in fig. 9). The VPC controller 910 is connected to a Kubernetes controller 920, and the Kubernetes controller 920 is connected to a computing node cluster comprising one or more computing nodes (represented by non-arrowed connection lines in fig. 9).
The Kubernetes controller 920 is configured to provision instances through an API provided by the cloud platform, create containers for applications in the instances, call the VPC controller 910 to create network resources, and mount a network card or an auxiliary network card in each container.
The application is implemented by containers (also referred to as pods) as shown in fig. 9, and the containers are deployed in the virtual machines of the computing nodes. For example, as shown in fig. 9, computing node 930 includes virtual machine 1 and virtual machine 2, computing node 940 includes virtual machine 3 and virtual machine 4, computing node 950 includes virtual machine 5, and computing node 960 includes virtual machine 6. Virtual machine 1 is deployed with container 1 and container 2, virtual machine 2 is deployed with container 3, virtual machine 3 is deployed with container 4 and container 5, virtual machine 4 is deployed with container 6, virtual machine 5 is deployed with container 7 and container 8, and virtual machine 6 is deployed with container 9 and container 10.
An application consists of a set of containers of the same service. In fig. 9, container 1, container 6 and container 8 belong to application 1, container 2 and container 4 belong to application 2, container 3, container 5 and container 7 belong to application 3, container 9 belongs to application 4, and container 10 belongs to application 5.
The clustering of containers is illustrated below.
As a first possible implementation, the Kubernetes controller 920 may populate an application identifier in the network card attributes when creating a network card or an auxiliary network card. For example, if application 1, which consists of container 1, container 6 and container 8, is regarded as a blue application, the blue application identifier application_id is added to the binding profile attribute of the corresponding network cards. The VPC controller 910 then determines that a container belongs to application 1 by querying the application identifier in the attributes of the network card corresponding to that container.
As a second possible implementation, the Kubernetes controller 920 stores mapping information between applications and network cards or auxiliary network cards, for example in a remote dictionary service (Remote Dictionary Server, Redis) or an etcd store. The VPC controller 910 determines the application to which a container belongs by querying the mapping information of the network card corresponding to that container.
As a third possible implementation, the Kubernetes controller 920 manages the containers of each computing node through a Kubernetes agent on that node, and the agent configures the application identifier on the network card or auxiliary network card connected to the virtual switch. The VPC controller 910 determines the application to which a container belongs by querying the application identifier on the network card, connected to the virtual switch, that corresponds to that container.
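The three implementations above all answer the same question: which application does the network card of a given container belong to? The sketch below illustrates the first two lookups. The attribute name binding_profile and the key application_id follow the example in the text, while the shape of the port dictionary and the key layout of the mapping store are assumptions.

```python
def application_of(port, mapping_store=None):
    """Best-effort lookup of the application a container's network card belongs to.

    port:          dict describing the network card (for example a Neutron-style port),
                   possibly carrying an application identifier in its binding profile.
    mapping_store: optional key-value client (for example a Redis client) holding the
                   "port id -> application" mappings written by the Kubernetes controller.
    """
    # First implementation: the application identifier was filled into the network card attribute.
    profile = port.get("binding_profile") or {}
    if "application_id" in profile:
        return profile["application_id"]

    # Second implementation: the mapping is stored externally (Redis, etcd) by the controller.
    if mapping_store is not None:
        value = mapping_store.get(f"port-app:{port['id']}")
        if value is not None:
            return value.decode() if isinstance(value, bytes) else value

    # Third implementation: a per-node Kubernetes agent would be queried here instead.
    return None
```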
The following describes the configuration entry issuing method specifically.
As can be seen from the communication relationships between the containers shown in fig. 9 (traffic between containers is shown by the double-headed dashed arrows), when elastic scaling of application 1, to which container 1, container 6 and container 8 belong, generates container 8 on computing node 950, and application 1 has traffic interaction with application 3, to which container 3, container 5 and container 7 belong, the VPC controller 910 needs to issue the configuration entries of container 3, container 5 and container 7 to computing node 950, and issue the configuration entry of container 8 to computing node 930 and computing node 940.
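The incremental issuance triggered by such a scale-out can be sketched as follows. The container-to-node and container-to-application mappings below follow fig. 9; the helper name and the input formats are assumptions for the example.

```python
def issuance_on_scale_out(new_container, new_node, cluster_of, node_of, peers_of):
    """Compute the incremental issuance when elastic scaling creates a new container.

    Returns the set of containers whose entries the new container's node needs, and the
    set of other nodes that need the new container's entry.
    """
    peer_apps = peers_of.get(cluster_of[new_container], set())
    peer_containers = {c for c, app in cluster_of.items() if app in peer_apps}

    entries_for_new_node = peer_containers
    nodes_needing_new_entry = {node_of[c] for c in peer_containers} - {new_node}
    return entries_for_new_node, nodes_needing_new_entry


# Fig. 9: application 1 scales out with container 8 on computing node 950, and
# application 1 has traffic interaction with application 3.
cluster_of = {"c1": "app1", "c2": "app2", "c3": "app3", "c4": "app2", "c5": "app3",
              "c6": "app1", "c7": "app3", "c8": "app1", "c9": "app4", "c10": "app5"}
node_of = {"c1": "node930", "c2": "node930", "c3": "node930", "c4": "node940",
           "c5": "node940", "c6": "node940", "c7": "node950", "c8": "node950",
           "c9": "node960", "c10": "node960"}
peers_of = {"app1": {"app3"}, "app3": {"app1"}}

entries, nodes = issuance_on_scale_out("c8", "node950", cluster_of, node_of, peers_of)
# entries == {"c3", "c5", "c7"} and nodes == {"node930", "node940"}, matching the text.
```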
After the communication method in the cloud native application scenario is described above with reference to fig. 9, a specific clustering step and a specific configuration table entry issuing manner of the communication method in the traditional application scenario are described next with reference to fig. 10.
As shown in fig. 10, a client purchases or rents VPCs, virtual machines, load balancing (LB) and the like through the cloud vendor's web page, and deploys its own applications in the virtual machines.
The communication system of the conventional application scenario comprises a VPC controller 1010 and a computing node cluster comprising one or more computing nodes (computing nodes 1020, 1030, 1040 and 1050 are shown in fig. 10). Wherein the VPC controller 1010 is respectively connected to each computing node in the cluster of computing nodes (indicated by the non-arrowed connection lines in fig. 10).
For example, as shown in fig. 10, computing node 1020 includes virtual machine 1, virtual machine 2, and virtual machine 3, computing node 1030 includes virtual machine 4, virtual machine 5, and virtual machine 6, computing node 1040 includes virtual machine 7 and virtual machine 8, and computing node 1050 includes virtual machine 9.
The clustering of virtual machines is illustrated below.
After purchasing or renting virtual machines, a client typically places the same application or similar applications into the same security group, that is, the security access policies of the same application are the same; alternatively, the IP addresses of the same application are placed into one address group, and security group rules are configured between address groups. For example, virtual machine 1, virtual machine 6 and virtual machine 8 have the same security group configuration 1; virtual machine 2 and virtual machine 4 have the same security group configuration 2; virtual machine 3, virtual machine 5 and virtual machine 7 have the same security group configuration 3; and virtual machine 9 has security group configuration 4.
The VPC controller 1010 performs clustering according to the security group configuration. For example, the VPC controller 1010 determines that virtual machine 1, virtual machine 6 and virtual machine 8 share security group configuration 1, virtual machine 2 and virtual machine 4 share security group configuration 2, virtual machine 3, virtual machine 5 and virtual machine 7 share security group configuration 3, and virtual machine 9 has security group configuration 4. It then clusters, according to the different security group configurations, virtual machine 1, virtual machine 6 and virtual machine 8 as application 1, virtual machine 2 and virtual machine 4 as application 2, virtual machine 3, virtual machine 5 and virtual machine 7 as application 3, and virtual machine 9 as application 4.
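A minimal sketch of this clustering step, assuming each virtual machine record simply carries the identifier of the security group configuration applied to it:

```python
from collections import defaultdict

def cluster_by_security_group(vm_security_group):
    """Group virtual machines sharing a security group configuration into one class cluster."""
    clusters = defaultdict(set)
    for vm, sg in vm_security_group.items():
        clusters[sg].add(vm)
    return clusters


# Fig. 10: four security group configurations yield four class clusters (applications 1-4).
vm_sg = {"vm1": "sg1", "vm6": "sg1", "vm8": "sg1",
         "vm2": "sg2", "vm4": "sg2",
         "vm3": "sg3", "vm5": "sg3", "vm7": "sg3",
         "vm9": "sg4"}
clusters = cluster_by_security_group(vm_sg)
# clusters["sg1"] == {"vm1", "vm6", "vm8"}, clusters["sg4"] == {"vm9"}, and so on.
```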
The following describes the configuration entry issuing method specifically.
When application 1 needs to add virtual machine A, it can be seen from the communication relationships between the virtual machines shown in fig. 10 (indicated by the double-arrow dashed connection lines) that there is traffic interaction between application 1 and application 3, between application 2 and application 4, and between application 2 and application 3. Accordingly, the VPC controller 1010 needs to issue the configuration entries of virtual machine 4, virtual machine 6, virtual machine 7 and virtual machine 8 to computing node 1020, the configuration entries of virtual machine 2, virtual machine 3, virtual machine 8 and virtual machine 9 to computing node 1030, the configuration entries of virtual machine 2, virtual machine 3 and virtual machine 6 to computing node 1040, and the configuration entry of virtual machine 4 to computing node 1050.
The communication method provided by the present application is described in detail above in connection with fig. 6-10. A communication apparatus for performing the communication method provided by the present application is described in detail below with reference to fig. 11.
Fig. 11 is a schematic structural diagram of a communication device according to the present application. The communication device can be used for realizing the functions of the corresponding communication equipment in the method embodiment, so that the communication device also has the beneficial effects of the method embodiment. In this embodiment, the communication device may be the controller 410 shown in fig. 4, or may be a module (such as a chip) applied to a server.
As shown in fig. 11, the communication device 1100 includes a clustering module 1110, a matching module 1120, and a transceiving module 1130.
In some possible embodiments, the communication device 1100 may be configured to implement the functions of the controller 410 in the method embodiment shown in fig. 6, where each module included in the communication device 1100 is specifically configured to implement the functions described below.
The clustering module 1110 is configured to cluster a plurality of instances included in at least two computing nodes to obtain at least two class clusters, where each class cluster in the at least two class clusters includes at least one instance. For example, the clustering module 1110 is configured to perform step 610 as shown in fig. 6.
The matching module 1120 is configured to determine a configuration table entry forwarding relationship according to traffic interaction information between at least two class clusters, where the configuration table entry forwarding relationship is configured to instruct to issue, to a first computing node, a configuration table entry of another class cluster having traffic interaction with the class cluster of the first computing node, and the first computing node is one of the at least two computing nodes. For example, the matching module 1120 is configured to perform step 620 as shown in fig. 6.
The transceiver module 1130 is configured to send the configuration table entry to at least two computing nodes according to the configuration table entry forwarding relationship. For example, transceiver module 1130 is configured to perform step 630 as shown in fig. 6.
As one possible implementation, the configuration entry forwarding relationship includes: issuing, to the first computing node, the configuration entries of the instances contained in the class clusters corresponding to the first computing node, where the corresponding class clusters include the class clusters that have traffic interaction with the class clusters to which the instances contained in the first computing node belong.
As a possible implementation manner, the corresponding class cluster further includes a class cluster to which the instance included in the first computing node belongs.
As a possible implementation manner, the communication device further includes a traffic processing module, configured to determine traffic interaction information between at least two clusters according to the traffic interaction information between the multiple instances.
Optionally, the traffic processing module is specifically configured to: acquire the traffic interaction information among the multiple instances; and determine that the class clusters to which each two instances having traffic interaction respectively belong also have traffic interaction, so as to obtain the traffic interaction information between the at least two class clusters.
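A minimal sketch of this behaviour of the traffic processing module, assuming the instance-level traffic interaction is available as (source instance, destination instance) pairs:

```python
def cluster_traffic_pairs(instance_flows, instance_cluster):
    """Lift instance-level traffic interaction to cluster-level traffic interaction."""
    pairs = set()
    for src, dst in instance_flows:
        a, b = instance_cluster[src], instance_cluster[dst]
        if a != b:  # traffic inside one class cluster does not create a cluster pair
            pairs.add(frozenset((a, b)))
    return pairs
```

The resulting pairs are exactly the cluster-level traffic interaction used by the issuance sketch earlier in this section.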
As one possible implementation, the clustering module 1110 is specifically configured to: cluster the multiple instances according to the network configurations of the multiple instances contained in the at least two computing nodes, to obtain the at least two class clusters.
Optionally, the clustering module 1110 is specifically configured to: cluster the instances, among the multiple instances, that belong to the same security group, subnet, elastic scaling group or load balancing cluster, to obtain the at least two class clusters.
Optionally, the clustering module 1110 is specifically configured to: cluster the instances, among the multiple instances, that carry the same network prefix or application identifier, to obtain the at least two class clusters, where the application identifier includes an identifier in the network card used to represent an application attribute and an identifier used by the container orchestration engine to divide the application to which a container belongs.
It should be appreciated that the communication device 1100 according to the embodiments of the present application may be implemented by a central processing unit (central processing unit, CPU), an ASIC, or a programmable logic device (programmable logic device, PLD), where the PLD may be a complex programmable logic device (complex programmable logic device, CPLD), an FPGA, a generic array logic (generic array logic, GAL), or any combination thereof. When the communication device 1100 implements the communication method shown in fig. 6 by software, the communication device 1100 and its respective modules may also be software modules.
It should be understood that the controller 410 and the like in the embodiments of the present application may correspond to the communication device 1100 in the embodiments of the present application, and may correspond to the respective body that performs the method according to the embodiments of the present application. The foregoing and other operations and/or functions of the modules in the communication device 1100 are respectively intended to implement the corresponding flows of the method in fig. 6 and, for brevity, are not repeated herein.
The present application also provides a communication device 1200 comprising a memory 1201, a processor 1202, a communication interface 1203 and a bus 1204. Wherein the memory 1201, the processor 1202 and the communication interface 1203 are communicatively coupled to each other via a bus 1204. The communication device 1200 may be the controller 410 of fig. 6, etc.
Memory 1201 may be a read-only memory, a static storage device, a dynamic storage device, or a random access memory. The memory 1201 may store computer instructions and the data sets required to execute them. When the computer instructions stored in the memory 1201 are executed by the processor 1202, the processor 1202 and the communication interface 1203 are configured to perform any of the steps of the communication method shown in fig. 6.
The processor 1202 may employ a general-purpose central processing unit, an application-specific integrated circuit (ASIC), a graphics processor (graphics processing unit, GPU), or any combination thereof. The processor 1202 may include one or more chips, and may include an AI accelerator such as a neural network processor (neural processing unit, NPU). In addition, fig. 12 takes the case where each communication device 1200 includes one processor 1202 as an example. In a specific implementation, the number and types of processors 1202 in each communication device 1200 may be set according to service requirements, and one or more processors may be included in the same communication device 1200; when multiple processors are included in the same communication device 1200, the present application does not limit the types of the processors.
The communication interface 1203 uses a transceiver module, such as, but not limited to, a transceiver, to enable communication between the communication device 1200 and other devices or communication networks. For example, a management message, an isolation request, etc. may be received or sent through the communication interface 1203.
The bus 1204 may include a path for transferring information between various components of the communication device 1200 (e.g., the memory 1201, the processor 1202, the communication interface 1203).
Communication paths may be established between each of the communication devices 1200 described above through a communication network as shown in fig. 12. Any of the communication devices 1200 may be a computer in a distributed storage system (e.g., a server), or a computer in an edge data center, or a terminal communication device.
Each communication device 1200 may have disposed thereon the functions of the communication apparatus 1100, for example, performing any of the steps shown in fig. 6, or performing the functions of the modules in the communication apparatus 1100.
The method steps in this embodiment may be implemented by hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (random access memory, RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (erasable PROM, EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a terminal device. The processor and the storage medium may also reside as discrete components in a network device or terminal device.
In the above embodiments, the implementation may be, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device, or other programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium, for example a floppy disk, a hard disk, or a tape; an optical medium, for example a digital video disc (digital video disc, DVD); or a semiconductor medium, for example a solid state drive (solid state drive, SSD).

While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (12)

Translated from Chinese
1. A communication method, comprising: clustering multiple instances contained in at least two computing nodes to obtain at least two class clusters, each of the at least two class clusters comprising at least one instance; determining a configuration entry forwarding relationship according to traffic interaction information between the at least two class clusters, wherein the configuration entry forwarding relationship is used to indicate issuing, to a first computing node, configuration entries of other class clusters that have traffic interaction with the class cluster of the first computing node, and the first computing node is one of the at least two computing nodes; and sending configuration entries to the at least two computing nodes according to the configuration entry forwarding relationship.

2. The method according to claim 1, wherein the configuration entry forwarding relationship comprises: issuing, to the first computing node, configuration entries of instances contained in class clusters corresponding to the first computing node, the corresponding class clusters comprising class clusters that have traffic interaction with the class clusters to which the instances contained in the first computing node belong.

3. The method according to claim 2, wherein the corresponding class clusters further comprise the class clusters to which the instances contained in the first computing node belong.

4. The method according to any one of claims 1-3, further comprising: determining the traffic interaction information between the at least two class clusters according to traffic interaction information between the multiple instances.

5. The method according to claim 4, wherein determining the traffic interaction information between the at least two class clusters according to the traffic interaction information between the multiple instances comprises: acquiring the traffic interaction information between the multiple instances; and determining that the class clusters to which each two instances having traffic interaction respectively belong have traffic interaction, to obtain the traffic interaction information between the at least two class clusters.

6. The method according to any one of claims 1-5, wherein clustering the multiple instances contained in the at least two computing nodes to obtain the at least two class clusters comprises: clustering the multiple instances according to network configurations of the multiple instances contained in the at least two computing nodes, to obtain the at least two class clusters.

7. The method according to claim 6, wherein clustering the multiple instances according to the network configurations of the multiple instances contained in the at least two computing nodes to obtain the at least two class clusters comprises: clustering the multiple instances according to the security groups, subnets, elastic scaling groups, or load balancing clusters to which the multiple instances belong, to obtain the at least two class clusters.

8. The method according to claim 6, wherein clustering the multiple instances according to the network configurations of the multiple instances contained in the at least two computing nodes to obtain the at least two class clusters comprises: clustering the multiple instances according to network prefixes or application identifiers carried in the multiple instances, to obtain the at least two class clusters, wherein the application identifier comprises an identifier in a network card for representing an application attribute and an identifier used by a container orchestration engine to divide the application to which a container belongs.

9. A communication apparatus, wherein the communication apparatus is configured to perform the operation steps of the method according to any one of claims 1-8.

10. A communication device, comprising a memory and a processor, wherein the memory is configured to store a set of computer instructions, and when the processor executes the set of computer instructions, the operation steps of the method according to any one of claims 1-8 are performed.

11. A communication system, comprising a controller and at least two computing nodes, wherein the at least two computing nodes are deployed with multiple instances, the controller is configured to perform the operation steps of the method according to any one of claims 1-8, and the at least two computing nodes are configured to receive configuration entries sent by the controller.

12. A readable storage medium, comprising a computer program or instructions which, when run on a computer, cause the computer to perform the operation steps of the method according to any one of claims 1-8.
CN202310980555.6A | Priority date 2023-05-16 | Filing date 2023-08-04 | Communication method, device, equipment, system and readable storage medium | Pending | CN119011386A (en)

Priority Applications (1)

Application Number: PCT/CN2024/093508 (related publication: WO2024235272A1, en)

Applications Claiming Priority (2)

Application Number | Priority Date
CN202310551806 | 2023-05-16
CN2023105518069 | 2023-05-16

Publications (1)

Publication Number | Publication Date
CN119011386A | 2024-11-22

Family

ID=93469588

Family Applications (1)

CN202310980555.6A | Pending | published as CN119011386A (en) | Communication method, device, equipment, system and readable storage medium

Country Status (2)

Country | Link
CN (1) | CN119011386A (en)
WO (1) | WO2024235272A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10778534B2 (en)* | 2018-06-13 | 2020-09-15 | Juniper Networks, Inc. | Virtualization infrastructure underlay network performance measurement and monitoring
CN111163060B (en)* | 2019-12-11 | 2021-12-24 | 中盈优创资讯科技有限公司 | Application group-based forwarding method, device and system
CN113741924B (en)* | 2020-05-28 | 2023-02-24 | 中国移动通信集团浙江有限公司 | Application deployment method, system and server
CN111638961A (en)* | 2020-06-04 | 2020-09-08 | 中国工商银行股份有限公司 | Resource scheduling system and method, computer system, and storage medium
CN114356493B (en)* | 2021-11-26 | 2025-08-19 | 阿里巴巴创新公司 | Communication method, device and processor between virtual machine instances of cross-cloud server

Also Published As

Publication number | Publication date
WO2024235272A1 (en) | 2024-11-21

Similar Documents

Publication | Title
US12265811B2 | Self-moving operating system installation in cloud-based network
US11368385B1 | System and method for deploying, scaling and managing network endpoint groups in cloud computing environments
US9450783B2 | Abstracting cloud management
US8271653B2 | Methods and systems for cloud management using multiple cloud management schemes to allow communication between independently controlled clouds
US9692707B2 | Virtual resource object component
CN107181808B | A kind of private cloud system and operation method
US8316125B2 | Methods and systems for automated migration of cloud processes to external clouds
US9999030B2 | Resource provisioning method
Nurmi et al. | The eucalyptus open-source cloud-computing system
US9602335B2 | Independent network interfaces for virtual network environments
CN111683074A | A NFV-based secure network architecture and network security management method
WO2017045471A1 | Method and apparatus for acquiring service chain information in cloud computing system
US9830183B2 | Data center resource allocation system and data center resource allocation method
US11870647B1 | Mapping on-premise network nodes to cloud network nodes
CN112087311B | Virtual network function VNF deployment method and device
EP4471594A1 | Multiple connectivity modes for containerized workloads in a multi-tenant network
CN114461303A | A method and apparatus for accessing services within a cluster
WO2023066224A1 | Method and apparatus for deploying container service
CN119473334B | Implementation method and system for multi-dimensional management and automated deployment
US9417900B2 | Method and system for automatic assignment and preservation of network configuration for a virtual machine
Rabah et al. | A service oriented broker-based approach for dynamic resource discovery in virtual networks
US12363189B2 | Computing cluster load balancer
KR20190001891A | Apparatus for generating and providing cloud infra node for ICT service and method thereof
US12438842B2 | High-availability egress access with consistent source IP addresses for workloads
WO2025000179A1 | Creation of namespace-scoped virtual private clouds for containerized workloads in a multi-tenant network

Legal Events

Code | Title
PB01 | Publication
