The present application claims priority to the Chinese patent application filed on May 15, 2023, with application number 202310551806.9 and entitled "communication method, apparatus, device, system, and readable storage medium", the entire contents of which are incorporated herein by reference.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings of the embodiments of the present application. In the present application, "at least one" means one or more, and "a plurality" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" the following items or similar expressions means any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "such as" should not be construed as being preferred or advantageous over other embodiments or designs. Rather, the use of the words "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The communication method provided by the embodiments of the present application can be applied to cloud platform management scenarios in the field of cloud computing. The following is a brief description of techniques that may be involved in the present application.
(1) Software defined network
A software defined network (SDN) separates the control plane of the network from its data forwarding plane, so that the underlying hardware can be programmatically controlled through a software platform in a centralized controller, enabling flexible on-demand allocation of network resources. In an SDN network, network devices are only responsible for simple data forwarding and can use general-purpose hardware, while the operating system originally responsible for control is extracted into an independent network operating system responsible for adapting to different service characteristics; communication between the network operating system and the service characteristics, and between the network operating system and the hardware devices, can be realized through programming.
(2) Cloud platform
A cloud platform, also referred to as a cloud computing platform, refers to a platform that provides computing, networking, and storage capabilities based on hardware and software resources; that is, a platform provider combines the cloud (remote hardware resources) and computing (remote software resources) to form a platform that provides various services to users.
Services provided by the cloud platform may include software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS), and the like. In SaaS, a service provider uniformly deploys application software on the servers of the cloud platform, and users order application software services from the service provider through the Internet according to their requirements. PaaS provides the development environment as a service: the service provider offers the development environment, server platform, hardware resources, and other services to users, and users develop application programs in the development environment provided by the cloud platform and share them with other users through the cloud platform and the Internet. IaaS is a service in which the provider offers a cloud infrastructure consisting of multiple servers to users, i.e., memory, input/output devices, storage, and computing resources are integrated into a virtual resource pool to provide users with storage resources, virtualized servers, and the like.
In a cloud platform built on SDN technology, the control plane of the cloud platform is equivalent to a human brain and is responsible for the overall scheduling of the entire network. The cloud implements full-lifecycle management of the entire network, including device onboarding, configuration delivery, operation and maintenance monitoring, and network optimization, with unified management and unified operation and maintenance; intelligent analysis and diagnosis are performed according to the running state of the whole network, thereby shortening the time during which services are affected and improving service stability.
(3) Configuration entries
In the cloud platform, when a tenant creates a network, applies for resources, or changes the network, configuration needs to be issued to the data plane devices through the control plane. For example, as shown in fig. 1, when a tenant chooses to deploy a service on the cloud using the cloud platform, the tenant first operates in a user interface to create a virtual private cloud (VPC) through an application program interface (API), creates one or more subnets, and purchases or leases one or more virtual machines (VMs), containers, or other instances configured in the subnets. The instances purchased or leased by the user may be distributed across different computing nodes (such as the computing node 120 and the computing node 130 shown in fig. 1) managed by the controller 110 of the control plane. The multiple instances are configured in the same virtual private cloud through virtualization technology, and the configuration entries of all instances in the virtual private cloud are stored in the controller. The controller needs to issue the configuration entries of the network cards corresponding to all instances in the virtual private cloud to the virtual switches of the different computing nodes, so that the instances on different computing nodes can communicate with each other based on the configuration entries.
The configuration entries include a location table, access control lists (ACLs), security groups, and the like. After the configuration entries are issued to the data plane, the devices of the data plane can communicate with one or more instances to perform the functions intended by the tenant.
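The application does not fix a concrete schema for a configuration entry; as a minimal illustrative sketch, assuming an entry carries the location (IP/MAC/host node) and access control information named above, it could be modeled as follows (all field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ConfigEntry:
    """Hypothetical configuration entry for one instance's network card.

    Field names are illustrative; the application only specifies that
    entries cover location, ACL, and security group information.
    """
    instance_id: str          # instance owning the network card
    vpc_id: str               # virtual private cloud the instance belongs to
    ip: str                   # network card IP address (location table)
    mac: str                  # network card MAC address (location table)
    host_node: str            # computing node where the instance runs
    acl_rules: list = field(default_factory=list)       # access control lists
    security_groups: list = field(default_factory=list) # security group ids
```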
In a cloud native scenario, in order to meet the needs of a large number of clients, a large number of containers are typically deployed in each virtual private cloud, and as client business grows, the number of containers may reach tens of millions or even hundreds of millions. For virtual private clouds of such scale, the corresponding configuration entries to be issued are massive, the containers scale elastically as the business volume changes, and the speed at which the related configuration entries take effect directly affects the client's services.
At present, existing solutions for issuing configuration entries, such as full-volume issuing and on-demand issuing, suffer from long configuration effective times, increased first-packet latency, and the like, so the efficiency of issuing configuration entries is low.
The full-volume issuing of configuration entries is described below in conjunction with fig. 2.
As shown in fig. 2, the controller 210 maintains all configuration entries required by the cloud network data plane. When a user performs operations such as creation and configuration changes on the network of the virtual private cloud, the controller issues the configuration entries to the data plane (computing node 220, computing node 230, through computing node n are shown in fig. 2), that is, to all relevant virtual switches. The computing node 220 is provided with virtual switch 1 and runs instance 1. The computing node 230 is provided with virtual switch 2 and runs instance 2 and instance 3. The computing node n is provided with virtual switch n and runs instance n-1 and instance n.
The user-created instance 1 is the first instance of the corresponding virtual private cloud deployed at the computing node 220, so the controller 210 needs to issue the full set of configuration entries to the computing node 220, so that virtual switch 1 can forward traffic sent by instance 1 to a destination instance, such as instance 2. The controller 210 also needs to issue the configuration entry corresponding to instance 1 to virtual switch 2 in the computing node 230, so that virtual switch 2 can forward traffic sent by instance 2 to instance 1 on the computing node 220.
The full-volume issuing procedure of configuration entries in fig. 2 may include the following steps 201-204.
In step 201, a user creates instance 1 through the user interface, and the controller 210 creates a network card in the virtual private cloud and binds the network card to instance 1.
In step 202, the controller 210 computes the configuration entries that the current request requires to be issued to the virtual switches of the different computing nodes of the data plane.
In step 203, the controller 210 issues all configuration entries in the virtual private cloud to the virtual switch of the computing node 220, and issues the configuration entry of instance 1 to the virtual switches of the other relevant computing nodes.
In step 204, after the configuration entries in the virtual switches of the other relevant computing nodes take effect, traffic interaction between instance 1 and the other instances (e.g., instance 2, instance n, etc.) is realized.
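A minimal sketch of steps 202-204, assuming hypothetical node and virtual switch objects (the `install` helper and the object shapes are not part of the original scheme):

```python
def full_volume_issue(vpc_entries, new_entry, target_node, other_nodes):
    """Full-volume issuing sketch (steps 202-204): the cold-started node
    receives every entry of the VPC, other nodes only the new instance's entry.

    vpc_entries: list of all ConfigEntry objects in the virtual private cloud
    target_node / other_nodes: objects exposing a vswitch with install(entries)
    """
    target_node.vswitch.install(vpc_entries)   # step 203: full set, slow to take effect
    for node in other_nodes:
        node.vswitch.install([new_entry])      # step 203: single new entry
    # step 204: instance traffic flows only after all entries take effect;
    # for a large VPC this wait can reach tens of seconds or minutes
```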
In step 203, all configuration entries are issued to the virtual switch of the computing node 220. When the number of configuration entries is large, the time for them to take effect can reach tens of seconds or even minutes, so the entries corresponding to the instances that instance 1 needs to access take effect relatively late, and access from instance 1 to those instances succeeds only after a long delay, affecting the user experience.
In a cloud platform created based on SDN technology, the main reason the configuration entries take so long to become effective is that the higher-layer service orchestrating the virtual private cloud cannot distinguish a cold-start scenario, that is, whether configuration entries are being issued to a computing node for the first time. As a result, the configuration entries of the virtual private cloud are issued to the computing node in full, and the orchestration must explicitly wait for them to take effect, so the overall communication efficiency is low.
The on-demand manner of issuing configuration entries is described next in connection with fig. 3.
As shown in fig. 3, the controller 310 maintains all configuration entries required by the cloud network data plane (computing nodes 320 and 330 are shown in fig. 3), and when a user performs operations such as creating and configuring changes on the network of the virtual private cloud through the user interface, the controller 310 issues the configuration entries to the forwarding node 340, and the forwarding node 340 records all configuration entries. The functionality of forwarding node 340 to issue configuration entries as needed here may be implemented by a control plane or a data plane. Wherein the compute node 320 is provided with virtual switch 1 and runs instance 1. The compute node 330 is provided with virtual switch 2 and runs instance 2.
When instances access each other for the first time, the virtual switch matches the default entry and sends the traffic to the forwarding node 340; the forwarding node 340 forwards the traffic to the destination instance on the source's behalf and issues the corresponding configuration entry to the virtual switch. Subsequent access traffic between the instances is forwarded directly by the virtual switch of the computing node without passing through the forwarding node 340.
The on-demand issuing procedure of configuration entries in fig. 3 may include the following steps 301-305.
In step 301, a user creates instance 1 through the user interface, and the controller 310 creates a network card in the virtual private cloud and binds the network card to instance 1.
In step 302, the controller 310 sends the configuration entry of instance 1 to the forwarding node 340; all configuration entries are recorded in the forwarding node 340.
In step 303, when instance 1 accesses instance 2 for the first time, the virtual switch of the computing node 320 matches the default entry and sends the traffic to the forwarding node 340.
In step 304, the forwarding node 340 matches the configuration entry according to the traffic, sends the traffic to instance 2, and issues the configuration entry of instance 2 to the virtual switch of the computing node 320.
In step 305, when instance 1 accesses instance 2 again, the computing node 320 matches the configuration entry of instance 2 on its virtual switch and sends the traffic directly to instance 2 on the computing node 330.
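The default-entry detour in steps 303-305 might be sketched as follows, again with hypothetical object and method names:

```python
def on_demand_forward(vswitch, packet, forwarding_node):
    """On-demand issuing sketch (steps 303-305): an unknown destination
    matches only the default entry and detours via the forwarding node,
    which proxies the traffic and pushes back the missing entry."""
    entry = vswitch.lookup(packet.dst)
    if entry is None:                                      # step 303: default entry hit
        forwarding_node.proxy_forward(packet)              # step 304: proxy to destination
        vswitch.install([forwarding_node.entry_for(packet.dst)])
    else:
        vswitch.send_direct(packet, entry)                 # step 305: direct path
```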
In steps 303-304 above, the traffic from instance 1 accessing instance 2 does not travel directly from the computing node 320 to the computing node 330 but must detour through the forwarding node, which results in a larger first-packet delay and affects the overall communication efficiency.
The present application provides a communication method, specifically a communication method in which the configuration entries of the instances with which a computing node's own deployed instances need to exchange traffic are issued to that computing node. First, the controller clusters the plurality of instances contained in at least two computing nodes to obtain at least two class clusters, so that the at least one instance contained in each class cluster belongs to the same or a similar application. Second, the controller determines a configuration entry forwarding relationship according to the traffic interaction information between the at least two class clusters, where the configuration entry forwarding relationship is used to instruct sending, to a first computing node, the configuration entries of other class clusters that have traffic interaction with the class clusters of the first computing node, so that the configuration entry forwarding relationship is associated with the traffic interaction between the instances. Then, the controller sends the configuration entries to the at least two computing nodes according to the configuration entry forwarding relationship.
Based on this communication method, the class clusters that need to access each other are determined through the traffic interaction among the class clusters to which each instance belongs, and for each class cluster, the configuration entries are issued to the computing nodes of the class clusters that need to communicate with it, so that the instances contained in the class clusters can access each other. In this way, compared with full-volume issuing, which in a large-scale virtual private cloud scenario must deliver a huge number of configuration entries to a given computing node, this method does not need to issue the configuration entries of all instances when a computing node is cold-started. Moreover, the configuration entries do not need to be issued indirectly through a forwarding node, whereas on-demand issuing requires them to be learned from a forwarding node that records all configuration entries. Therefore, compared with the full-volume and on-demand issuing modes of configuration entries, this communication method reduces the network communication delay caused by the time spent forwarding traffic or issuing configuration entries between instances, and improves the overall communication efficiency.
The following describes in detail the implementation of the embodiment of the present application with reference to the drawings.
Fig. 4 is a schematic diagram of a communication system according to the present application. The communication system 400 may be a cloud platform based on an SDN technology architecture; the communication system 400 includes a controller 410 and a computing node cluster, where the computing node cluster communicates with the controller 410 via a network.
The controller 410 may be an SDN controller such as an OpenDaylight controller, an open network operating system (ONOS) controller, or the like.
The controller 410 is configured to allocate resources and issue configuration for tenants according to tenant requirements, thereby building the cloud platform. For example, the controller 410 is configured to perform full-lifecycle management of the devices included in the computing node cluster in the network, including device onboarding, configuration delivery, operation and maintenance monitoring, and network optimization, so as to realize unified management and unified operation and maintenance. The tenant requirement may be input to the controller 410 by the user performing a corresponding operation through the user interface.
As a possible implementation, the controller 410 clusters the plurality of instances included in at least two computing nodes to obtain at least two class clusters, then determines a configuration entry forwarding relationship according to the traffic interaction information between the at least two class clusters, where the configuration entry forwarding relationship is used to instruct issuing, to a first computing node, the configuration entries of other class clusters that have traffic interaction with the class clusters of the first computing node, and then sends the configuration entries to the at least two computing nodes according to the configuration entry forwarding relationship.
The computing node cluster includes one or more computing nodes (four computing nodes, namely computing node 420, computing node 430, computing node 440, and computing node 450 are shown in fig. 4).
Each computing node contains one or more instances for implementing the functionality required by the cloud platform tenant. The instance may be an executable unit of an application such as a container.
As one possible implementation, each computing node includes one or more virtual machines and a virtual switch for implementing instance communication between computing nodes; the virtual switches of the computing nodes are interconnected (connection lines not shown in fig. 4) and are each connected to the controller 410.
Optionally, each virtual machine runs one or more containers to implement the functionality of the instance. Each container communicates with the virtual switch through a network card (represented by circles in fig. 4).
The computing nodes in the computing node cluster are configured to receive the configuration table entry issued by the controller 410, and deploy the configuration table entry to the virtual switch.
It should be noted that fig. 4 is only a schematic diagram, and should not be construed as limiting the present application, and other devices may be included in the communication system 400, which are not shown in fig. 4.
The computing nodes in the communication system 400 provided by the present application may be nodes in a cloud platform. The communication system 400 implements the functions of the computing nodes based on infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), and provides services (e.g., computing services) to tenants through the computing nodes. A computing node may be a cloud service node virtualized using the resources (e.g., computing resources and storage resources) of the communication system 400.
The layered structure of communication system 400 as a cloud platform is described next in connection with fig. 5.
As shown in fig. 5, the IaaS platform 510 is configured to virtualize all infrastructure resources in the communication system 400 to provide virtual resources (e.g., computing, network, and storage resources) to users in a software-defined manner.
The PaaS platform 520 is used to implement the runtime environment and application support functions of the communication system 400, so that instead of applying for raw virtual resources, users can apply for computing units within their quota to run their services. Optionally, a computing unit may be an instance, such as a container, and the communication system 400 deploys and runs the user's code by scheduling containers. It should be noted that the number of containers in the PaaS platform 520 may be one or more; only one container is taken as an example in fig. 5.
As one possible implementation, communication system 400 may inject one or more components into a container to enable deployment and execution of code.
The SaaS application 530 is configured to provide services to users by orchestrating, on the basis of the IaaS platform 510 and the PaaS platform 520, the application programs deployed by users so that they respond through application program interfaces; the application programs and the containers of the SaaS application 530 may communicate through a web server.
It should be noted that fig. 5 is only a schematic diagram, and should not be construed as limiting the present application, and other modules may be further included in the cloud platform hierarchy of the communication system 400, which is not shown in fig. 5.
The communication method provided by the application is specifically described below with reference to the accompanying drawings.
The steps of the communication method provided by the present application are performed by the controller 410 and the computing nodes in the communication system 400, and steps 610-630 of the communication method provided by the present application are described next in connection with fig. 6.
In step 610, the controller 410 clusters a plurality of instances included in at least two computing nodes to obtain at least two class clusters.
In some possible embodiments, the controller 410 clusters the plurality of instances according to a network configuration of the plurality of instances contained by the at least two computing nodes to obtain at least two class clusters.
A class cluster may be regarded as an application, i.e., the plurality of instances belonging to one class cluster are the set of instances comprised by one application; the result of the clustering is to divide the plurality of instances contained in the at least two computing nodes into different applications.
For example, as shown in fig. 7, computing node 420 contains instances 2 and 3, computing node 430 contains instances 1, 4, and 7, computing node 440 contains instances 8 and 9, and computing node 450 contains instances 5 and 6. Application 1 consists of instance 2 and instance 3 of computing node 420; application 2 consists of instance 1, instance 4, and instance 7 of computing node 430 and instance 8 of computing node 440; and application 3 consists of instance 5 and instance 6 of computing node 450 and instance 9 of computing node 440.
Optionally, the controller 410 clusters the plurality of instances according to the security group, subnet, elastic scaling group, or load balancing cluster to which the plurality of instances belong.
For example, instances belonging to one security group, subnet, elastic scaling group, or load balancing cluster are typically the same or similar applications. The controller 410 divides the instances belonging to the same security group, subnet, elastic scaling group, or load balancing cluster into the same class cluster; the specific steps are described with reference to fig. 10 and are not repeated here.
The load balancing cluster may be an application load balancer (ALB) cluster, a classic load balancer (CLB) cluster, a network load balancer (NLB) cluster, or the like.
Optionally, the controller 410 clusters the plurality of instances according to the network prefixes or application identifiers carried by the plurality of instances.
For example, in a tag-based access control scenario, different tags are typically used to distinguish different access rights, and instances with the same access rights are typically the same or similar applications. The controller 410 divides the instances carrying the same access control tag among the plurality of instances into the same class cluster.
As another example, when classless inter-domain routing (CIDR) is used with internet protocol version 6 (IPv6) for address assignment and classification, instances belonging to the same CIDR address block typically belong to the same or similar applications. The controller 410 divides the instances carrying the same classless inter-domain routing network prefix among the plurality of instances into the same class cluster.
For another example, in a scenario where an application identifier is recorded in an attribute of a network card, the application identifier in the network card represents an application attribute of the application to which the instance belongs, and instances with the same application identifier are usually the same or similar applications. The controller 410 divides the instances carrying application identifiers representing the same application attribute into the same class cluster.
Likewise, in a scenario where instances are hosted using a container orchestration engine such as Kubernetes, Kubernetes uses application identifiers to indicate which pods make up an application, and instances with the same application identifier are usually the same or similar applications. The controller 410 divides the instances carrying the application identifier representing the same Kubernetes application into the same class cluster; the specific steps are described with reference to fig. 9 and are not repeated here.
The present application does not limit the particular manner in which the controller 410 clusters the instances; the above are merely examples of possible embodiments, and in other possible embodiments additional clustering schemes may be employed. For example, the controller 410 may also divide instances with similar traffic into one class cluster by collecting traffic information and applying machine learning; the clustering schemes are not exhaustively listed here.
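As a minimal sketch of the grouping logic shared by the clustering criteria above (the key functions shown in the comments are hypothetical attribute names):

```python
from collections import defaultdict

def cluster_instances(instances, key_fn):
    """Group instances into class clusters by a shared network-configuration
    key; each resulting list of instances is one class cluster (application)."""
    clusters = defaultdict(list)
    for inst in instances:
        clusters[key_fn(inst)].append(inst)
    return list(clusters.values())

# e.g., clustering by security group, CIDR prefix, or application identifier:
# clusters = cluster_instances(all_instances, key_fn=lambda i: i.security_group)
# clusters = cluster_instances(all_instances, key_fn=lambda i: i.cidr_prefix)
# clusters = cluster_instances(all_instances, key_fn=lambda i: i.app_id)
```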
In step 610, the clustering operation determines which instances each application includes, and thus the set of network card information corresponding to each application's instances (such as internet protocol (IP) address, media access control (MAC) address, virtual private cloud, and other information). Then, in step 620, by analyzing the traffic information between instances, the controller 410 can determine which network cards exchange traffic, that is, the traffic interaction information between the plurality of instances, and thereby determine which applications have traffic interaction, that is, the traffic interaction information between the at least two class clusters, so as to determine the configuration entry forwarding relationship.
In step 620, the controller 410 determines the configuration entry forwarding relationship according to the traffic interaction information between the at least two class clusters.
In some possible embodiments, the controller 410 determines the traffic interaction information between the at least two class clusters according to the traffic interaction information between the plurality of instances, and determines the configuration entry forwarding relationship according to the traffic interaction information between the at least two class clusters.
The configuration entry forwarding relationship is used to instruct issuing, to the computing node, the configuration entries of the instances contained in the corresponding class clusters, where a corresponding class cluster is a class cluster that has traffic interaction with a class cluster to which an instance contained in the computing node belongs.
Taking the clustering result shown in fig. 7 as an example, referring to fig. 8, the controller 410 determines that traffic exists between instance 2 and instance 3 in application 1 and instance 1 in application 2, and between instance 4 in application 2 and instance 5 and instance 6 in application 3 (represented by double-arrow connection lines), and therefore determines that traffic interaction exists between application 1 and application 2 and between application 2 and application 3.
As a possible implementation, the controller 410 first obtains the traffic interaction information between the plurality of instances, and determines that the class clusters to which each pair of instances with traffic interaction respectively belong have traffic interaction, so as to obtain the traffic interaction information between the at least two class clusters.
Alternatively, the controller 410 may determine traffic interaction information between multiple instances through a flow table (e.g., br-int flow table, br-tun flow table, etc.) of the virtual switch of each computing node.
Optionally, the controller 410 may acquire service traffic information (such as 3-tuples, 5-tuples, etc.) through the connection tracking entries of each computing node, so as to determine the traffic information between the network cards of the plurality of instances, that is, the traffic interaction information between the plurality of instances.
The present application does not limit the manner in which the controller 410 collects the traffic interaction information between the plurality of instances; determining the traffic interaction information of instances based on flow tables and connection tracking entries is merely an example, and the controller 410 may also determine the traffic interaction information of instances from traffic-related data such as flow logs.
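A sketch of lifting instance-level traffic interaction to cluster-level interaction, under the assumption that flows have already been collected as source/destination instance pairs:

```python
def cluster_interactions(flows, cluster_of):
    """Lift instance-level traffic to cluster-level interaction: every pair of
    instances with observed traffic marks their two class clusters as
    interacting (pairs within one cluster are skipped).

    flows: iterable of (src_instance, dst_instance) pairs gathered from flow
           tables, connection tracking entries, or flow logs
    cluster_of: mapping from instance to its class cluster id
    """
    interacting = set()
    for src, dst in flows:
        a, b = cluster_of[src], cluster_of[dst]
        if a != b:
            interacting.add(frozenset((a, b)))   # unordered cluster pair
    return interacting
```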
As a possible implementation, the controller 410 determines the configuration entry forwarding relationship according to the traffic interaction information between the at least two class clusters, so as to issue, to at least two computing nodes in the computing node cluster according to the configuration entry forwarding relationship, the configuration entries of the instances that each computing node itself needs to access.
Optionally, the configuration entry forwarding relationship is used to instruct issuing, to the first computing node, the configuration entries of other class clusters that have traffic interaction with the class clusters of the first computing node, where the first computing node is one of the at least two computing nodes in the computing node cluster (e.g., computing node 420, computing node 430, computing node 440, or computing node 450).
Optionally, the configuration entry forwarding relationship includes: issuing, to the first computing node, the configuration entries of the instances contained in the class clusters corresponding to the first computing node, where the corresponding class clusters include the class clusters that have traffic interaction with the class clusters to which the instances contained in the first computing node belong.
Optionally, the corresponding class cluster further includes a class cluster to which the instance included in the first computing node belongs.
Continuing with the traffic interaction between applications in fig. 8 as an example, according to the traffic interaction information of application 1, application 2, and application 3, and the applications and computing nodes to which each of instances 1-9 belongs, the resulting configuration entry forwarding relationship may be as follows:
Illustratively, the computing node 420 contains instance 2 and instance 3; application 1, to which instance 2 and instance 3 belong, has traffic interaction with application 2, so the class cluster corresponding to the computing node 420 is application 2. The configuration entry forwarding relationship includes: the controller 410 issues the configuration entries of the instances contained in application 2, i.e., the configuration entries of instance 1, instance 4, instance 7, and instance 8, to the computing node 420.
Illustratively, the computing node 430 contains instance 1, instance 4, and instance 7; application 2, to which they belong, has traffic interaction with application 1 and application 3, respectively, so the class clusters corresponding to the computing node 430 are application 1 and application 3. The configuration entry forwarding relationship includes: the controller 410 issues the configuration entries of the instances contained in application 1 and application 3, i.e., the configuration entries of instance 2, instance 3, instance 5, instance 6, and instance 9, to the computing node 430.
Illustratively, the computing node 440 contains instance 8 and instance 9; application 2, to which instance 8 belongs, has traffic interaction with application 1 and application 3, and application 3, to which instance 9 belongs, has traffic interaction with application 2, so the class clusters corresponding to the computing node 440 are application 1, application 2, and application 3. The configuration entry forwarding relationship includes: the controller 410 issues the configuration entries of the instances contained in application 1, application 2, and application 3, i.e., the configuration entries of instances 1-9, to the computing node 440.
Illustratively, the computing node 450 contains instance 5 and instance 6; application 3, to which they belong, has traffic interaction with application 2, so the class cluster corresponding to the computing node 450 is application 2. The configuration entry forwarding relationship includes: the controller 410 issues the configuration entries of the instances contained in application 2, i.e., the configuration entries of instance 1, instance 4, instance 7, and instance 8, to the computing node 450.
Illustratively, the configuration entry forwarding relationship may further include: the controller 410 issues, to the first computing node, the configuration entries of the instances of the class clusters to which the first computing node's own instances belong. For example, the controller 410 issues the configuration entries of instance 2 and instance 3 to the computing node 420, and issues the configuration entries of instance 1, instance 4, instance 7, and instance 8 to the computing node 430.
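Combining the clustering result with the cluster-level interactions, the per-node issue set implied by the forwarding relationship can be sketched as follows (helper names are hypothetical; the closing comment reproduces the computing node 420 case from the example above):

```python
def entries_for_node(node_instances, cluster_of, members_of, interacting):
    """Determine which instances' configuration entries to issue to one node:
    all instances of every class cluster that interacts with a class cluster
    of the node's own instances (optionally plus the node's own clusters).

    node_instances: instances running on the node
    members_of: mapping from cluster id to the instances it contains
    interacting: set of frozenset cluster pairs from cluster_interactions()
    """
    local = {cluster_of[inst] for inst in node_instances}
    targets = set()
    for pair in interacting:
        a, b = tuple(pair)
        if a in local:
            targets.add(b)
        if b in local:
            targets.add(a)
    return [inst for c in targets for inst in members_of[c]]

# fig. 8 example: node 420 runs instances 2 and 3 (application 1); application 1
# interacts with application 2, so the node receives the entries of instances
# 1, 4, 7, and 8.
```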
In step 620, the controller 410 determines the configuration entries to be issued according to the actual traffic between instances, where the configuration entry of each instance is determined by the controller 410 according to the configuration information, stored in the controller, of the network card corresponding to that instance.
In step 630, the controller 410 sends the configuration entries to the at least two computing nodes according to the configuration entry forwarding relationship.
In some possible embodiments, according to the configuration entry forwarding relationship, the controller 410 issues, to at least two computing nodes in the computing node cluster, the configuration entries of the instances contained in the class clusters corresponding to those computing nodes.
Optionally, issuing a configuration entry to a computing node means that the controller 410 issues the configuration entry to the virtual switch of the computing node, so that the computing node can communicate, based on the configuration entry, with the instance corresponding to the configuration entry through the virtual switch.
Based on steps 610-630 of the communication method described above, the controller 410 determines, based on the traffic interaction information between different class clusters, the configuration entries of the instances that need to be issued to each computing node. In this way, the controller 410 only needs to issue to each computing node the configuration entries of the instances that the computing node needs to communicate with. Compared with full-volume issuing, this reduces the total volume of configuration entries issued, and traffic interworking between instances no longer has to wait for a virtual switch that has received the configuration entries of all instances in the cloud platform to bring them into effect, which reduces the network delay of communication between instances of the cloud platform. Meanwhile, the configuration entries that the controller 410 sends to a computing node include the configuration entries of all instances of the other class clusters with which the class clusters of that computing node's instances need to communicate, so deployment of the configuration entries is completed before the instances of the computing node interact with the instances of the other class clusters. Compared with on-demand issuing, the access traffic between instances does not need to be forwarded by a forwarding node, which avoids the first-packet delay, reduces the network delay of communication between instances of the cloud platform, and improves the overall communication efficiency between instances.
The communication method of the present application is integrally described above with reference to fig. 6-8, and a specific clustering step and a specific configuration table entry issuing manner of the communication method in a cloud native application scenario are described below with reference to fig. 9.
As shown in fig. 9, taking a cluster-hosted container product as an example of the cloud native application scenario, cloud vendors provide highly scalable, high-performance enterprise-grade Kubernetes clusters that support running containers (also referred to as instances) and make it easy to deploy, manage, and scale containerized applications.
The communication system of the cloud native application scenario includes a VPC controller 910, a Kubernetes controller 920, and a computing node cluster including one or more computing nodes (computing nodes 930, 940, 950, and 960 are shown in fig. 9). The VPC controller 910 is connected to the Kubernetes controller 920, and the Kubernetes controller 920 is connected to each computing node in the computing node cluster (represented by the non-arrowed connection lines in fig. 9).
The Kubernetes controller 920 is configured to issue an instance through an API provided by the cloud platform, create a container for an application in the instance, call the VPC controller 910 to create a network resource, and mount a network card or an auxiliary network card in the container.
An application is implemented by containers (also referred to as pods), as shown in fig. 9, which are deployed in the virtual machines of the computing nodes. For example, as shown in fig. 9, computing node 930 includes virtual machine 1 and virtual machine 2, computing node 940 includes virtual machine 3 and virtual machine 4, computing node 950 includes virtual machine 5, and computing node 960 includes virtual machine 6. Virtual machine 1 is deployed with container 1 and container 2, virtual machine 2 is deployed with container 3, virtual machine 3 is deployed with container 4 and container 5, virtual machine 4 is deployed with container 6, virtual machine 5 is deployed with container 7 and container 8, and virtual machine 6 is deployed with container 9 and container 10.
An application consists of a set of containers of the same service. In fig. 9, container 1, container 6 and container 8 belong to application 1, container 2 and container 4 belong to application 2, container 3, container 5 and container 7 belong to application 3, container 9 belongs to application 4, and container 10 belongs to application 5.
The clustering of containers is illustrated below.
As a first possible implementation, the Kubernetes controller 920 may populate an application identifier in the network card attributes when creating a network card or an auxiliary network card. For example, if application 1, which consists of container 1, container 6, and container 8, is regarded as the blue application, then the blue application identifier application_id is added to the binding profile attribute. The VPC controller 910 determines that a container belongs to application 1 by querying the application identifier in the network card attributes of the network card corresponding to the container.
As a second possible implementation, the Kubernetes controller 920 stores mapping information between applications and network cards or auxiliary network cards, for example in a remote dictionary server (Redis) or etcd storage medium. The VPC controller 910 determines the application to which a container belongs by querying the mapping information of the network card corresponding to the container.
As a third possible implementation, the Kubernetes controller 920 manages the containers of the computing nodes through a Kubernetes agent on each computing node, and the agent configures the application identifier on the network card or auxiliary network card connected to the virtual switch. The VPC controller 910 determines the application to which a container belongs by querying the application identifier on the network card connected to the virtual switch corresponding to the container.
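As a sketch of the first implementation above, assuming the application identifier is written under a hypothetical application_id key in the network card's binding profile:

```python
APP_ID_KEY = "application_id"   # hypothetical attribute key, per the example above

def tag_network_card(card, app_id):
    """Kubernetes controller side: record the application identifier in the
    binding profile when creating the network card or auxiliary network card."""
    card.binding_profile[APP_ID_KEY] = app_id        # e.g., "blue" for application 1

def application_of(card):
    """VPC controller side: recover the application a container belongs to by
    querying the network card attribute (first implementation above)."""
    return card.binding_profile.get(APP_ID_KEY)
```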
The following describes the configuration entry issuing method specifically.
As can be seen from the communication relationships between containers shown in fig. 9, when application 1, to which container 1, container 6, and container 8 belong, scales elastically and generates container a, and there is traffic interaction between application 1 and application 3, to which container 3, container 5, and container 7 belong (traffic between containers is shown by the double-headed dashed arrows in fig. 9), the VPC controller 910 needs to issue the configuration entries of container 3, container 5, and container 7 to the computing node 950, and issue the configuration entries of container 8 to the computing node 930 and the computing node 940.
After the communication method in the cloud native application scenario is described above with reference to fig. 9, a specific clustering step and a specific configuration table entry issuing manner of the communication method in the traditional application scenario are described next with reference to fig. 10.
As shown in fig. 10, clients purchase or lease VPCs, virtual machines, load balancing (LB) services, and the like through the cloud vendor's web pages, and deploy their own applications in the virtual machines.
The communication system of the conventional application scenario comprises a VPC controller 1010 and a computing node cluster comprising one or more computing nodes (computing nodes 1020, 1030, 1040 and 1050 are shown in fig. 10). Wherein the VPC controller 1010 is respectively connected to each computing node in the cluster of computing nodes (indicated by the non-arrowed connection lines in fig. 10).
For example, as shown in fig. 10, computing node 1020 includes virtual machine 1, virtual machine 2, and virtual machine 3, computing node 1030 includes virtual machine 4, virtual machine 5, and virtual machine 6, computing node 1040 includes virtual machine 7 and virtual machine 8, and computing node 1050 includes virtual machine 9.
The clustering of virtual machines is illustrated below.
After purchasing or leasing virtual machines, a client typically places the same or similar applications into the same security group, i.e., the same application has the same security access policy, or places the IPs of the same application into one address group and configures security group rules between two address groups. For example, virtual machine 1, virtual machine 6, and virtual machine 8 have the same security group configuration 1; virtual machine 2 and virtual machine 4 have the same security group configuration 2; virtual machine 3, virtual machine 5, and virtual machine 7 have the same security group configuration 3; and virtual machine 9 has security group configuration 4.
The VPC controller 1010 performs clustering according to the security group configurations. For example, the VPC controller 1010 determines that virtual machine 1, virtual machine 6, and virtual machine 8 have the same security group configuration 1, virtual machine 2 and virtual machine 4 have the same security group configuration 2, virtual machine 3, virtual machine 5, and virtual machine 7 have the same security group configuration 3, and virtual machine 9 has security group configuration 4; it then clusters, according to the different security group configurations, virtual machine 1, virtual machine 6, and virtual machine 8 as application 1, virtual machine 2 and virtual machine 4 as application 2, virtual machine 3, virtual machine 5, and virtual machine 7 as application 3, and virtual machine 9 as application 4.
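This security-group clustering is a direct instance of the generic grouping sketch given earlier; for the fig. 10 example it might look like the following (the vm and sg names are illustrative):

```python
from collections import defaultdict

# illustrative data for fig. 10: virtual machine -> security group configuration
security_group_of = {
    "vm1": "sg1", "vm6": "sg1", "vm8": "sg1",   # -> application 1
    "vm2": "sg2", "vm4": "sg2",                 # -> application 2
    "vm3": "sg3", "vm5": "sg3", "vm7": "sg3",   # -> application 3
    "vm9": "sg4",                               # -> application 4
}

applications = defaultdict(list)
for vm, sg in security_group_of.items():
    applications[sg].append(vm)
# applications == {"sg1": ["vm1", "vm6", "vm8"], "sg2": ["vm2", "vm4"],
#                  "sg3": ["vm3", "vm5", "vm7"], "sg4": ["vm9"]}
```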
The following describes the configuration entry issuing method specifically.
When application 1 needs to add a virtual machine a, it can be seen from the communication relationships between the virtual machines shown in fig. 10 (indicated by the double-arrow dashed connection lines in fig. 10) that there is traffic interaction between application 1 and application 3, between application 2 and application 4, and between application 2 and application 3. Accordingly, the VPC controller 1010 needs to issue the configuration entries of virtual machine 4, virtual machine 6, virtual machine 7, and virtual machine 8 to the computing node 1020, the configuration entries of virtual machine 2, virtual machine 3, virtual machine 8, and virtual machine 9 to the computing node 1030, the configuration entries of virtual machine 2, virtual machine 3, and virtual machine 6 to the computing node 1040, and the configuration entry of virtual machine 4 to the computing node 1050.
The communication method provided by the present application is described in detail above in connection with fig. 6-10. A communication apparatus for performing the communication method provided by the present application is described in detail below with reference to fig. 11.
Fig. 11 is a schematic structural diagram of a communication device according to the present application. The communication device can be used to implement the functions of the corresponding communication equipment in the above method embodiments, and therefore can also achieve the beneficial effects of the method embodiments. In this embodiment, the communication device may be the controller 410 shown in fig. 4, or may be a module (such as a chip) applied to a server.
As shown in fig. 11, the communication device 1100 includes a clustering module 1110, a matching module 1120, and a transceiving module 1130.
In some possible embodiments, the communication device 1100 may be configured to implement the functions of the controller 410 in the method embodiment shown in fig. 6, where each module included in the communication device 1100 is specifically configured to implement the functions described below.
The clustering module 1110 is configured to cluster a plurality of instances included in at least two computing nodes to obtain at least two class clusters, where each class cluster in the at least two class clusters includes at least one instance. For example, the clustering module 1110 is configured to perform step 610 as shown in fig. 6.
The matching module 1120 is configured to determine a configuration table entry forwarding relationship according to traffic interaction information between at least two class clusters, where the configuration table entry forwarding relationship is configured to instruct to issue, to a first computing node, a configuration table entry of another class cluster having traffic interaction with the class cluster of the first computing node, and the first computing node is one of the at least two computing nodes. For example, the matching module 1120 is configured to perform step 620 as shown in fig. 6.
The transceiver module 1130 is configured to send the configuration table entry to at least two computing nodes according to the configuration table entry forwarding relationship. For example, transceiver module 1130 is configured to perform step 630 as shown in fig. 6.
As one possible implementation, the configuration entry forwarding relationship includes: issuing, to the first computing node, the configuration entries of the instances contained in the class clusters corresponding to the first computing node, where the corresponding class clusters include the class clusters that have traffic interaction with the class clusters to which the instances contained in the first computing node belong.
As a possible implementation manner, the corresponding class cluster further includes a class cluster to which the instance included in the first computing node belongs.
As a possible implementation manner, the communication device further includes a traffic processing module, configured to determine traffic interaction information between at least two clusters according to the traffic interaction information between the multiple instances.
Optionally, the traffic processing module is specifically configured to: acquire the traffic interaction information between the plurality of instances; and determine that the class clusters to which each pair of instances with traffic interaction respectively belong have traffic interaction, to obtain the traffic interaction information between the at least two class clusters.
As one possible implementation, the clustering module 1110 is specifically configured to: and clustering the multiple instances according to network configuration of the multiple instances contained in the at least two computing nodes to obtain at least two class clusters.
Optionally, the clustering module 1110 is specifically configured to: cluster the instances belonging to the same security group, subnet, elastic scaling group, or load balancing cluster among the plurality of instances to obtain the at least two class clusters.
Optionally, the clustering module 1110 is specifically configured to: cluster the instances carrying the same network prefix or application identifier among the plurality of instances to obtain the at least two class clusters, where the application identifier includes an identifier in the network card used to represent an application attribute and an identifier used by the container orchestration engine to divide the applications to which containers belong.
It should be appreciated that the communication device 1100 according to the embodiments of the present application may be implemented by a central processing unit (CPU), an ASIC, or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), an FPGA, generic array logic (GAL), or any combination thereof. When the communication device 1100 implements the communication method shown in fig. 6 by software, the communication device 1100 and its respective modules may also be software modules.
It should be understood that the controller 410 and the like in the embodiments of the present application may correspond to the communication device 1100, and may correspond to the corresponding body performing the method according to the embodiments of the present application; the foregoing and other operations and/or functions of the modules in the communication device 1100 are respectively intended to implement the corresponding flows of the method in fig. 6, and are not repeated here for brevity.
As shown in fig. 12, the present application further provides a communication device 1200 including a memory 1201, a processor 1202, a communication interface 1203, and a bus 1204, where the memory 1201, the processor 1202, and the communication interface 1203 are communicatively coupled to each other via the bus 1204. The communication device 1200 may be the controller 410 of fig. 6, etc.
The memory 1201 may be a read-only memory, a static storage device, a dynamic storage device, or a random access memory. The memory 1201 may store computer instructions and the data sets required to execute them; when the computer instructions stored in the memory 1201 are executed by the processor 1202, the processor 1202 and the communication interface 1203 are configured to perform any of the steps of the communication method shown in fig. 6.
The processor 1202 may be a general-purpose central processing unit, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or any combination thereof. The processor 1202 may include one or more chips, and may include an AI accelerator such as a neural processing unit (NPU). In addition, although fig. 12 takes the case where each communication device 1200 includes one processor 1202 as an example, in a specific implementation the number and types of processors 1202 in each communication device 1200 may be set according to service requirements; the same communication device 1200 may include one or more processors, and when multiple processors are included, the present application does not limit their types.
The communication interface 1203 uses a transceiver module, such as, but not limited to, a transceiver, to enable communication between the communication device 1200 and other devices or communication networks. For example, a management message, an isolation request, etc. may be received or sent through the communication interface 1203.
The bus 1204 may include a path for transferring information between various components of the communication device 1200 (e.g., the memory 1201, the processor 1202, the communication interface 1203).
Communication paths may be established between each of the communication devices 1200 described above through a communication network as shown in fig. 12. Any of the communication devices 1200 may be a computer in a distributed storage system (e.g., a server), or a computer in an edge data center, or a terminal communication device.
Each communication device 1200 may have disposed thereon the functions of the communication apparatus 1100, for example, performing any of the steps shown in fig. 6, or performing the functions of the modules in the communication apparatus 1100.
The method steps in this embodiment may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a terminal device. The processor and the storage medium may also reside as discrete components in a network device or terminal device.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device, or other programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium, e.g., a floppy disk, hard disk, or magnetic tape; an optical medium, such as a digital video disc (DVD); or a semiconductor medium, such as a solid state drive (SSD). While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.