CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims benefit of and priority to International Patent Application No. PCT/CN2022/135824, filed Dec. 1, 2022, entitled “TECHNIQUES FOR APPLYING A NAMED PORT SECURITY POLICY,” and assigned to the assignee hereof, the contents of which are hereby incorporated by reference in their entirety.
BACKGROUND

Software defined networking (SDN) involves a plurality of hosts in communication over a physical network infrastructure of a data center (e.g., an on-premise data center or a cloud data center). The physical network to which the plurality of physical hosts are connected may be referred to as an underlay network. Each host has one or more virtualized endpoints such as virtual machines (VMs), containers, Docker containers, data compute nodes, isolated user space instances, namespace containers, and/or other virtual computing instances (VCIs), that are connected to, and may communicate over, logical overlay networks. For example, the VMs and/or containers running on the hosts may communicate with each other using an overlay network established by hosts using a tunneling protocol.
As part of an SDN, any arbitrary set of VMs in a datacenter may be placed in communication across a logical Layer 2 (L2) overlay network by connecting them to a logical switch. A logical switch is an abstraction of a physical switch that is collectively implemented by a set of virtual switches on each host that has a VM connected to the logical switch. The virtual switch on each host operates as a managed edge switch implemented in software by a hypervisor on each host. Virtual switches provide packet forwarding and networking capabilities to VMs running on the host. In particular, each virtual switch uses software-based switching techniques to connect and transmit data between VMs on a same host, or different hosts.
Further, in some cases, multiple applications packaged into one or more groups of containers, referred to as pods, may be deployed on a single VM or a physical machine. The single VM or physical machine running a pod may be referred to as a node running the pod. In particular, a container is a package that relies on virtual isolation to deploy and run applications that access a shared operating system (OS) kernel. From a network standpoint, containers within a pod share a same network namespace, meaning they share the same internet protocol (IP) address or IP addresses associated with the pod.
A network plugin, such as a container networking interface (CNI) plugin, may be used to create virtual network interface(s) usable by the pods for communicating on respective logical networks of the SDN infrastructure. In particular, the CNI plugin is a runtime executable that configures a network interface, referred to as a CNI, into a container network namespace. The CNI plugin is further configured to assign a network address (e.g., an IP address) to each created network interface (e.g., for each pod) and may also add routes relevant for the interface. Pods can communicate with each other using their respective IP addresses. For example, packets sent from a source pod to a destination pod may include a source IP address of the source pod and a destination IP address of the destination pod, so that the packets are appropriately routed over a network from the source pod to the destination pod.
Traffic for a particular application, such as a particular container running the application, within a pod may be addressed using a Layer 4 destination port number associated with the application/container. For example, different containers within a pod may listen to specific destination port numbers, such that any particular container within a particular pod can be addressed using the IP address of the particular pod in conjunction with the port number (also referred to as a “port” or “container port”) associated with the particular container. Accordingly, the packets may further include a source port number and a destination port number. The source port number may identify a particular source, such as a particular source container within the pod associated with the source IP address. Further, the destination port number may identify a particular destination, such as a particular destination container within the pod associated with the destination IP address. The port number may be considered a transport layer (e.g., Layer 4) address to differentiate between applications (e.g., containers running applications) or other service endpoints. The port number may refer to a transmission control protocol (TCP) or a user datagram protocol (UDP) port, or the like.
Communication between pods of a VM may be accomplished via use of a virtual switch implemented in the VM. The virtual switch may include one or more virtual ports (Vports) that provide logical connection points between pods. For example, a CNI of a first pod and a CNI of a second pod may connect to Vport(s) provided by the virtual switch to allow for communication between the first and second pods. In this context “connect to” refers to the capability of conveying network traffic, such as individual network packets, or packet descriptors, pointers, identifiers, etc., between components so as to effectuate a virtual datapath between software components.
It should be noted that a virtual port (Vport) on a virtual switch and a physical port on a physical switch are not related to the TCP or UDP port numbers included in a packet header as discussed above. Physical switches include physical ports for connecting ethernet cables, allowing the switch to direct packets from one port to another and thus from one device or endpoint to another. Vports are analogous to physical ports, except that they are implemented in software to connect software elements and allow packets to be exchanged between them. Accordingly, each of a Vport and a physical port is different than a TCP or UDP port. A port number, as used herein unless explicitly stated otherwise, refers to a Layer 4 address (of the OSI model) used to differentiate between, for example, applications or other service endpoints, such as a TCP or UDP port number.
Further, a port may be referred to herein in the context of a “port of a pod” or a “port associated with an application running on the pod.” Traffic sent or received from such a “port of a pod” or a “port associated with an application running on the pod” may refer to traffic with a source or destination IP address of the pod and a source or destination port number, respectively, of the port of the application associated with the traffic.
In some cases, to control traffic flow at the IP address and/or port level, one or more network policies may be defined for the pods. Network policies are used to control the traffic into (e.g., ingress) and/or out of (e.g., egress) pods. The network policies may indicate that certain traffic is allowed (e.g., where by default traffic is not allowed, such as being dropped or rejected), or that certain traffic is not allowed (e.g., where by default traffic is allowed). It should be noted that ingress and egress may be defined with the same default (allowed or not allowed) or with different defaults for a given pod. For example, by default, all inbound and outbound traffic for a pod may be allowed, and network policies may indicate that certain types of traffic are not allowed. In some cases, the network policies may indicate that certain types of traffic are not allowed by explicitly identifying the traffic which is not allowed. In some other cases, the network policies may indicate that certain types of traffic are not allowed by explicitly limiting the traffic which is allowed. Accordingly, ingress and/or egress traffic for the pod may be restricted where a network policy limits such traffic for the pod. In another example, by default, all inbound and outbound traffic for a pod may be restricted, and network policies may indicate that certain types of traffic are allowed. In some cases, the network policies may indicate that certain types of traffic are allowed by explicitly identifying the traffic which is allowed. In some other cases, the network policies may indicate that certain types of traffic are allowed by explicitly limiting the traffic which is restricted. As an illustrative example, when a pod is restricted for ingress with a default rule that traffic is not allowed, the only allowed traffic into the pod is traffic from one or more endpoints explicitly identified as allowed by the ingress rules of the network policy. An endpoint may be a physical endpoint, such as a server, or a virtualized endpoint, such as a pod. In another example, when a pod is restricted for egress with a default rule that traffic is not allowed, the only allowed traffic from the pod is traffic to other endpoints explicitly identified as allowed by the egress rules of the network policy.
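To make the two default postures concrete, the following minimal sketch (in Python, not any particular product's policy engine) shows how a first-match rule list combines with a default action; the packet fields and rule shapes are illustrative assumptions:

def evaluate(packet, rules, default_allow):
    # Each rule is a (match_function, allow_boolean) pair; the first rule
    # that matches the packet decides whether it is allowed.
    for match, allow in rules:
        if match(packet):
            return allow
    # No rule matched: fall back to the pod's default posture.
    return default_allow

# Default-deny ingress: only traffic explicitly identified by a rule passes.
ingress_rules = [(lambda pkt: pkt["src_ip"] == "192.168.0.12", True)]
print(evaluate({"src_ip": "192.168.0.12"}, ingress_rules, default_allow=False))  # True
print(evaluate({"src_ip": "192.168.0.99"}, ingress_rules, default_allow=False))  # False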
When defining a network policy, in some cases, ports, such as associated with applications running in containers of pods, for which ingress traffic is allowed for the pods may be identified by their port name. The same may be true for egress traffic. A port name is a name given to (1) a port (e.g., a port number) associated with an application running within a single pod or (2) ports associated with applications running within different pods under a same namespace. Configuration files made up of one or more manifests that declare intended system infrastructure (e.g., pods, containers, etc.) and applications to be deployed in the system may specify the port name for each of the different ports, where applicable.
For example, a configuration file may be used to create a web deployment pod corresponding to a first pod and a database deployment pod corresponding to a second pod in a same namespace. A first port number defined in the configuration file for the web deployment pod may be “8080,” and a port name for port number “8080” defined for the web deployment pod may be “server.” A second port number defined in the configuration file for the database deployment pod may also be “8080,” and the port name for port number “8080” defined for the database deployment pod may also be “server.” In an example case where a user defines a security policy that indicates that traffic to ports named “server” may be allowed based on egress rules of the policy, application of the network policy may allow traffic with a destination port number “8080” from a source of the traffic (e.g., a source pod). Such application of the network policy correctly allows traffic with a destination port number “8080” to both the web deployment pod and the database deployment pod, meaning traffic with a destination IP address of either the web deployment pod or the database deployment pod and the destination port number “8080.”
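For illustration, the two deployments above can be summarized as data, together with the name-to-number resolution a policy engine performs; this is a sketch using Kubernetes-style containerPort/name fields, not the literal configuration file:

# The web and database deployment pods described above, both naming
# containerPort 8080 "server" (illustrative data, not a verbatim manifest).
web_pod = {"name": "web-deployment",
           "containers": [{"ports": [{"containerPort": 8080, "name": "server"}]}]}
db_pod = {"name": "db-deployment",
          "containers": [{"ports": [{"containerPort": 8080, "name": "server"}]}]}

def resolve_named_port(pod, port_name):
    # Return every port number on the pod that carries the given port name.
    return [p["containerPort"]
            for c in pod["containers"]
            for p in c.get("ports", [])
            if p.get("name") == port_name]

# Both pods resolve "server" to 8080, so allowing egress to ports named
# "server" correctly allows destination port 8080 toward either pod.
assert resolve_named_port(web_pod, "server") == [8080]
assert resolve_named_port(db_pod, "server") == [8080]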
In some cases, however, application of a network policy, which uses named ports when defining ingress and/or egress rules for the policy, may result in allowing traffic to/from one or more pods that were intended to be restricted, or disallowing traffic to/from one or more pods that were intended to be allowed, thereby adversely affecting overall integrity and security of the system. For example, assume a configuration file defines a first port number for the web deployment pod as “8080,” with a port name “server.” Further, assume the configuration file defines a second port number for the database deployment pod as “8080,” with a port name “foo,” or does not define a port name at all for the port number “8080” assigned to the database deployment pod. In the example case, where a user defines a security policy that indicates that all traffic to ports named “server” may be allowed based on egress rules of the policy, application of the network policy may allow traffic with a destination port number “8080,” including traffic with a destination port number “8080” and a destination IP address of the database deployment pod. However, port number “8080” is not associated with the port name “server” for the database deployment pod, and therefore such traffic should not be allowed.
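Continuing the sketch above, the failure mode arises when the port name is resolved to a number once and the number alone is then matched, as in the following deliberately naive logic (hypothetical, shown only to illustrate the problem):

# Same database pod, but its port 8080 is now named "foo" rather than "server".
db_pod_foo = {"name": "db-deployment",
              "containers": [{"ports": [{"containerPort": 8080, "name": "foo"}]}]}

# Naive expansion: collect the numbers that carry the name anywhere, then
# allow those numbers toward *all* pods, ignoring each pod's own port names.
allowed_numbers = set(resolve_named_port(web_pod, "server"))   # {8080}
naively_allowed = [pod["name"]
                   for pod in (web_pod, db_pod_foo)
                   if any(p["containerPort"] in allowed_numbers
                          for c in pod["containers"] for p in c.get("ports", []))]
print(naively_allowed)  # ['web-deployment', 'db-deployment'] -- the database
                        # pod is allowed even though its port is not named "server".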
SUMMARY

One or more embodiments provide a method for implementing a network policy in a software defined networking (SDN) environment. The method generally includes receiving a manifest defining a plurality of pods in a namespace. For a first pod, the manifest defines a first environment value for an environment of the first pod, a first port number for a first container of the first pod, and a name for the first port number defined for the first container of the first pod. For a second pod, the manifest defines the first environment value for an environment of the second pod, a second port number for a second port associated with a second container of the second pod, and the name for the second port number defined for the second container of the second pod. Further, the manifest defines a security policy applied to a third pod. The security policy defines a first egress policy indicating the first environment value and the name. Based on the manifest indicating that the first port number is different than the second port number and that the first port number and the second port number share the name, the method further comprises creating separate egress firewall rules for the first pod and the second pod. The separate egress firewall rules include a first egress firewall rule to apply to packets with the third pod as a source, the first pod as a destination, and the first port number as a destination port. The separate egress firewall rules also include a second egress firewall rule to apply to packets with the third pod as a source, the second pod as a destination, and the second port number as a destination port. The method further includes configuring a firewall with the first egress firewall rule and the second egress firewall rule.
Further embodiments include a non-transitory computer-readable storage medium storing instructions that, when executed by a computer system, cause the computer system to perform the method set forth above, and a computer system including at least one processor and memory configured to carry out the method set forth above.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a computing system in which embodiments described herein may be implemented.
FIGS. 2A and 2B illustrate an example manifest defining a network policy for a particular namespace having multiple pods, according to an example embodiment of the present disclosure.
FIG. 3 illustrates example pods defined by the manifest illustrated in FIGS. 2A and 2B, according to an example embodiment of the present disclosure.
FIG. 4A illustrates an example workflow for determining ingress rules defined in an example network policy, according to an example embodiment of the present disclosure.
FIGS. 4B and 4C illustrate example ingress rules defined for the example pods illustrated in FIG. 3, according to an example embodiment of the present disclosure.
FIG. 5A illustrates an example workflow for determining egress rules defined in an example network policy, according to an example embodiment of the present disclosure.
FIGS. 5B and 5C illustrate example egress rules defined for the example pods illustrated in FIG. 3, according to an example embodiment of the present disclosure.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
DETAILED DESCRIPTION

Improved techniques for the application of a network policy that uses named ports when defining ingress and/or egress rules for the policy are described herein. Although described herein with respect to a network policy, the techniques may be similarly applied for the application of other policies, including other security policies that use named ports. The port name may not be a unique name; thus, the same port name may be given to ports (e.g., port numbers) associated with applications running within different pods, irrespective of which deployment and/or namespace the pods belong to. For example, the port name “secured” may be given to TCP port 8080 associated with a first application on a first pod and to TCP port 8181 associated with a second application on a second pod.
In certain aspects, a manifest may be created and used to configure network access policies for the pods. As described above, the network policy may (1) identify one or more pods where the policy is applied (referred to as the “applied-to pods”), (2) identify the type of traffic the network policy affects (e.g., ingress and/or egress traffic), (3) identify the action to apply (e.g., allow, drop (i.e., drop without informing the sender), or reject (i.e., drop and inform the sender the packet was dropped) packets), (4) for ingress policies, identify the pods that are sources of traffic for which the action applies, and (5) for egress policies, identify the pods that are destinations of traffic for which the action applies. The ingress and/or egress rules defined in the network policy may refer to particular port names to which the policy is applied. As discussed, the port names may be assigned to particular port numbers associated with applications running on pods. Thus, the ingress and/or egress rules may be applied to ingress and/or egress traffic to/from such ports associated with applications. Where a network policy identifies port numbers by their assigned port name, in addition to recognizing which port numbers have a port name matching the port name identified in the policy, techniques described herein further consider the port number when determining rules for ingress and/or egress. Certain examples herein may be discussed with respect to policies where the action to apply is “allow”; however, the techniques described herein also apply to policies with actions of different types, such as drop or reject.
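The five elements enumerated above might be captured in a structure along the following lines; the field names mirror the manifest described later with reference to FIG. 2B (“appliedTo,” “podSelector,” “direction,” “ports”), but the exact schema is an assumption for illustration:

named_port_policy = {
    "name": "named-port-policy",
    # (1) the applied-to pods
    "appliedTo": [{"podSelector": {"matchLabels": {"env": "prod"}}}],
    "rules": [
        # (2) traffic type, (3) action, (4) sources for ingress
        {"direction": "in", "action": "Allow",
         "from": [{"podSelector": {"matchLabels": {"env": "client"}}}]},
        # (2) traffic type, (3) action, (5) destinations for egress,
        # restricted to destination ports carrying the name "secured"
        {"direction": "out", "action": "Allow",
         "to": [{"podSelector": {"matchLabels": {"env": "db"}}}],
         "ports": [{"protocol": "TCP", "port": "secured"}]},
    ],
}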
For example, a network policy may identify destination ports for ingress traffic and/or destination ports for egress traffic to which to apply the policy by referring to a particular port name. A destination port to which to apply the policy may refer to a port associated with a destination application on a destination pod to which the traffic is directed. Accordingly, the policy may be applied to traffic with a destination IP address corresponding to the destination pod and a destination port number corresponding to the destination application. Where applications/containers of multiple pods are associated with destination port numbers with port names matching the particular port name, and at least one of the port numbers is different than the other port numbers, then multiple ingress or egress rules may be created. Each rule created may include ports of applications running in pods having a same port name and a same port number. Further, each of the sources and destinations defined for each rule created may be precisely limited to an internet protocol (IP) set group which contains a restricted list of IP addresses corresponding to different pods.
As a first illustrative example, a network policy may identify destination ports for egress traffic as ports having a name “secured.” Applications of two different pods may each be assigned a port named “secured.” While one of the ports (e.g., the first port of the first pod) may correspond to a port number “8080,” the other port (e.g., the second port of the second pod) may correspond to a port number “8181.” As such, two rules for egress may be created: a first rule including port number “8080” as the destination port for egress traffic and a second rule including port number “8181” as the destination port for egress traffic. Further, the first rule may identify a first IP set group including the IP address of the first pod as the destination for the egress rule, while the second rule may identify a second IP set group including the IP address of the second pod as the destination for the egress rule.
As a second illustrative example, instead of applications of only two pods having a port assigned the port name “secured,” an application running in a third pod may also have a port named “secured.” The port of the third pod (e.g., the third port) may be port number “8181.” In this example, two rules for egress may also be created. Since two pods share the port number “8181,” the second rule may concern traffic directed to both pods. In particular, the first rule applies to egress traffic having a destination port number “8080,” while the second rule applies to egress traffic having a destination port number “8181.” Further, the first rule may identify a first IP set group including the IP address of the first pod as the destination for the egress rule, while the second rule may identify a second IP set group including the IP address of the second pod and the IP address of the third pod as the destinations for the egress rule.
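Both illustrative examples reduce to the same operation: partition the pods whose ports carry the matching name by resolved port number, and emit one rule and one IP set group per distinct number. A minimal sketch of that partitioning (pod IP addresses here are hypothetical):

from collections import defaultdict

# (pod IP, resolved port number) for each pod with a port named "secured",
# matching the second example: ports 8080, 8181, and 8181.
matches = [("10.0.0.1", 8080),   # first pod
           ("10.0.0.2", 8181),   # second pod
           ("10.0.0.3", 8181)]   # third pod

groups = defaultdict(list)       # port number -> IP set group
for ip, port in matches:
    groups[port].append(ip)

egress_rules = [{"destination_ips": ips, "destination_port": port, "action": "allow"}
                for port, ips in sorted(groups.items())]
for rule in egress_rules:
    print(rule)
# Two rules result: port 8080 toward the first pod's IP set, and port 8181
# toward an IP set containing the second and third pods.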
In other words, a single rule specifying a particular port number can be applied to traffic that includes that port number even though the traffic is going to different pods. Additionally, separate ingress or egress rules may be created (e.g., based on the defined network policy) for different destination port numbers having a same port name.
Existing rules created based on application of the network policy may be continually updated. For example, when a new pod is created, where the new pod includes an application associated with a port assigned a port name matching a port name defined in the policy, the rules may be updated to account for the new pod/port such that the rules consider all pods/ports which are identified (e.g., via port name) by the network access policies. Additionally, when an existing port name associated with a port number or an existing port number associated with a port name changes, when pods and/or port names are removed or added, and/or when labels of resources, such as pods and/or namespaces, are changed, ingress and/or egress rules created based on the network policy may be updated.
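Because the mapping from port names to numbers can change at any time, one simple way to satisfy this requirement is to recompute the derived rules from the full pod inventory on every relevant event, as in this sketch (illustrative logic; real controllers would typically reconcile incrementally):

def desired_rules(pods, port_name):
    # Recompute all rules from scratch: one rule per distinct port number
    # carrying the name, so the result is always consistent with the inventory.
    groups = {}
    for pod in pods:
        if pod.get("port_name") == port_name:
            groups.setdefault(pod["port"], []).append(pod["ip"])
    return [{"dst_ips": ips, "dst_port": port} for port, ips in sorted(groups.items())]

inventory = [{"ip": "10.0.0.1", "port": 8080, "port_name": "secured"}]
print(desired_rules(inventory, "secured"))   # one rule, port 8080
# A newly created pod with a matching port name is picked up on recompute.
inventory.append({"ip": "10.0.0.2", "port": 8181, "port_name": "secured"})
print(desired_rules(inventory, "secured"))   # two rules, ports 8080 and 8181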
As such, the techniques described herein better support the application of a network policy that uses named ports when defining ingress and/or egress rules for the policy. Such techniques help to ensure that network access intended by a user is, in fact, carried out. In other words, traffic which is intended to be restricted for applications of one or more pods is restricted when the network policy is applied. The opposite is true for traffic which is to be allowed. This helps to ensure the security and integrity of the overall system. Additionally, the techniques described herein support a flexible distributed firewall rule definition that is able to handle constantly changing external conditions.
FIG. 1 depicts example physical and virtual network components in a networking environment 100 in which embodiments of the present disclosure may be implemented.
Networking environment 100 includes a data center 101. Data center 101 includes one or more hosts 102, a management network 192, a data network 170, a network controller 174, a network manager 176, and a container orchestrator 178. Data network 170 and management network 192 may be implemented as separate physical networks or as separate virtual local area networks (VLANs) on the same physical network.
Host(s) 102 may be communicatively connected to data network 170 and management network 192. Data network 170 and management network 192 are also referred to as physical or “underlay” networks, and may be separate physical networks or the same physical network as discussed. As used herein, the term “underlay” may be synonymous with “physical” and refers to physical components of networking environment 100. As used herein, the term “overlay” may be used synonymously with “logical” and refers to the logical network implemented at least partially within networking environment 100.
Host(s) 102 may be geographically co-located servers on the same rack or on different racks in any arbitrary location in the data center. Host(s) 102 may be configured to provide a virtualization layer, also referred to as a hypervisor 106, that abstracts processor, memory, storage, and networking resources of a hardware platform 108 into multiple VMs 1041-104x (collectively referred to herein as “VMs 104” and individually referred to herein as “VM 104”).
Host(s) 102 may be constructed on a server-grade hardware platform 108, such as an x86 architecture platform. Hardware platform 108 of a host 102 may include components of a computing device such as one or more processors (CPUs) 116, system memory 118, one or more network interfaces (e.g., physical network interface cards (PNICs) 120), storage 122, and other components (not shown). A CPU 116 is configured to execute instructions, for example, executable instructions that perform one or more operations described herein and that may be stored in the memory and storage system. The network interface(s) enable host 102 to communicate with other devices via a physical network, such as management network 192 and data network 170.
Hypervisor 106 includes a virtual switch 140. A virtual switch 140 may be a virtual switch attached to a default port group defined by network manager 176 and provide network connectivity to a host 102 and VMs 104 on the host. Port groups include subsets of Vports of a virtual switch, each port group having a set of logical rules according to a policy configured for the port group. Each port group may comprise a set of Vports associated with one or more virtual switches on one or more hosts. Vports associated with a port group may be attached to a common VLAN according to the IEEE 802.1Q specification to isolate the broadcast domain.
A virtual switch 140 may be a virtual distributed switch (VDS). In this case, each host 102 may implement a separate virtual switch corresponding to the VDS, but the virtual switches 140 at each host 102 may be managed like a single virtual distributed switch (not shown) across the hosts 102.
Each of VMs 104 running on host 102 may include virtual interfaces, often referred to as virtual network interface cards (VNICs), such as VNICs 146, which are responsible for exchanging packets between VMs 104 and hypervisor 106. VNICs 146 can connect to Vports 144, provided by virtual switch 140. Virtual switch 140 also has Vport(s) 142 connected to PNIC(s) 120, such as to allow VMs 104 to communicate with virtual or physical computing devices outside of host 102 via data network 170 and/or management network 192.
Each VM 104 may also implement a virtual switch 148 for forwarding ingress packets to various entities running within the VM 104. For example, the various entities running within each VM 104 may include pods 154 including containers 130.
In particular, each VM 104 implements a virtual hardware platform that supports the installation of a guest OS 138 which is capable of executing one or more applications. Guest OS 138 may be a standard, commodity operating system. Examples of a guest OS include Microsoft Windows, Linux, and/or the like.
Each VM 104 may include a container engine 136 installed therein and running as a guest application under control of guest OS 138. Container engine 136 is a process that enables the deployment and management of virtual instances (referred to interchangeably herein as “containers”) by providing a layer of OS-level virtualization on guest OS 138 within VM 104. Containers 130 are software instances that enable virtualization at the OS level. That is, with containerization, the kernel of guest OS 138, or an OS of host 102 if the containers are directly deployed on the OS of host 102, is configured to provide multiple isolated user space instances, referred to as containers. Containers 130 appear as unique servers from the standpoint of an end user that communicates with each of containers 130. However, from the standpoint of the OS on which the containers execute, the containers are user processes that are scheduled and dispatched by the OS.
Containers 130 encapsulate an application, such as application 132, as a single executable package of software that bundles application code together with all of the related configuration files, libraries, and dependencies required for it to run. Application 132 may be any software program, such as a word processing program or a gaming server.
A user can deploy containers 130 through container orchestrator 178. Container orchestrator 178 implements an orchestration control plane, such as Kubernetes®, to deploy and manage applications and/or services thereof on hosts 102, of a host cluster, using containers 130. For example, Kubernetes may deploy containerized applications as containers 130 and a control plane (not shown) on a cluster of hosts. The control plane, for each cluster of hosts, manages the computation, storage, and memory resources to run containers 130. Further, the control plane may support the deployment and management of applications (or services) on the cluster using containers 130. In some cases, the control plane deploys applications as pods 154 of containers 130 running on hosts 102, either within VMs or directly on an OS of the host. Other types of container-based clusters based on container technology, such as Docker® clusters, may also be considered.
In order for packets to be forwarded to and received by pods 154 and their containers 130 running in a first VM 1041, each of the pods 154 may be set up with a network interface, such as a CNI 174. The CNI 174 is associated with an IP address, such that the pod 154, and each container 130 within the pod 154, is addressable by the IP address. Accordingly, after each pod 154 is created, a network plugin 124 is configured to set up networking for the newly created pod 154 enabling the new containers 130 of the pod 154 to send and receive traffic.
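For context, a network plugin of this kind is typically driven by a small network configuration; the following is a minimal example in the standard CNI configuration format, rendered from Python (the “bridge” and “host-local” reference plugins and the subnet are illustrative choices, not the plugin described herein):

import json

# A minimal CNI network configuration of the kind consumed when a pod's
# network interface is created; values are illustrative only.
cni_conf = {
    "cniVersion": "0.4.0",
    "name": "pod-network",
    "type": "bridge",                                  # reference plugin
    "ipam": {"type": "host-local", "subnet": "192.168.0.0/24"},
}
print(json.dumps(cni_conf, indent=2))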
Further, network plugin 124 may also configure virtual switch 148 running in VM(s) 104 (e.g., where the created pods 154 are running) to forward traffic destined for the new pods 154. This allows virtual switch 148 to forward traffic for the new pods 154. Accordingly, for example, after receiving traffic from VNIC 146 directed to a pod 1541, virtual switch 148 processes the packets and forwards them (e.g., based on the container's IP address in the packets' header) to pod 1541 by pushing the packets out from Vport 152 on virtual switch 148 connected to CNI 1741 that is configured for and attached to pod 1541. As shown, other CNIs, such as CNI 1742, may be configured for and attached to different, existing pods 154.
As described in more detail below, ingress and/or egress traffic to one or more applications, such as containers 130, running in pods 154 may be allowed or restricted based on information contained in manifest file(s), defining network access policies, created for the pods 154. In certain aspects, filtering of traffic for a container 130 based on information included in such a manifest occurs at virtual switch 148 within VM 104. In certain other aspects, filtering of the traffic for a container 130 occurs at VNIC 146 of VM 104. In certain other aspects, filtering of the traffic for a container 130 occurs at virtual switch 140 in hypervisor 106. In certain other aspects, filtering of the traffic for a container 130 occurs at a CNI 174 of a pod 154 in which the container 130 runs.
Data center 101 includes a network management plane and a network control plane. The management plane and control plane each may be implemented as single entities (e.g., applications running on a physical or virtual compute instance), or as distributed or clustered applications or components. In alternative aspects, a combined manager/controller application, server cluster, or distributed application, may implement both management and control functions. In the embodiment shown, network manager 176 at least in part implements the network management plane and network controller 174 at least in part implements the network control plane.
The network control plane is a component of software defined network (SDN) infrastructure and determines the logical overlay network topology and maintains information about network entities such as logical switches, logical routers, endpoints, etc. The logical topology information is translated by the control plane into physical network configuration data that is then communicated to network elements of host(s) 102. Network controller 174 generally represents a network control plane that implements software defined networks, e.g., logical overlay networks, within data center 101. Network controller 174 may be one of multiple network controllers executing on various hosts in the data center that together implement the functions of the network control plane in a distributed manner. Network controller 174 may be a computer program that resides and executes in a server in the data center 101, external to data center 101 (e.g., such as in a public cloud) or, alternatively, network controller 174 may run as a virtual appliance (e.g., a VM) in one of hosts 102. Network controller 174 collects and distributes information about the network from and to endpoints in the network. Network controller 174 may communicate with hosts 102 via management network 192, such as through control plane protocols. In certain aspects, network controller 174 implements a central control plane (CCP) which interacts and cooperates with local control plane components, e.g., agents, running on hosts 102 in conjunction with hypervisors 106.
Network manager 176 is a computer program that executes in a server in networking environment 100, or alternatively, network manager 176 may run in a VM 104, e.g., in one of hosts 102. Network manager 176 communicates with host(s) 102 via management network 192. Network manager 176 may receive network configuration input from a user, such as an administrator, or an automated orchestration platform (not shown) and generate desired state data that specifies logical overlay network configurations. For example, a logical network configuration may define connections between VCIs and logical ports of logical switches. Network manager 176 is configured to receive inputs from an administrator or other entity, e.g., via a web interface or application programming interface (API), and carry out administrative tasks for data center 101, including centralized network management and providing an aggregated system view for a user.
FIGS. 2A and 2B illustrate an example manifest 200 defining a network policy for a particular namespace having multiple pods, according to an example embodiment of the present disclosure. As described above, manifests may not only be used to declare intended system infrastructure (e.g., pods, containers, etc.) and applications to be deployed in the system, but may also be used to control traffic flow in and/or out of created pods and containers.
As illustrated in FIG. 2A, example manifest 200 is used to create a namespace named “test-ns,” as shown at 202. Further, based on manifest 200, six pods are created to exist within the namespace (e.g., shown at 204 in FIG. 2A). A first pod (e.g., pod 1) may have a first container 1301 associated with a port number 443 and port name “secured.” A second pod (e.g., pod 2) may have a second container 1302 associated with a port number 8443 and also port name “secured.” A third pod (e.g., pod 3) may not have any containers associated with ports. A fourth pod (e.g., pod 4) may have a fourth container associated with a port number 8080 and also port name “secured.” A fifth pod (e.g., pod 5) may have a fifth container associated with a port number 443 and also port name “secured.” A sixth pod (e.g., pod 6) may have a sixth container associated with a port number 44 and also port name “secured.” As such, all ports defined in manifest 200 may be assigned a same name (e.g., port name “secured”).
Further, manifest 200 may define an environment for pods 1, 2, and 5 as a production environment (e.g., env: prod). Manifest 200 may define an environment for pods 4 and 6 as a database environment (e.g., env: db). Additionally, manifest 200 may define an environment for pod 3 as a client environment (e.g., env: client).
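For use in the rule-derivation sketches below, the six pods can be summarized as follows; the IP addresses of pods 1, 2, 3, and 5 are taken from the FIG. 4C discussion later in this section, while those of pods 4 and 6 are hypothetical placeholders:

# Pods created by manifest 200. pod 3 exposes no named port.
pods = [
    {"name": "pod1", "env": "prod",   "ip": "192.168.0.10", "port": 443,  "port_name": "secured"},
    {"name": "pod2", "env": "prod",   "ip": "192.168.0.11", "port": 8443, "port_name": "secured"},
    {"name": "pod3", "env": "client", "ip": "192.168.0.12", "port": None, "port_name": None},
    {"name": "pod4", "env": "db",     "ip": "192.168.0.13", "port": 8080, "port_name": "secured"},  # IP assumed
    {"name": "pod5", "env": "prod",   "ip": "192.168.0.14", "port": 443,  "port_name": "secured"},
    {"name": "pod6", "env": "db",     "ip": "192.168.0.15", "port": 44,   "port_name": "secured"},  # IP assumed
]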
Pods 1-6 and their corresponding containers and associated ports defined by manifest 200 are illustrated in FIG. 3. As illustrated, pods 1-6 may be in communication with each other over a network 300. For example, pods 1-6 may communicate with each other using their respective IP addresses. Accordingly, pods 1-6 may exchange packets over network 300. As described above with respect to FIG. 1, such communication may occur via use of virtual switch 148 in VM 104 where one or more of pods 1-6 are deployed, other virtual switches in VMs where one or more of pods 1-6 are deployed, and/or other virtual switches connected to VMs where one or more of pods 1-6 are deployed. In some cases, this communication may be restricted based on ingress and/or egress rules defined for pods 1-6. For example, communication between pods 1-6 (and more specifically, containers running within each of these pods) may be restricted according to a network policy defined and contained in example manifest 200, as illustrated in FIG. 2B.
Example manifest 200 is used to further create a network policy named “named-port-policy,” as shown at 206 in FIG. 2B. The network policy specifies, for a particular set of pods having containerized applications running therein, ingress and/or egress rules which are to be enforced for the set. In particular, the network policy includes an “appliedTo” field with a “podSelector” subfield which selects a group of pods to which the policy applies (e.g., shown at code block 208 in FIG. 2B). The example policy illustrated selects pods with the label “env: prod.” These pods may be referred to herein as the “applied to pods.” Additional details of using an “appliedTo” field are described in U.S. Pat. No. 10,264,021, filed on Dec. 14, 2015, and titled “Method and Apparatus for Distributing Firewall Rules,” which is hereby incorporated by reference herein in its entirety.
For the applied to pods, the network policy defines both ingress rules, denoted in code block 210 by field “rules” and sub-field “direction: in,” and egress rules, denoted in code block 212 by field “rules” and sub-field “direction: out.” In other words, the network policy identifies what pod/port traffic (e.g., packets) to take the specified action on for ingress into the applied to pods. Further, the network policy identifies what pod/port traffic (e.g., packets) to take the specified action on for egress from the applied to pods. Although example manifest 200 illustrates both ingress rules and egress rules defined in the network policy, in certain other embodiments, only ingress rules or only egress rules may be defined. The ingress rules and egress rules in example manifest 200 are rules with the action allow; however, it should be noted that this is merely an example, and the techniques may be applied to rules with other types of actions.
The network policy illustrated contains a single rule for ingress (e.g., shown at code block 210 in FIG. 2B). In particular, the network policy includes a “rules” field with a “podSelector” subfield for “direction: in” traffic which selects a group of pods for which ingress is allowed. The example policy illustrated selects pods with the label “env: client.”
Additionally, the network policy contains two rules for egress (e.g., shown at code block 212 in FIG. 2B). In particular, the network policy includes a “rules” field with (1) a “podSelector” subfield for “direction: out” traffic which selects a group of pods for which egress is allowed and (2) a “ports” field which further limits the group of pods, selected via the “podSelector” label, to pods that have containers associated with a matching port name identified in the network policy. The example policy illustrated selects pods with the label “env: db” and having a container associated with a port with a port name “secured.”
Application of the network policy, defined in example manifest 200, to pods 1-6 (e.g., illustrated in FIG. 3 and created via configuration information in example manifest 200 illustrated in FIG. 2A) is illustrated in FIGS. 4A-4C and 5A-5C. For example, FIG. 4A illustrates an example workflow 400 for determining ingress rules, based on the network policy illustrated in FIG. 2B, to be applied to traffic of one or more pods of FIG. 3. For ease of explanation, workflow 400 is described with respect to the example illustrated in FIGS. 4B and 4C. Additionally, FIG. 5A illustrates an example workflow 500 for determining egress rules, based on the network policy illustrated in FIG. 2B, to be applied to traffic of one or more pods of FIG. 3. For ease of explanation, workflow 500 is described with respect to the example illustrated in FIGS. 5B and 5C.
As illustrated in FIG. 4A, workflow 400 begins at block 402, by identifying an existing network policy that is to be applied to traffic of one or more created pods. The network policy identified at block 402 may be network policy “named-port-policy” illustrated in FIG. 2B.
At block 404, workflow 400 continues by determining source pod(s) for ingress traffic based on the network policy's “podSelector” sub-field within the “rules” field for “direction: in” traffic. For example, as illustrated at 404 in FIG. 4B, source pods for ingress traffic identified by the network policy are pods with the label “env: client.” Per manifest 200, only pod 3 has an environment defined as a client environment (e.g., env: client). As such, pod 3 is the only source pod identified by the network policy. This means that traffic from pod 3 is the only traffic that is permitted into the applied to pod(s) identified by the network policy.
The applied to pod(s) (e.g., the destination pod(s) for ingress rules) are identified at block 406. In particular, at block 406, workflow 400 continues by determining the destination pod(s) for ingress traffic based on the network policy's “appliedTo” field. For example, as illustrated at 406 in FIG. 4B, destination pods for ingress traffic identified by the network policy are pods with the label “env: prod.” Per manifest 200, pods 1, 2, and 5 have an environment defined as a production environment (e.g., env: prod). As such, pods 1, 2, and 5 are the only destination pods identified by the network policy. This means that traffic from pod 3 is the only traffic that is permitted into pods 1, 2, and 5, where that traffic has a destination port number assigned a port name “secured,” per the policy, as further discussed.
At block 408, workflow 400 continues by determining whether at least two of the destination pods have a container associated with a port with a same port name. As described above, destination pods having containers associated with a same port number, having a same port name, may be combined into a single rule for ingress. Additionally, separate ingress rules may be created for destination pods having containers associated with different port numbers, but having a same port name. In this example, the first port of the first container of pod 1, the second port of the second container of pod 2, and the fifth port of the fifth container of pod 5 all share a same port name “secured.” Accordingly, at block 408, it is determined that at least two of the destination pods have containers associated with ports with a same name.
Accordingly, at block 412, workflow 400 continues by creating an IP-set group for each of the destination pods having containers associated with a same port name and a same port number. Further, at block 414, workflow 400 continues by creating an IP-set group for each of the destination pods having containers associated with a unique port number. For example, although the first port, the second port, and the fifth port all share a common port name, the ports are not the same (e.g., the port numbers are different). In particular, only the first port number and the fifth port number are port number “443.” The second port number is port number “8443.” As such, at blocks 412 and 414, two IP-set groups are created. A first IP-set group (e.g., shown as policy group 2 in FIG. 4C) may include the first port and the fifth port (e.g., port number “443” with port name “secured”), while a second IP-set group (e.g., shown as policy group 3 in FIG. 4C) may include only the second port (e.g., port number “8443” with port name “secured”).
Accordingly, as shown in FIG. 4C, when the network policy is applied, two ingress rules may be created. The first ingress rule may permit ingress traffic from policy group 1 to policy group 2 on TCP port 443, where policy group 1 includes pod 3 (e.g., the source pod) and policy group 2 includes pods 1 and 5 (e.g., two of the destination pods). In particular, the first ingress rule allows traffic having a source IP address of pod 3 (e.g., 192.168.0.12), a destination IP address of pods 1 or 5 (e.g., 192.168.0.10 or 192.168.0.14), and a destination port number 443 (e.g., and any source port number). The second ingress rule may permit ingress traffic from policy group 1 to policy group 3 on TCP port 8443, where policy group 1 includes pod 3 (e.g., the source pod) and policy group 3 includes pod 2 (e.g., one of the destination pods). In particular, the second ingress rule allows traffic having a source IP address of pod 3 (e.g., 192.168.0.12), a destination IP address of pod 2 (e.g., 192.168.0.11), and a destination port number 8443 (e.g., and any source port number).
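Using the pod summary given above, blocks 404-414 can be sketched as a small derivation: select destination pods by label, group them into IP set groups by resolved port number, and pair each group with the source IP set (illustrative logic following workflow 400, not the literal implementation):

from collections import defaultdict

def ip_set_groups(pods, env_label, port_name):
    # Blocks 408-414: one IP set group per distinct port number carrying
    # `port_name` among the pods labeled with `env_label`.
    groups = defaultdict(list)
    for pod in pods:
        if pod["env"] == env_label and pod["port_name"] == port_name:
            groups[pod["port"]].append(pod["ip"])
    return groups

source_ips = [p["ip"] for p in pods if p["env"] == "client"]        # pod 3
for port, dst_ips in sorted(ip_set_groups(pods, "prod", "secured").items()):
    print({"src_ips": source_ips, "dst_ips": dst_ips, "dst_port": port, "action": "allow"})
# {'src_ips': ['192.168.0.12'], 'dst_ips': ['192.168.0.10', '192.168.0.14'], 'dst_port': 443, ...}
# {'src_ips': ['192.168.0.12'], 'dst_ips': ['192.168.0.11'], 'dst_port': 8443, ...}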
Although not illustrated by the provided example, in some cases at block 406 in FIG. 4A, only one destination pod is determined for ingress traffic, or multiple destination pods are determined for ingress traffic but none of the pods have containers associated with ports which share a same port name. As such, at block 408, workflow 400 determines that at least two of the destination pods are not associated with a same port name and instead proceeds to block 410, where an IP-set group for each of the destination pod(s) associated with a unique port name is created.
Similar logic used in workflow 400 for determining ingress rules may be used in workflow 500 of FIG. 5A for determining egress rules.
As illustrated in FIG. 5A, similar to workflow 400, workflow 500 begins at block 502, by identifying an existing network policy that is to be applied to one or more created pods. The network policy identified at block 502 may be network policy “named-port-policy” illustrated in FIG. 2B.
At block 504, workflow 500 continues by determining source pod(s) for egress traffic based on the network policy's “appliedTo” field. For example, as illustrated at 504 in FIG. 5B, source pods for egress traffic identified by the network policy are pods with the label “env: prod.” Per manifest 200, pods 1, 2, and 5 have an environment defined as a production environment (e.g., env: prod). As such, pods 1, 2, and 5 are the only source pods for egress traffic identified by the network policy.
At block 506, workflow 500 continues by determining destination pod(s) for egress traffic based on the network policy's (1) “podSelector” sub-field and (2) “ports” sub-field within the “rules” field for “direction: out” traffic. For example, as illustrated at 506 in FIG. 5B, destination pods for egress traffic identified by the network policy are pods with the label “env: db” and which have a container associated with a port with port name “secured.” Per manifest 200, only pods 4 and 6 have an environment defined as a database environment (e.g., env: db) and have a container associated with a port with port name “secured.” As such, pods 4 and 6 are the only destination pods identified by the network policy for the egress rules. This means that traffic from pods 1, 2, and 5 is the only traffic permitted into pods 4 and 6, where that traffic has a destination port number assigned a port name “secured,” per the policy.
At block 508, workflow 500 continues by determining whether at least two of the destination pods have a container associated with a port with a same port name. In this example, the fourth port of the fourth container of pod 4 and the sixth port of the sixth container of pod 6 share a same port name “secured.” Accordingly, at block 508, it is determined that at least two of the destination pods have containers associated with ports with a same port name.
Accordingly, at block 512, workflow 500 continues by creating an IP-set group for each of the destination pods having containers associated with a same port name and a same port number. Further, at block 514, workflow 500 continues by creating an IP-set group for each of the destination pods having containers associated with a unique port number. For example, although the fourth port and the sixth port share a common port name, the ports are not the same (e.g., the port numbers are not the same). In particular, the fourth port is port number “8080,” and the sixth port is port number “44.” As such, at blocks 512 and 514, two IP-set groups are created. A first IP-set group (e.g., shown as policy group 5 in FIG. 5C) may include the fourth port (e.g., port number “8080” with port name “secured”), while a second IP-set group (e.g., shown as policy group 6 in FIG. 5C) may include only the sixth port (e.g., port number “44” with port name “secured”).
Accordingly, as shown in FIG. 5C, when the network policy is applied, two egress rules may be created. The first egress rule may permit egress traffic from policy group 4 to policy group 5 on TCP port 8080, where policy group 4 includes pods 1, 2, and 5 (e.g., the source pods) and policy group 5 includes pod 4 (e.g., one of the destination pods). The second egress rule may permit egress traffic from policy group 4 to policy group 6 on TCP port 44, where policy group 4 includes pods 1, 2, and 5 (e.g., the source pods) and policy group 6 includes pod 6 (e.g., one of the destination pods).
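The same helper sketched for workflow 400 yields the egress rules of workflow 500: the source IP set is now the applied-to pods and the destinations are the “env: db” pods (continuing the sketch above; pod 4 and pod 6 addresses remain the hypothetical placeholders noted earlier):

source_ips = [p["ip"] for p in pods if p["env"] == "prod"]          # pods 1, 2, 5
for port, dst_ips in sorted(ip_set_groups(pods, "db", "secured").items()):
    print({"src_ips": source_ips, "dst_ips": dst_ips, "dst_port": port, "action": "allow"})
# dst_port 44   -> pod 6's IP set (policy group 6)
# dst_port 8080 -> pod 4's IP set (policy group 5)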
Although not illustrated by the provided example, in some cases at block 506 in FIG. 5A, only one destination pod is determined for egress traffic, or multiple destination pods are determined for egress traffic but none of the pods have containers associated with ports which share a same port name. As such, at block 508, workflow 500 determines that at least two of the destination pods are not associated with a same port name and instead proceeds to block 510, where an IP-set group for each of the destination pod(s) associated with a unique port name is created.
It should be understood that, for any process described herein, there may be additional or fewer steps performed in similar or alternative orders, or in parallel, within the scope of the various embodiments, consistent with the teachings herein, unless otherwise stated.
The various embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. For example, these operations may require physical manipulation of physical quantities. Usually, though not necessarily, these quantities may take the form of electrical or magnetic signals, where they or representations of them are capable of being stored, transferred, combined, compared, or otherwise manipulated. Further, such manipulations are often referred to in terms such as producing, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments may be useful machine operations. In addition, one or more embodiments also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for specific required purposes, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The various embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
One or more embodiments may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. Examples of a computer readable medium include a hard drive, network attached storage (NAS), read-only memory, random-access memory (e.g., a flash memory device), a CD (Compact Disc), such as a CD-ROM, a CD-R, or a CD-RW, a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Although one or more embodiments have been described in some detail for clarity of understanding, it will be apparent that certain changes and modifications may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein, but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.
Virtualization systems in accordance with the various embodiments may be implemented as hosted embodiments, non-hosted embodiments, or embodiments that tend to blur distinctions between the two; all are envisioned. Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data.
Certain embodiments as described above involve a hardware abstraction layer on top of a host computer. The hardware abstraction layer allows multiple contexts to share the hardware resource. In one embodiment, these contexts are isolated from each other, each having at least a user application running therein. The hardware abstraction layer thus provides benefits of resource isolation and allocation among the contexts. In the foregoing embodiments, virtual machines are used as an example for the contexts and hypervisors as an example for the hardware abstraction layer. As described above, each virtual machine includes a guest operating system in which at least one application runs. It should be noted that these embodiments may also apply to other examples of contexts, such as containers not including a guest operating system, referred to herein as “OS-less containers” (see, e.g., www.docker.com). OS-less containers implement operating system-level virtualization, wherein an abstraction layer is provided on top of the kernel of an operating system on a host computer. The abstraction layer supports multiple OS-less containers each including an application and its dependencies. Each OS-less container runs as an isolated process in user space on the host operating system and shares the kernel with other containers. The OS-less container relies on the kernel's functionality to make use of resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces and to completely isolate the application's view of the operating environments. By using OS-less containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O. The term “virtualized computing instance” as used herein is meant to encompass both VMs and OS-less containers.
Many variations, modifications, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest operating system that performs virtualization functions. Plural instances may be provided for components, operations or structures described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the disclosure. In general, structures and functionality presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the appended claim(s).