NATS Operator for Kubernetes (deprecated)


⚠️ The recommended way of running NATS on Kubernetes is by using the Helm charts, which also support JetStream. The NATS Operator is not recommended for new deployments.


NATS Operator manages NATS clusters atop Kubernetes using CRDs. If you are looking to run NATS on Kubernetes without the operator, you can also find Helm charts in the nats-io/k8s repo. You can find more info about running NATS on Kubernetes in the docs, as well as a minimal setup using only StatefulSets (without the operator) to get started here.

Requirements

Introduction

NATS Operator provides a NatsCluster Custom Resource Definition (CRD) that models a NATS cluster. This CRD allows for specifying the desired size and version for a NATS cluster, as well as several other advanced options:

```yaml
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats-cluster
spec:
  size: 3
  version: "2.1.8"
```

NATS Operator monitors creation/modification/deletion of NatsCluster resources and reacts by performing any necessary operations on the associated NATS clusters in order to align their current status with the desired one.

Installing

NATS Operator supports two different operation modes:

  • Namespace-scoped (classic): NATS Operator manages NatsCluster resources in the Kubernetes namespace where it is deployed.
  • Cluster-scoped (experimental): NATS Operator manages NatsCluster resources across all namespaces in the Kubernetes cluster.

The operation mode must be chosen when installing NATS Operator and cannot be changed later.

Namespace-scoped installation

To perform a namespace-scoped installation of NATS Operator in the Kubernetes cluster pointed at by the current context, you may run:

```sh
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml
```

This will, by default, install NATS Operator in the default namespace and observe NatsCluster resources created in the default namespace only. In order to install in a different namespace, you must first create said namespace and edit the manifests above to specify its name wherever necessary.
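As a sketch of that edit, assuming a target namespace named nats-system (the namespace name and the sed pattern are illustrative; review the rewritten manifests before applying them to a real cluster):

```shell
# Hypothetical sketch: create the target namespace, then rewrite the
# namespace references in the released manifests before applying them.
kubectl create ns nats-system
curl -sL https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml \
  | sed 's/namespace: default/namespace: nats-system/g' \
  | kubectl apply -f -
curl -sL https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml \
  | sed 's/namespace: default/namespace: nats-system/g' \
  | kubectl apply -f -
```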

WARNING: To perform multiple namespace-scoped installations of NATS Operator, you must manually edit the nats-operator-binding cluster role binding in the deploy/00-prereqs.yaml file in order to add all the required service accounts. Failing to do so may cause all NATS Operator instances to malfunction.

WARNING: When performing a namespace-scoped installation of NATS Operator, you must make sure that all other namespace-scoped installations that may exist in the Kubernetes cluster share the same version. Installing different versions of NATS Operator in the same Kubernetes cluster may cause unexpected behavior, as the schema of the CRDs which NATS Operator registers may change between versions.

Alternatively, you may use Helm to perform a namespace-scoped installation of NATS Operator. To do so, go to helm/nats-operator and use the Helm charts found in that repo.

Cluster-scoped installation (experimental)

Cluster-scoped installations of NATS Operator must live in the nats-io namespace. This namespace must be created beforehand:

```sh
$ kubectl create ns nats-io
```

Then, you must manually edit the manifests in deployment/ in order to reference the nats-io namespace and to enable the ClusterScoped feature gate in the NATS Operator deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nats-operator
  namespace: nats-io
spec:
  (...)
    spec:
      containers:
      - name: nats-operator
        (...)
        args:
        - nats-operator
        - --feature-gates=ClusterScoped=true
        (...)
```

Once you have done this, you may install NATS Operator by running:

```sh
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/00-prereqs.yaml
$ kubectl apply -f https://github.com/nats-io/nats-operator/releases/latest/download/10-deployment.yaml
```

WARNING: When performing a cluster-scoped installation of NATS Operator, you must make sure that there are no other deployments of NATS Operator in the Kubernetes cluster. If you have a previous installation of NATS Operator, you must uninstall it before performing a cluster-scoped installation of NATS Operator.

Creating a NATS cluster

Once NATS Operator has been installed, you will be able to confirm that two new CRDs have been registered in the cluster:

```
$ kubectl get crd
NAME                       CREATED AT
natsclusters.nats.io       2019-01-11T17:16:36Z
natsserviceroles.nats.io   2019-01-11T17:16:40Z
```

To create a NATS cluster, you must create a NatsCluster resource representing the desired status of the cluster. For example, to create a 3-node NATS cluster you may run:

```sh
$ cat <<EOF | kubectl create -f -
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats-cluster
spec:
  size: 3
  version: "1.3.0"
EOF
```

NATS Operator will react to the creation of such a resource by creating three NATS pods. These pods will keep being monitored (and replaced in case of failure) by NATS Operator for as long as this NatsCluster resource exists.
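Because the operator continuously reconciles toward the spec, resizing the cluster is just an update to the NatsCluster resource. A sketch, assuming the cluster created above and the singular resource name natscluster registered by the CRD:

```shell
# Scale the cluster from 3 to 5 nodes by patching the NatsCluster spec;
# the operator reacts by creating the two additional pods.
kubectl patch natscluster example-nats-cluster --type=merge -p '{"spec":{"size":5}}'
```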

Listing NATS clusters

To list all the NATS clusters:

```
$ kubectl get nats --all-namespaces
NAMESPACE   NAME                   AGE
default     example-nats-cluster   2m
```
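The operator labels the pods it creates so that they can be selected per cluster. Assuming the nats_cluster label key (the same key used later by NatsServiceRole; verify with `kubectl get pods --show-labels` on your cluster), you can inspect a single cluster's pods:

```shell
# List only the pods belonging to the example-nats-cluster NATS cluster,
# assuming the operator applies the nats_cluster=<cluster-name> label.
kubectl get pods -l nats_cluster=example-nats-cluster
```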

TLS support

By using a pair of opaque secrets (one for the clients and another for the routes), it is possible to enable TLS both for the communication with clients and for the transport between routes:

```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  # Number of nodes in the cluster
  size: 3
  version: "1.3.0"

  tls:
    # Certificates to secure the NATS client connections:
    serverSecret: "nats-clients-tls"

    # Certificates to secure the routes.
    routesSecret: "nats-routes-tls"
```

In order for TLS to be properly established between the nodes, it is necessary to create a wildcard certificate that matches the subdomain created for the client service and the one for the routes.

By default, the routesSecret has to provide the files ca.pem, route-key.pem, and route.pem, for the CA certificate, private key, and public certificate, respectively.

```sh
$ kubectl create secret generic nats-routes-tls --from-file=ca.pem --from-file=route-key.pem --from-file=route.pem
```

Similarly, by default the serverSecret has to provide the files ca.pem, server-key.pem, and server.pem, for the CA certificate, server private key, and public certificate used to secure the connection with the clients.

```sh
$ kubectl create secret generic nats-clients-tls --from-file=ca.pem --from-file=server-key.pem --from-file=server.pem
```

Consider, though, that you may wish to independently manage the certificate authorities for routes between clusters, to support the ability to roll between CAs or their intermediates.

Any filename below can also be an absolute path, allowing you to mount a CA bundle in a place of your choosing.

NATS also supports kubernetes.io/tls secrets (like the ones managed by cert-manager) and any secrets containing a CA, private key, and certificate with arbitrary names. It is possible to override the default names as follows:

```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  # Number of nodes in the cluster
  size: 3
  version: "1.3.0"

  tls:
    # Certificates to secure the NATS client connections:
    serverSecret: "nats-clients-tls"
    # Name of the CA in serverSecret
    serverSecretCAFileName: "ca.crt"
    # Name of the key in serverSecret
    serverSecretKeyFileName: "tls.key"
    # Name of the certificate in serverSecret
    serverSecretCertFileName: "tls.crt"

    # Certificates to secure the routes.
    routesSecret: "nats-routes-tls"
    # Name of the CA, but not from this secret
    routesSecretCAFileName: "/etc/ca-bundle/routes-bundle.pem"
    # Name of the key in routesSecret
    routesSecretKeyFileName: "tls.key"
    # Name of the certificate in routesSecret
    routesSecretCertFileName: "tls.crt"

  template:
    spec:
      containers:
      - name: "nats"
        volumeMounts:
        - name: "ca-bundle"
          mountPath: "/etc/ca-bundle"
          readOnly: true
      volumes:
      - name: "ca-bundle"
        configMap:
          name: "our-ca-bundle"
```

Cert-Manager

If cert-manager is available in your cluster, you can easily generate TLS certificates for NATS as follows:

Create a self-signed cluster issuer (or namespace-bound issuer) to create NATS' CA certificate:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: selfsigning
spec:
  selfSigned: {}
```

Create your NATS cluster's CA certificate using the new selfsigning issuer:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-ca
spec:
  secretName: nats-ca
  duration: 8736h # 1 year
  renewBefore: 240h # 10 days
  issuerRef:
    name: selfsigning
    kind: ClusterIssuer
  commonName: nats-ca
  usages:
  - cert sign # workaround for odd cert-manager behavior
  organization:
  - Your organization
  isCA: true
```

Create your NATS cluster issuer based on the new nats-ca CA:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: nats-ca
spec:
  ca:
    secretName: nats-ca
```

Create your NATS cluster's server certificate (assuming NATS is running in the nats-io namespace; otherwise, set the commonName and dnsNames fields appropriately):

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-server-tls
spec:
  secretName: nats-server-tls
  duration: 2160h # 90 days
  renewBefore: 240h # 10 days
  usages:
  - signing
  - key encipherment
  - server auth
  issuerRef:
    name: nats-ca
    kind: Issuer
  organization:
  - Your organization
  commonName: nats.nats-io.svc.cluster.local
  dnsNames:
  - nats.nats-io.svc
```

Create your NATS cluster's routes certificate (assuming NATS is running in the nats-io namespace; otherwise, set the commonName and dnsNames fields appropriately):

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: nats-routes-tls
spec:
  secretName: nats-routes-tls
  duration: 2160h # 90 days
  renewBefore: 240h # 10 days
  usages:
  - signing
  - key encipherment
  - server auth
  - client auth # included because routes mutually verify each other
  issuerRef:
    name: nats-ca
    kind: Issuer
  organization:
  - Your organization
  commonName: "*.nats-mgmt.nats-io.svc.cluster.local"
  dnsNames:
  - "*.nats-mgmt.nats-io.svc"
```

Authorization

Using ServiceAccounts

⚠️ The ServiceAccounts integration uses a very rudimentary approach to config reloading, and relies on watching CRDs and advanced K8S APIs that may not be available in your cluster. The decentralized JWT approach should be preferred instead; to learn more: https://docs.nats.io/developing-with-nats/tutorials/jwt

The NATS Operator can define permissions based on Roles by using any ServiceAccount present in a namespace. This feature requires a Kubernetes v1.12+ cluster with the TokenRequest API enabled. To try this feature using minikube v0.30.0+, you can configure it to start as follows:

```sh
$ minikube start \
    --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
    --extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \
    --extra-config=apiserver.service-account-issuer=api \
    --extra-config=apiserver.service-account-api-audiences=api,spire-server \
    --extra-config=apiserver.authorization-mode=Node,RBAC \
    --extra-config=kubelet.authentication-token-webhook=true
```

Please note that availability of this feature across Kubernetes offerings may vary widely.

ServiceAccounts integration can then be enabled by setting the enableServiceAccounts flag to true in the NatsCluster configuration.

```yaml
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  name: example-nats
spec:
  size: 3
  version: "1.3.0"

  pod:
    # NOTE: Only supported in Kubernetes v1.12+.
    enableConfigReload: true

  auth:
    # NOTE: Only supported in Kubernetes v1.12+ clusters having the "TokenRequest" API enabled.
    enableServiceAccounts: true
```

Permissions for a ServiceAccount can be set by creating a NatsServiceRole for that account. In the example below there are two accounts; one is an admin user that has more permissions.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-admin-user
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-user
---
apiVersion: nats.io/v1alpha2
kind: NatsServiceRole
metadata:
  name: nats-user
  namespace: nats-io

  # Specifies which NATS cluster will be mapping this account.
  labels:
    nats_cluster: example-nats
spec:
  permissions:
    publish: ["foo.*", "foo.bar.quux"]
    subscribe: ["foo.bar"]
---
apiVersion: nats.io/v1alpha2
kind: NatsServiceRole
metadata:
  name: nats-admin-user
  namespace: nats-io
  labels:
    nats_cluster: example-nats
spec:
  permissions:
    publish: [">"]
    subscribe: [">"]
```

The above will create two different Secrets, which can then be mounted as volumes for a Pod.

```
$ kubectl -n nats-io get secrets
NAME                                       TYPE          DATA      AGE
...
nats-admin-user-example-nats-bound-token   Opaque        1         43m
nats-user-example-nats-bound-token         Opaque        1         43m
```

Please note that the NatsServiceRole must be created in the same namespace where the NatsCluster is running, but the bound-token secrets will be created for ServiceAccount resources that can be placed in various namespaces.

An example of mounting the secret in a Pod can be found below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nats-user-pod
  labels:
    nats_cluster: example-nats
spec:
  volumes:
  - name: "token"
    projected:
      sources:
      - secret:
          name: "nats-user-example-nats-bound-token"
          items:
          - key: token
            path: "token"
  restartPolicy: Never
  containers:
  - name: nats-ops
    command: ["/bin/sh"]
    image: "wallyqs/nats-ops:latest"
    tty: true
    stdin: true
    stdinOnce: true
    volumeMounts:
    - name: "token"
      mountPath: "/var/run/secrets/nats.io"
```

Then, within the Pod, the mounted token can be used to authenticate against the server.

```
$ kubectl -n nats-io attach -it nats-user-pod

/go # nats-sub -s nats://nats-user:`cat /var/run/secrets/nats.io/token`@example-nats:4222 hello.world
Listening on [hello.world]
^C
/go # nats-sub -s nats://nats-admin-user:`cat /var/run/secrets/nats.io/token`@example-nats:4222 hello.world
Can't connect: nats: authorization violation
```

Using a single secret with explicit configuration

Authorization can also be set for the server by using a secret where the permissions are defined in JSON:

```json
{
  "users": [
    { "username": "user1", "password": "secret1" },
    {
      "username": "user2", "password": "secret2",
      "permissions": {
        "publish": ["hello.*"],
        "subscribe": ["hello.world"]
      }
    }
  ],
  "default_permissions": {
    "publish": ["SANDBOX.*"],
    "subscribe": ["PUBLIC.>"]
  }
}
```
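The definition above first needs to be written to a file; the file name below matches the --from-file flag used in the next step:

```shell
# Write the users/permissions definition to clients-auth.json; the quoted
# heredoc delimiter prevents any shell expansion inside the JSON.
cat > clients-auth.json <<'EOF'
{
  "users": [
    { "username": "user1", "password": "secret1" },
    {
      "username": "user2", "password": "secret2",
      "permissions": {
        "publish": ["hello.*"],
        "subscribe": ["hello.world"]
      }
    }
  ],
  "default_permissions": {
    "publish": ["SANDBOX.*"],
    "subscribe": ["PUBLIC.>"]
  }
}
EOF
```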

Example of creating a secret to set the permissions:

```sh
$ kubectl create secret generic nats-clients-auth --from-file=clients-auth.json
```

Now, when creating a NATS cluster, it is possible to set the permissions as in the following example:

```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "example-nats-auth"
spec:
  size: 3
  version: "1.1.0"
  auth:
    # Definition in JSON of the users permissions
    clientsAuthSecret: "nats-clients-auth"

    # How long to wait for authentication
    clientsAuthTimeout: 5
```

Configuration Reload

On Kubernetes v1.12+ clusters it is possible to enable on-the-fly reloading of configuration for the servers that are part of the cluster. This can also be combined with the authorization support, so in case the user permissions change, the servers will reload and apply the new permissions.

```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "example-nats-auth"
spec:
  size: 3
  version: "1.1.0"
  pod:
    # Enable on-the-fly NATS Server config reload
    # NOTE: Only supported in Kubernetes v1.12+.
    enableConfigReload: true

    # Possible to customize version of reloader image
    reloaderImage: connecteverything/nats-server-config-reloader
    reloaderImageTag: "0.2.2-v1alpha2"
    reloaderImagePullPolicy: "IfNotPresent"
  auth:
    # Definition in JSON of the users permissions
    clientsAuthSecret: "nats-clients-auth"

    # How long to wait for authentication
    clientsAuthTimeout: 5
```

Connecting operated NATS clusters to external NATS clusters

By using the extraRoutes field on the spec, you can make the operated NATS cluster create routes against clusters outside of Kubernetes:

```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
spec:
  size: 3
  version: "1.4.1"
  extraRoutes:
  - route: "nats://nats-a.example.com:6222"
  - route: "nats://nats-b.example.com:6222"
  - route: "nats://nats-c.example.com:6222"
```

It is also possible to connect to another operated NATS cluster as follows:

```yaml
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats-v2-2"
spec:
  size: 3
  version: "1.4.1"
  extraRoutes:
  - cluster: "nats-v2-1"
```

Resolvers

The operator only supports the URL() resolver; see example/example-super-cluster.yaml.
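For context, the URL() resolver is part of the NATS server configuration that the operator renders. A minimal hand-written equivalent is sketched below; the operator JWT path and account-server URL are placeholders, not values the operator itself uses:

```conf
# NATS server config fragment (sketch): resolve account JWTs over HTTP
# from an account server at a hypothetical address.
operator: /etc/nats/operator.jwt
resolver: URL(http://nats-account-server:9090/jwt/v1/accounts/)
```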

Development

Building the Docker Image

To build the nats-operator Docker image:

```sh
$ docker build -f docker/operator/Dockerfile . -t <image:tag>
```

To build the nats-server-config-reloader:

```sh
$ docker build -f docker/reloader/Dockerfile . -t <image:tag>
```

You'll need Docker 17.06.0-ce or higher.

