yongwen/kubernetes-plugin

Jenkins plugin to run dynamic agents in a Kubernetes cluster.

Based on the Scaling Docker with Kubernetes article, automates the scaling of Jenkins agents running in Kubernetes.

The plugin creates a Kubernetes Pod for each agent started, defined by the Docker image to run, and stops it after each build.

Agents are launched using JNLP, so it is expected that the image connects automatically to the Jenkins master. For that, some environment variables are automatically injected:

  • JENKINS_URL: Jenkins web interface URL
  • JENKINS_SECRET: the secret key for authentication
  • JENKINS_NAME: the name of the Jenkins agent

Tested with jenkins/jnlp-slave, see the Docker image source code.
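To quickly verify these variables inside a build, a minimal sketch (the label is illustrative) that prints them from the agent pod:

podTemplate(label: 'envcheck') {
    node('envcheck') {
        stage('Show injected variables') {
            // the injected variables are visible in the agent container's environment
            sh 'env | grep JENKINS_'
        }
    }
}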

Kubernetes Cloud Configuration

In Jenkins settings click on add cloud, select Kubernetes and fill in the information, like Name, Kubernetes URL, Kubernetes server certificate key, ...

If Kubernetes URL is not set, the connection options will be autoconfigured from the service account or the kube config file.

Pipeline support

Nodes can be defined in a pipeline and then used; however, execution always defaults to the jnlp container. You will need to specify the container in which you want to execute your task.

This will run in the jnlp container:

// this guarantees the node will use this template
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label) {
    node(label) {
        stage('Run shell') {
            sh 'echo hello world'
        }
    }
}

This will run in the specified container:

def label="mypod-${UUID.randomUUID().toString()}"podTemplate(label: label) {  node(label) {    stage('Run shell') {      container('mycontainer') {        sh'echo hello world'      }    }  }}

Find more examples in the examples dir.

The default jnlp agent image used can be customized by adding it to the template:

containerTemplate(name: 'jnlp', image: 'jenkins/jnlp-slave:3.10-1-alpine', args: '${computer.jnlpmac} ${computer.name}'),

Container Group Support

Multiple containers can be defined for the agent pod, with shared resources, like mounts. Ports in each container can be accessed as in any Kubernetes pod, by using localhost.

The container statement allows executing commands directly in each container. This feature is considered ALPHA, as there are still some problems with concurrent execution and pipeline resumption.

def label="mypod-${UUID.randomUUID().toString()}"podTemplate(label: label,containers: [    containerTemplate(name:'maven',image:'maven:3.3.9-jdk-8-alpine',ttyEnabled:true,command:'cat'),    containerTemplate(name:'golang',image:'golang:1.8.0',ttyEnabled:true,command:'cat')  ]) {    node(label) {        stage('Get a Maven project') {            git'https://github.com/jenkinsci/kubernetes-plugin.git'            container('maven') {                stage('Build a Maven project') {                    sh'mvn -B clean install'                }            }        }        stage('Get a Golang project') {            giturl:'https://github.com/hashicorp/terraform.git'            container('golang') {                stage('Build a Go project') {                    sh"""                    mkdir -p /go/src/github.com/hashicorp                    ln -s `pwd` /go/src/github.com/hashicorp/terraform                    cd /go/src/github.com/hashicorp/terraform && make core-dev"""                }            }        }    }}

Pod and container template configuration

The podTemplate is a template of a pod that will be used to create agents. It can be configured either via the user interface or via pipeline. Either way it provides access to the following fields:

  • cloud The name of the cloud as defined in Jenkins settings. Defaults to kubernetes.
  • name The name of the pod.
  • namespace The namespace of the pod.
  • label The label of the pod. Set a unique value to avoid conflicts across builds.
  • containers The container templates that are used to create the containers of the pod (see below).
  • serviceAccount The service account of the pod.
  • nodeSelector The node selector of the pod.
  • nodeUsageMode Either 'NORMAL' or 'EXCLUSIVE'. This controls whether Jenkins only schedules jobs with matching label expressions or uses the node as much as possible.
  • volumes Volumes that are defined for the pod and are mounted by ALL containers.
  • envVars Environment variables that are applied to ALL containers.
    • envVar An environment variable whose value is defined inline.
    • secretEnvVar An environment variable whose value is derived from a Kubernetes secret.
  • imagePullSecrets List of pull secret names.
  • annotations Annotations to apply to the pod.
  • inheritFrom List of one or more pod templates to inherit from (more details below).
  • slaveConnectTimeout Timeout in seconds for an agent to come online.
  • activeDeadlineSeconds Pod is deleted after this deadline is passed.
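A sketch combining a few of these pod-level fields; all values here (namespace, service account, node selector, label) are illustrative and must exist in your cluster:

podTemplate(cloud: 'kubernetes',
    label: 'fields-demo',
    namespace: 'jenkins-agents',
    serviceAccount: 'jenkins',
    nodeSelector: 'disktype=ssd',
    envVars: [envVar(key: 'FOO', value: 'bar')]) {
    node('fields-demo') {
        // FOO is applied to all containers of the pod
        sh 'echo $FOO'
    }
}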

The containerTemplate is a template of a container that will be added to the pod. Again, it's configurable via the user interface or via pipeline and allows you to set the following fields:

  • name The name of the container.
  • image The image of the container.
  • envVars Environment variables that are applied to the container (supplementing and overriding env vars that are set on pod level).
    • envVar An environment variable whose value is defined inline.
    • secretEnvVar An environment variable whose value is derived from a Kubernetes secret.
  • command The command the container will execute.
  • args The arguments passed to the command.
  • ttyEnabled Flag to mark that tty should be enabled.
  • livenessProbe Parameters to be added to an exec liveness probe in the container (does not support httpGet liveness probes).
  • ports Expose ports on the container.

Liveness Probe Usage

containerTemplate(
    name: 'busybox',
    image: 'busybox',
    ttyEnabled: true,
    command: 'cat',
    livenessProbe: containerLivenessProbe(
        execArgs: 'some --command',
        initialDelaySeconds: 30,
        timeoutSeconds: 1,
        failureThreshold: 3,
        periodSeconds: 10,
        successThreshold: 1
    )
)

See Defining a liveness command for more details.

Pod template inheritance

A podTemplate may or may not inherit from an existing template. This means that the podTemplate will inherit node selector, service account, image pull secrets, containerTemplates and volumes from the template it inheritsFrom.

Service account and node selector, when overridden, completely substitute any value found on the 'parent'.

Container templates added to the podTemplate that have a matching containerTemplate (a containerTemplate with the same name) in the 'parent' template will inherit the configuration of the parent containerTemplate. If no matching containerTemplate is found, the template is added as is.

Volume inheritance works exactly as container templates.

Image Pull Secrets are combined (all secrets defined both on 'parent' and 'current' template are used).

In the example below, we will inherit the podTemplate we created previously, and will just override the version of 'maven' so that it uses jdk-7 instead:

podTemplate(label: 'anotherpod', inheritFrom: 'mypod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-7-alpine')
  ]) {
    // Let's not repeat ourselves and omit this part
}

Note that we only need to specify the things that are different. So ttyEnabled and command are not specified, as they are inherited. Also, the golang container will be added as defined in the 'parent' template.

Multiple Pod template inheritance

The field inheritFrom may refer to a single podTemplate or to multiple ones separated by spaces. In the latter case each template will be processed in the order it appears in the list (later items overriding earlier ones). In any case, if a referenced template is not found, it will be ignored.
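For instance, a sketch assuming 'mypod' and 'anotherpod' are the existing pod templates from the examples above:

podTemplate(label: 'combined', inheritFrom: 'mypod anotherpod') {
    node('combined') {
        // later templates in the inheritFrom list override earlier ones
        sh 'echo settings merged from mypod and anotherpod'
    }
}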

Nesting Pod templates

The field inheritFrom provides an easy way to compose podTemplates that have been pre-configured. In many cases it would be useful to define and compose podTemplates directly in the pipeline using Groovy. This is made possible via nesting. You can nest multiple pod templates together in order to compose a single one.

The example below composes two different podTemplates in order to create one with maven and docker capabilities.

podTemplate(label: 'docker', containers: [
    containerTemplate(image: 'docker', name: 'docker', command: 'cat', ttyEnabled: true)
  ]) {
    podTemplate(label: 'maven', containers: [
        containerTemplate(image: 'maven', name: 'maven', command: 'cat', ttyEnabled: true)
      ]) {
        // do stuff
    }
}

This feature is extra useful for pipeline library developers, as it allows you to wrap podTemplates into functions and let users nest those functions according to their needs.

For example, one could create functions for their podTemplates and import them for use. Say here's our file src/com/foo/utils/PodTemplates.groovy:

package com.foo.utils

public void dockerTemplate(body) {
  podTemplate(label: label,
        containers: [containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true)],
        volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]) {
    body()
  }
}

public void mavenTemplate(body) {
  podTemplate(label: label,
        containers: [containerTemplate(name: 'maven', image: 'maven', command: 'cat', ttyEnabled: true)],
        volumes: [secretVolume(secretName: 'maven-settings', mountPath: '/root/.m2'),
                  persistentVolumeClaim(claimName: 'maven-local-repo', mountPath: '/root/.m2nrepo')]) {
    body()
  }
}

return this

Then consumers of the library could just express the need for a maven pod with docker capabilities by combining the two. However, once again, you will need to specify the container you wish to execute commands in. You can NOT omit the node statement.

import com.foo.utils.PodTemplates

slaveTemplates = new PodTemplates()

slaveTemplates.dockerTemplate {
  slaveTemplates.mavenTemplate {
    node('label') {
      container('docker') {
        sh 'echo hello from docker'
      }
      container('maven') {
        sh 'echo hello from maven'
      }
    }
  }
}

Using a different namespace

There might be cases where you need the agent pod to run inside a different namespace than the one configured with the cloud definition. For example, you may need the agent to run inside an ephemeral namespace for the sake of testing. For those cases you can explicitly configure a namespace, either using the UI or the pipeline.
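In the pipeline this is just the namespace field of podTemplate; a minimal sketch (the namespace name is illustrative and must already exist):

podTemplate(label: 'testpod', namespace: 'ephemeral-tests') {
    node('testpod') {
        // this agent pod runs in 'ephemeral-tests' instead of the cloud's default namespace
        sh 'echo running in an overridden namespace'
    }
}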

Specifying a different shell command other than /bin/sh

By default, the shell command is /bin/sh. In some cases you may want to use another shell command, such as /bin/bash.

podTemplate(label: 'my-label') {
  node('my-label') {
    stage('Run specific shell') {
      container(name: 'mycontainer', shell: '/bin/bash') {
        sh 'echo hello world'
      }
    }
  }
}

Container Configuration

When configuring a container in a pipeline podTemplate the following options are available:

podTemplate(label: 'mypod', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'mariadb',
        image: 'mariadb:10.1',
        ttyEnabled: true,
        privileged: false,
        alwaysPullImage: false,
        workingDir: '/home/jenkins',
        resourceRequestCpu: '50m',
        resourceLimitCpu: '100m',
        resourceRequestMemory: '100Mi',
        resourceLimitMemory: '200Mi',
        envVars: [
            envVar(key: 'MYSQL_ALLOW_EMPTY_PASSWORD', value: 'true'),
            secretEnvVar(key: 'MYSQL_PASSWORD', secretName: 'mysql-secret', secretKey: 'password'),
            ...
        ],
        ports: [portMapping(name: 'mysql', containerPort: 3306, hostPort: 3306)]
    ),
    ...
  ],
  volumes: [
    emptyDirVolume(mountPath: '/etc/mount1', memory: false),
    secretVolume(mountPath: '/etc/mount2', secretName: 'my-secret'),
    configMapVolume(mountPath: '/etc/mount3', configMapName: 'my-config'),
    hostPathVolume(mountPath: '/etc/mount4', hostPath: '/mnt/my-mount'),
    nfsVolume(mountPath: '/etc/mount5', serverAddress: '127.0.0.1', serverPath: '/', readOnly: true),
    persistentVolumeClaim(mountPath: '/etc/mount6', claimName: 'myClaim', readOnly: true)
  ],
  imagePullSecrets: [ 'pull-secret' ],
  annotations: [
    podAnnotation(key: "my-key", value: "my-value")
    ...
  ]) {
  ...
}

Declarative Pipeline

Declarative Pipeline support requires Jenkins 2.66+

Example at examples/declarative.groovy
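A minimal sketch of what such a declarative pipeline might look like, assuming the plugin's kubernetes agent directive (the label and image are illustrative):

pipeline {
    agent {
        kubernetes {
            label 'declarative-pod'
            containerTemplate {
                name 'maven'
                image 'maven:3.3.9-jdk-8-alpine'
                ttyEnabled true
                command 'cat'
            }
        }
    }
    stages {
        stage('Build') {
            steps {
                container('maven') {
                    sh 'mvn -version'
                }
            }
        }
    }
}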

Accessing container logs from the pipeline

If you use the containerTemplate to run some service in the background (e.g. a database for your integration tests), you might want to access its log from the pipeline. This can be done with the containerLog step, which prints the log of the requested container to the build log.

Required Parameters

  • name the name of the container to get logs from, as defined in podTemplate. The name parameter can be omitted in simple usage:
containerLog 'mongodb'

Optional Parameters

  • returnLog return the log instead of printing it to the build log (default:false)
  • tailingLines only return the last n lines of the log (optional)
  • sinceSeconds only return the last n seconds of the log (optional)
  • limitBytes limit output to n bytes (from the beginning of the log, not exact).

Also see the online help and examples/containerLog.groovy.
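Combining the optional parameters, a short sketch (the 'mongodb' container is assumed to be defined in the enclosing podTemplate):

// capture the last 100 lines instead of printing them to the build log
def dbLog = containerLog(name: 'mongodb', returnLog: true, tailingLines: 100)
echo "mongodb tail:\n${dbLog}"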

Constraints

Multiple containers can be defined in a pod. One of them is automatically created with the name jnlp, runs the Jenkins JNLP agent service with args ${computer.jnlpmac} ${computer.name}, and is the container acting as the Jenkins agent.

Other containers must run a long-running process, so the container does not exit. If the default entrypoint or command just runs something and exits, it should be overridden with something like cat with ttyEnabled: true.
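For instance, a hypothetical build container whose image would otherwise exit immediately:

// 'cat' with a tty blocks forever, keeping the container alive for the build
containerTemplate(name: 'builder', image: 'ubuntu', command: 'cat', ttyEnabled: true)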

WARNING: If you want to provide your own Docker image for the JNLP slave, you must name the container jnlp so it overrides the default one. Failing to do so will result in two slaves trying to connect concurrently to the master.

Over provisioning flags

By default, Jenkins spawns agents conservatively. Say there are 2 builds in queue: it won't spawn 2 executors immediately. It will spawn one executor and wait some time for the first executor to be freed before deciding to spawn the second. Jenkins makes sure every executor it spawns is utilized to the maximum. If you want to override this behaviour and spawn an executor for each build in queue immediately, without waiting, you can use these flags during Jenkins startup:

-Dhudson.slaves.NodeProvisioner.initialDelay=0
-Dhudson.slaves.NodeProvisioner.MARGIN=50
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85

Configuration on minikube

Create and start minikube.

The client certificate needs to be converted to PKCS12; it will need a password:

openssl pkcs12 -export -out ~/.minikube/minikube.pfx -inkey ~/.minikube/apiserver.key -in ~/.minikube/apiserver.crt -certfile ~/.minikube/ca.crt -passout pass:secret

Validate that the certificates work

curl --cacert ~/.minikube/ca.crt --cert ~/.minikube/minikube.pfx:secret https://$(minikube ip):8443

Add a Jenkins credential of type certificate, upload it from ~/.minikube/minikube.pfx, password secret.

Fill Kubernetes server certificate key with the contents of ~/.minikube/ca.crt.

Configuration on Google Container Engine

Create a cluster

gcloud container clusters create jenkins --num-nodes 1 --machine-type g1-small

and note the admin password and server certificate.

Or use Google Developer Console to create a Container Engine cluster, then run

gcloud container clusters get-credentials jenkins
kubectl config view --raw

The last command will output the Kubernetes cluster configuration, including the API server URL, admin password, and root certificate.

Debugging

First watch if the Jenkins agent pods are started. Make sure you are in the correct cluster and namespace.

kubectl get -a pods --watch

If they are in a different state than Running, use describe to get the events:

kubectl describe pods/my-jenkins-agent

If they are Running, use logs to get the log output:

kubectl logs -f pods/my-jenkins-agent jnlp

If pods are not started or for any other error, check the logs on the master side.

For more detail, configure a new Jenkins log recorder for org.csanchez.jenkins.plugins.kubernetes at ALL level.

To inspect the JSON messages sent back and forth to the Kubernetes API server you can configure a new Jenkins log recorder for okhttp3 at DEBUG level.

Deleting pods in bad state

kubectl get -a pods -o name --selector=jenkins=agent | xargs -I {} kubectl delete {}

Building and Testing

Integration tests will use the currently configured context autodetected from kube config file or service account.

Manual Testing

Run mvn clean install and copy target/kubernetes.hpi to the Jenkins plugins folder.

Running Kubernetes Integration Tests

Please note that the system you run mvn on needs to be reachable from the cluster. If the agents happen to connect to the wrong host, you can use jenkins.host.address as mentioned below.

Integration Tests with Minikube

For integration tests install and start minikube. Tests will detect it and run a set of integration tests in a new namespace.

Some integration tests run a local Jenkins, so the host that runs them needs to be accessible from the Kubernetes cluster. By default Jenkins will listen on the 192.168.64.1 interface only, for security reasons. If your minikube is not running in that network, pass connectorHost to Maven, i.e.

mvn clean install -DconnectorHost=$(minikube ip | sed -e 's/\([0-9]*\.[0-9]*\.[0-9]*\).*/\1.1/')

If you don't mind others in your network being able to use your test Jenkins, you could just use this:

mvn clean install -DconnectorHost=0.0.0.0

Then your test Jenkins will listen on all IP addresses, so that the build pods in your minikube VM will be able to connect to your host.

If your minikube is running in a VM (e.g. on VirtualBox) and the host running mvn does not have a public hostname for the VM to access, you can set the jenkins.host.address system property to the (host-only or NAT) IP of your host:

mvn clean install -Djenkins.host.address=192.168.99.1

Integration Tests in a Different Cluster

Ensure you create the namespaces and roles with the following commands, then run the tests in namespace kubernetes-plugin with the service account jenkins (edit src/test/kubernetes/service-account.yml to use a different service account):

kubectl create namespace kubernetes-plugin-test
kubectl create namespace kubernetes-plugin-test-overridden-namespace
kubectl create namespace kubernetes-plugin-test-overridden-namespace2
kubectl apply -n kubernetes-plugin-test -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace2 -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test -f src/test/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace -f src/test/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace2 -f src/test/kubernetes/service-account.yml

Docker image

Docker image for Jenkins, with the plugin installed. Based on the official image.

Running the Docker image

docker run --rm --name jenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home csanchez/jenkins-kubernetes

Running in Kubernetes

The example configuration will create a stateful set running Jenkins with a persistent volume, using a service account to authenticate to the Kubernetes API.

Running locally with minikube

A local testing cluster with one node can be created with minikube:

minikube start

You may need to set the correct permissions for host mounted volumes:

minikube ssh
sudo chown 1000:1000 /tmp/hostpath-provisioner/pvc-*

Then create the Jenkins namespace, controller and Service with:

kubectl create namespace kubernetes-plugin
kubectl config set-context $(kubectl config current-context) --namespace=kubernetes-plugin
kubectl create -f src/main/kubernetes/service-account.yml
kubectl create -f src/main/kubernetes/jenkins.yml

Get the URL to connect to with:

minikube service jenkins --namespace kubernetes-plugin --url

Running in Google Container Engine (GKE)

Assuming you created a Kubernetes cluster named jenkins, this is how to run both Jenkins and agents there.

Creating all the elements and setting the default namespace:

kubectl create namespace kubernetes-plugin
kubectl config set-context $(kubectl config current-context) --namespace=kubernetes-plugin
kubectl create -f src/main/kubernetes/service-account.yml
kubectl create -f src/main/kubernetes/jenkins.yml

Connect to the IP of the network load balancer created by Kubernetes, port 80. Get the IP (in this case 104.197.19.100) with kubectl describe services/jenkins (it may take a bit to populate):

$ kubectl describe services/jenkins
Name:           jenkins
Namespace:      default
Labels:         <none>
Selector:       name=jenkins
Type:           LoadBalancer
IP:             10.175.244.232
LoadBalancer Ingress:   104.197.19.100
Port:           http    80/TCP
NodePort:       http    30080/TCP
Endpoints:      10.172.1.5:8080
Port:           agent   50000/TCP
NodePort:       agent   32081/TCP
Endpoints:      10.172.1.5:50000
Session Affinity:   None
No events.

Until Kubernetes 1.4 removes the SNATing of source IPs, it seems that CSRF protection (enabled by default in Jenkins 2) needs to be configured to avoid WARNING: No valid crumb was included in request errors. This can be done by checking Enable proxy compatibility under Manage Jenkins -> Configure Global Security.

Configure Jenkins, adding the Kubernetes cloud under configuration, setting Kubernetes URL to the container engine cluster endpoint or simply https://kubernetes.default.svc.cluster.local. Under credentials, click Add and select Kubernetes Service Account, or alternatively use the Kubernetes API username and password. Select 'Certificate' as credentials type if the Kubernetes cluster is configured to use client certificates for authentication.

Using Kubernetes Service Account will cause the plugin to use the default token mounted inside the Jenkins pod. See Configure Service Accounts for Pods for more information.


You may want to set Jenkins URL to the internal service IP, http://10.175.244.232 in this case, to connect through the internal network.

Set Container Cap to a reasonable number for tests, e.g. 3.

Add an image with:

  • Docker image: jenkins/jnlp-slave
  • Jenkins agent root directory: /home/jenkins


Now it is ready to be used.

Tearing it down

kubectl delete namespace/kubernetes-plugin

Customizing the deployment

Modify CPUs and memory request/limits (Kubernetes Resource API)

Modify the file ./src/main/kubernetes/jenkins.yml with the desired limits:

resources:
  limits:
    cpu: 1
    memory: 1Gi
  requests:
    cpu: 0.5
    memory: 500Mi

Note: the JVM will use the memory requests as the heap limit (-Xmx).

Building

docker build -t csanchez/jenkins-kubernetes .
