m99coder/postgres-on-kubernetes
# PostgreSQL on Kubernetes
```shell
# install minikube to create a single-node cluster
$ brew install minikube

# start cluster using VMs
$ minikube start --vm=true

# create a custom namespace and context
$ kubectl create namespace postgres
$ kubectl config set-context postgres --namespace postgres --cluster minikube --user minikube
$ kubectl config use-context postgres
```
In order to persist the data stored in PostgreSQL, it’s necessary to create Persistent Volumes that have a pod-independent lifecycle. Within a Stateful Set, a so-called Persistent Volume Claim with a specific Storage Class can be configured.

There are two ways to create Persistent Volumes: either you manually create a volume per replica of PostgreSQL, or you configure dynamic provisioning. For simplicity, we choose the manual approach first.
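The `pv-0.yaml` manifest itself is not reproduced in this walkthrough. A minimal sketch of what it might contain, assuming minikube’s `hostPath` storage (the path `/data/pv-postgresql-0` is an assumption), with capacity, access mode, reclaim policy, and storage class matching the `kubectl get pv` output:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-postgresql-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: default
  hostPath:
    # assumed location on the minikube VM
    path: /data/pv-postgresql-0
```

`pv-1.yaml` and `pv-2.yaml` would differ only in `metadata.name` and the `hostPath` path.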
```shell
# create 3 persistent volumes
$ kubectl apply -f pv-0.yaml
$ kubectl apply -f pv-1.yaml
$ kubectl apply -f pv-2.yaml

# list persistent volumes
$ kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-postgresql-0   1Gi        RWO            Retain           Available           default                 9s
pv-postgresql-1   1Gi        RWO            Retain           Available           default                 7s
pv-postgresql-2   1Gi        RWO            Retain           Available           default                 3s
```
A Headless Service is specified with `clusterIP: None` and does not use an L4 load balancer. By also defining a selector, the endpoints controller creates Endpoint records and modifies the DNS configuration so that A records are returned that point directly to the pods.
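The `svc.yaml` manifest is not shown in this walkthrough. A sketch consistent with the `kubectl describe svc` output (name, label, selector, and port are taken from there) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgresql-svc
  labels:
    sfs: postgresql-sfs
spec:
  # headless: no cluster IP, DNS returns pod A records directly
  clusterIP: None
  selector:
    sfs: postgresql-sfs
  ports:
    - name: postgresql-port
      port: 5432
      targetPort: 5432
```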
```shell
# create headless service
$ kubectl apply -f svc.yaml

# describe headless service
$ kubectl describe svc postgresql-svc
Name:              postgresql-svc
Namespace:         postgres
Labels:            sfs=postgresql-sfs
Annotations:       <none>
Selector:          sfs=postgresql-sfs
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              postgresql-port  5432/TCP
TargetPort:        5432/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
```
PostgreSQL uses environment variables for configuration. The most important one for the official PostgreSQL Docker image is `POSTGRES_PASSWORD`. We utilize Secrets to inject the respective value into the container later on.
```shell
# create secret from literal
$ kubectl create secret generic postgresql-secrets \
    --from-literal=POSTGRES_PASSWORD=tes6Aev8

# describe secret
$ kubectl describe secrets postgresql-secrets
Name:         postgresql-secrets
Namespace:    postgres
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
POSTGRES_PASSWORD:  8 bytes
```
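Equivalently, the same secret could be declared as a manifest instead of via `--from-literal`. Note that values under `data` must be base64-encoded (shown here for the example password `tes6Aev8`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-secrets
type: Opaque
data:
  # base64 of "tes6Aev8"
  POSTGRES_PASSWORD: dGVzNkFldjg=
```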
A Stateful Set is similar to a Replica Set in the sense that it also manages pods for the configured number of replicas. In contrast to a Replica Set, it maintains a sticky identity for each of them: the pods are created in a fixed, sequential order and deleted in reverse order. Their network identity is stable as well, which enables us to reference them by their automatically assigned DNS hostnames inside the cluster.
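The `sfs.yaml` manifest is not reproduced here. A sketch of how such a Stateful Set might look, assuming the volume claim template name `postgresql-pvc` and the official image’s default data directory mount (both assumptions), is:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql-sfs
spec:
  serviceName: postgresql-svc
  replicas: 3
  selector:
    matchLabels:
      sfs: postgresql-sfs
  template:
    metadata:
      labels:
        sfs: postgresql-sfs
    spec:
      containers:
        - name: postgresql
          image: postgres:13.3
          ports:
            - containerPort: 5432
          env:
            # inject the password from the secret created above
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-secrets
                  key: POSTGRES_PASSWORD
          volumeMounts:
            - name: postgresql-pvc
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgresql-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        # binds to the manually created PVs
        storageClassName: default
        resources:
          requests:
            storage: 1Gi
```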
```shell
# create stateful set with 3 replicas
$ kubectl apply -f sfs.yaml

# list stateful sets
$ kubectl get statefulsets
NAME             READY   AGE
postgresql-sfs   3/3     16s

# list pods
$ kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
postgresql-sfs-0   1/1     Running   0          86s
postgresql-sfs-1   1/1     Running   0          83s
postgresql-sfs-2   1/1     Running   0          80s

# inspect logs of a random pod
$ kubectl logs postgresql-sfs-0

PostgreSQL Database directory appears to contain a database; Skipping initialization

2021-08-04 08:19:50.832 UTC [1] LOG:  starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2021-08-04 08:19:50.832 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2021-08-04 08:19:50.832 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2021-08-04 08:19:50.835 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-08-04 08:19:50.838 UTC [26] LOG:  database system was shut down at 2021-08-03 14:33:17 UTC
2021-08-04 08:19:50.843 UTC [1] LOG:  database system is ready to accept connections

# describe the service to see that 3 endpoints were created automatically
$ kubectl describe svc postgresql-svc
Name:              postgresql-svc
Namespace:         postgres
Labels:            sfs=postgresql-sfs
Annotations:       <none>
Selector:          sfs=postgresql-sfs
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              postgresql-port  5432/TCP
TargetPort:        5432/TCP
Endpoints:         172.17.0.3:5432,172.17.0.4:5432,172.17.0.5:5432
Session Affinity:  None
Events:            <none>
```
You can connect to PostgreSQL directly by starting `bash` within a particular pod.
```shell
$ kubectl exec -it postgresql-sfs-0 -- bash
root@postgresql-sfs-0:/# PGPASSWORD=tes6Aev8 psql -U postgres
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.

postgres=# exit
root@postgresql-sfs-0:/# exit
exit
```
An alternative approach is to run a temporary PostgreSQL container and use the included `psql` to connect to one of the database instances. The hostname is the automatically created DNS name of the service we deployed earlier. The format of that hostname is `<service-name>.<namespace>.svc.cluster.local`, and it resolves to a random pod running a database server.
```shell
$ kubectl run -it --rm pg-psql --image=postgres:13.3 --restart=Never \
    --env="PGPASSWORD=tes6Aev8" -- \
    psql -h postgresql-svc.postgres.svc.cluster.local -U postgres
If you don't see a command prompt, try pressing enter.
postgres=# \q
pod "pg-psql" deleted
```
To check that the DNS hostname works, we deploy a busybox instance.
```shell
$ kubectl run -it --rm busybox --image=busybox --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # ping postgresql-svc.postgres.svc.cluster.local
PING postgresql-svc.postgres.svc.cluster.local (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.828 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.080 ms
^C
--- postgresql-svc.postgres.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.080/0.329/0.828 ms
/ # nslookup postgresql-svc.postgres.svc.cluster.local
Server:         10.96.0.10
Address:        10.96.0.10:53

Name:   postgresql-svc.postgres.svc.cluster.local
Address: 172.17.0.5
Name:   postgresql-svc.postgres.svc.cluster.local
Address: 172.17.0.4
Name:   postgresql-svc.postgres.svc.cluster.local
Address: 172.17.0.3

*** Can't find postgresql-svc.postgres.svc.cluster.local: No answer

/ # exit
pod "busybox" deleted
```
First we create a Config Map for the following values:

- `DB_HOST`: DNS hostname of our PostgreSQL service
- `DB_USER`: PostgreSQL database user

`DB_PASSWORD` will be set using the previously created secret. `POOL_MODE` is set to `transaction` and `SERVER_RESET_QUERY` to `DISCARD ALL` by default in the respective deployment manifest.
```shell
$ kubectl create configmap pgbouncer-configs \
    --from-literal=DB_HOST=postgresql-svc.postgres.svc.cluster.local \
    --from-literal=DB_USER=postgres
```
Now we can apply our deployment for PgBouncer, which is based on this Docker image for PgBouncer 1.15.0.
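The `pgbouncer.yaml` manifest is not shown in this walkthrough. A sketch of how such a deployment might look (the image name `edoburu/pgbouncer:1.15.0` and the label `app: pgbouncer` are assumptions; the environment variables follow the Config Map and Secret described above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgbouncer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgbouncer
  template:
    metadata:
      labels:
        app: pgbouncer
    spec:
      containers:
        - name: pgbouncer
          image: edoburu/pgbouncer:1.15.0
          ports:
            - containerPort: 5432
          env:
            # defaults mentioned above
            - name: POOL_MODE
              value: transaction
            - name: SERVER_RESET_QUERY
              value: DISCARD ALL
            # from the Config Map
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: pgbouncer-configs
                  key: DB_HOST
            - name: DB_USER
              valueFrom:
                configMapKeyRef:
                  name: pgbouncer-configs
                  key: DB_USER
            # from the previously created Secret
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-secrets
                  key: POSTGRES_PASSWORD
```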
```shell
$ kubectl apply -f pgbouncer.yaml
deployment.apps/pgbouncer created

# now we create a service for the deployment
$ kubectl expose deployment pgbouncer --name=pgbouncer-svc
service/pgbouncer exposed
```
Let’s check the server list.
```shell
$ kubectl run -it --rm pg-psql --image=postgres:13.3 --restart=Never \
    --env="PGPASSWORD=tes6Aev8" -- \
    psql -h pgbouncer-svc.postgres.svc.cluster.local -U postgres -d pgbouncer
If you don't see a command prompt, try pressing enter.
pgbouncer=# \x
Expanded display is on.
pgbouncer=# SHOW SERVERS;
-[ RECORD 1 ]+------------------------
type         | S
user         | postgres
database     | postgres
state        | used
addr         | 172.17.0.5
port         | 5432
local_addr   | 172.17.0.6
local_port   | 59960
connect_time | 2021-08-04 11:25:19 UTC
request_time | 2021-08-04 11:25:59 UTC
wait         | 0
wait_us      | 0
close_needed | 0
ptr          | 0x7fa02cb54100
link         |
remote_pid   | 183
tls          |
pgbouncer=# \q
pod "pg-psql" deleted
```
As we can see, PgBouncer only detects one server so far. The reason is that each server is listening on the same host and port. We need to fix that.