metal-stack/csi-driver-lvm
CSI DRIVER LVM utilizes local storage of Kubernetes nodes to provide persistent storage for pods.
It automatically creates hostPath-based persistent volumes on the nodes.
Underneath, it creates an LVM logical volume on the local disks. The disks to use must be specified as a comma-separated list of grok patterns.
This CSI driver is derived from csi-driver-host-path and csi-lvm.
For the special case of block volumes, filesystem expansion has to be performed by the app using the block device.
The persistent volumes created by this CSI driver are strictly node-affine to the node on which the pod was scheduled. This is intentional and prevents pods from starting without the LV data, which resides only on the specific node in the Kubernetes cluster.
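For illustration, a node-affine PersistentVolume constrains scheduling roughly as sketched below. This is an assumption-laden example, not the driver's exact output: the driver name, volume name, and node name are placeholders.

```yaml
# Illustrative sketch only: a PV pinned to a single node via nodeAffinity.
# Driver name, volume name, and hostname are placeholders, not actual output.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: lvm.csi.metal-stack.io   # assumed driver name for illustration
    volumeHandle: example-pv
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1      # the node holding the LV
```

Pods bound to such a PV can only be scheduled onto `worker-node-1`, which is exactly why an evicted pod can get stuck when that node disappears.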
Consequently, if a pod is evicted (potentially due to cluster autoscaling or updates to the worker node), the pod may become stuck. In certain scenarios, it's acceptable for the pod to start on another node, despite the potential for data loss. The csi-driver-lvm-controller can capture these events and automatically delete the PVC without requiring manual intervention by an operator.
To use this functionality, the following is needed:

- This only works for `StatefulSets` with `volumeClaimTemplates` and volume references to the `csi-driver-lvm` storage class.
- In addition, the `Pod` or `PersistentVolumeClaim` managed by the `StatefulSet` needs the annotation: `metal-stack.io/csi-driver-lvm.is-eviction-allowed: true`
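As a hedged sketch of the requirements above, a `volumeClaimTemplate` carrying the annotation could look like this (all names, images, and sizes are placeholders):

```yaml
# Hypothetical StatefulSet excerpt; names, image, and size are examples only.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example
spec:
  serviceName: example
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          # allows the controller to delete the PVC on eviction
          metal-stack.io/csi-driver-lvm.is-eviction-allowed: "true"
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: csi-driver-lvm-linear
        resources:
          requests:
            storage: 1Gi
```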
Helm charts for installation are located in a separate repository called helm-charts. If you would like to contribute to the helm chart, please raise an issue or pull request there.
You have to set the devicePattern for your hardware to specify which disks should be used to create the volume group.
```
helm install --repo https://helm.metal-stack.io mytest csi-driver-lvm --set lvm.devicePattern='/dev/nvme[0-9]n[0-9]'
```

Now you can use one of the following storage classes:
- `csi-driver-lvm-linear`
- `csi-driver-lvm-mirror`
- `csi-driver-lvm-striped`
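As a minimal sketch, a PersistentVolumeClaim using one of these storage classes could look like the following (claim name and size are placeholders):

```yaml
# Hypothetical PVC; only the storageClassName comes from the chart above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-driver-lvm-linear
  resources:
    requests:
      storage: 1Gi
```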
To get the previous, now deprecated `csi-lvm-sc-linear`, ... storage classes, set the helm-chart value `compat03x=true`.
If you want to migrate your existing PVC to / from csi-driver-lvm, you can use korb.
- implement CreateSnapshot(), ListSnapshots(), DeleteSnapshot()
```
kubectl apply -f examples/csi-pvc-raw.yaml
kubectl apply -f examples/csi-pod-raw.yaml

kubectl apply -f examples/csi-pvc.yaml
kubectl apply -f examples/csi-app.yaml

kubectl delete -f examples/csi-pod-raw.yaml
kubectl delete -f examples/csi-pvc-raw.yaml

kubectl delete -f examples/csi-app.yaml
kubectl delete -f examples/csi-pvc.yaml
```
In order to run the integration tests locally, you need to create two loop devices on your host machine. Make sure the loop device mount paths are not used on your system (the default path is /dev/loop10{0,1}).
You can create these loop devices like this:
```
for i in 100 101; do fallocate -l 1G loop${i}.img; sudo losetup /dev/loop${i} loop${i}.img; done
sudo losetup -a
# https://github.com/util-linux/util-linux/issues/3197
# use this for recreation or cleanup
# for i in 100 101; do sudo losetup -d /dev/loop${i}; rm -f loop${i}.img; done
```
You can then run the tests against a kind cluster by running:

```
make test
```

To recreate or clean up the kind cluster:

```
make test-cleanup
```