It takes a backup of a given Kafka topic and stores it in either the local filesystem or S3. It can also restore a topic from the local filesystem or S3.
Apache Kafka Backup and Restore
General Notes
`LOG_LEVEL` values can be found at https://docs.python.org/3/library/logging.html#logging-levels

Required Python packages:
- confluent_kafka
- boto3
- google-cloud-storage
- pendulum
- azure-storage-blob
- minio
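As a sketch of how the `LOG_LEVEL` setting mentioned above might be applied (the variable name comes from the README, but reading it from the environment and the `INFO` fallback are assumptions for illustration):

```python
import logging
import os

def log_level_from_env(default: str = "INFO") -> int:
    """Map the LOG_LEVEL environment variable to a logging constant.

    Unknown or missing names fall back to INFO. The environment-variable
    mechanism and the default are assumptions, not the tool's actual code.
    """
    name = os.environ.get("LOG_LEVEL", default)
    return getattr(logging, name.upper(), logging.INFO)

logging.basicConfig(level=log_level_from_env())
```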
- It takes a backup of a given topic and stores it in the local filesystem, S3, or Azure.
- It automatically resumes from the point where it stopped, provided the same consumer group name is used before and after the crash.
- It uploads the `current.bin` file to S3; this file contains messages up to `NUMBER_OF_MESSAGE_PER_BACKUP_FILE`, but it is uploaded only together with the other backup files.
- `RETRY_UPLOAD_SECONDS` controls how often uploads to cloud storage are retried.
- `NUMBER_OF_KAFKA_THREADS` is used to parallelise reading from the Kafka topic. It should not exceed the number of partitions.
- `NUMBER_OF_MESSAGE_PER_BACKUP_FILE` keeps the message count per backup file consistent, but if the application is restarted it may vary for the first backup file.
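The batching rule above can be sketched as follows (illustrative only; the class and attribute names are invented, not taken from the tool's code):

```python
class BackupBuffer:
    """Sketch of the file-rotation rule: messages accumulate in current.bin
    until the per-file limit (NUMBER_OF_MESSAGE_PER_BACKUP_FILE) is reached,
    then the file is sealed and becomes eligible for upload alongside the
    earlier backup files."""

    def __init__(self, messages_per_file: int):
        self.messages_per_file = messages_per_file  # NUMBER_OF_MESSAGE_PER_BACKUP_FILE
        self.current = []   # contents of current.bin (not uploadable on its own)
        self.sealed = []    # completed backup files, eligible for upload

    def add(self, message):
        self.current.append(message)
        if len(self.current) >= self.messages_per_file:
            self.sealed.append(self.current)
            self.current = []
```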
- It restores messages from the given backup directory into the given topic.
- `RETRY_SECONDS` controls how often `FILESYSTEM_BACKUP_DIR` is re-read for new files.
- `RESTORE_PARTITION_STRATEGY` controls which partition messages are restored to: with `same`, messages are restored to the same topic partition; with `random`, messages are restored to all partitions randomly.
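A minimal sketch of the `RESTORE_PARTITION_STRATEGY` behaviour described above (the function name is hypothetical, not the tool's actual code):

```python
import random

def choose_partition(strategy: str, original_partition: int, num_partitions: int) -> int:
    """Pick the target partition for a restored message: "same" reuses the
    partition recorded in the backup, while "random" spreads messages across
    all partitions. Illustrative sketch only."""
    if strategy == "same":
        return original_partition
    return random.randrange(num_partitions)
```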
Known Issues
- NA