tiredofit/docker-db-backup

Backup multiple database types on a scheduled basis with many customizable options.
About

This image builds a container for backing up multiple types of database servers.

It backs up CouchDB, InfluxDB, MySQL/MariaDB, Microsoft SQL Server, MongoDB, PostgreSQL, Redis, and SQLite servers.

  • Dump to the local filesystem, or back up to S3-compatible services and Azure
  • Multiple backup job support
    • selectable start time for the first dump, either a time of day or relative to container start
    • selectable interval
    • selectable blackout periods during which no backups are scheduled
    • selectable database user and password
    • selectable cleanup and archive capabilities
    • selectable database name support: all databases, a single database, or multiple databases
    • back up all databases to separate files or to one single file
  • Checksum support: choose to have an MD5 or SHA1 hash generated after backup for verification
  • Compression support (none, gz, bz, xz, zstd)
  • Encryption support (passphrase and public key)
  • Notification on job failure via email, Matrix, Mattermost, Rocketchat, or a custom script
  • Zabbix metrics support
  • Hooks to execute pre and post backup jobs for customization purposes
  • Companion script to aid in restores

Maintainer

Table of Contents

Prerequisites and Assumptions

  • You must have a working connection to one of the supported DB Servers and appropriate credentials

Installation

Build from Source

Clone this repository and build the image with docker build <arguments> (imagename) .

Prebuilt Images

Builds of the image are available on Docker Hub

Builds of the image are also available on the Github Container Registry

docker pull ghcr.io/tiredofit/docker-db-backup:(imagetag)

The following image tags are available along with their tagged release based on what's written in the Changelog:

| Alpine Base | Tag     |
| ----------- | ------- |
| latest      | :latest |

docker pull docker.io/tiredofit/db-backup:(imagetag)

Multi Architecture

Images are built primarily for the amd64 architecture, and may also include builds for arm/v7, arm64 and others. These variants are all unsupported. Consider sponsoring my work so that I can work with various hardware. To see if this image supports multiple architectures, type docker manifest (image):(tag)

Configuration

Quick Start

  • The quickest way to get started is using docker-compose. See the examples folder for a series of example compose.yml files that can be modified for development or production use.

  • Set various environment variables to understand the capabilities of this image.

  • Map persistent storage for access to configuration and data files for backup.
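A minimal sketch of such a setup (image tag, values, and paths here are illustrative; see the environment variable tables below for the full set of options):

```shell
# One MariaDB backup job: first run at 23:30, daily thereafter,
# deleting local backups older than a week (10080 minutes).
docker run -d --name db-backup \
  -e DB01_TYPE=mysql \
  -e DB01_HOST=mariadb \
  -e DB01_NAME=database \
  -e DB01_USER=root \
  -e DB01_PASS=password \
  -e DB01_BACKUP_BEGIN=2330 \
  -e DB01_BACKUP_INTERVAL=1440 \
  -e DB01_CLEANUP_TIME=10080 \
  -v ./backups:/backup \
  tiredofit/db-backup
```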

Persistent Storage

The following directories are used for configuration and can be mapped for persistent storage.

| Directory            | Description                                                                        |
| -------------------- | ---------------------------------------------------------------------------------- |
| /backup              | Backups                                                                            |
| /assets/scripts/pre  | Optional: Put custom scripts in this directory to execute before backup operations |
| /assets/scripts/post | Optional: Put custom scripts in this directory to execute after backup operations  |
| /logs                | Optional: Log files for backup jobs                                                |

Environment Variables

Base Images used

This image relies on an Alpine Linux base image with an init system for added capabilities. Outgoing SMTP capabilities are handled via msmtp. Individual container performance monitoring is performed by zabbix-agent. Additional tools include: bash, curl, less, logrotate, and nano.

Be sure to view the following repositories to understand all the customizable options:

| Image   | Description                            |
| ------- | -------------------------------------- |
| OS Base | Customized image based on Alpine Linux |

Container Options

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| MODE | AUTO to use the internal scheduling routines, or MANUAL to perform backups only by your own means | AUTO |
| USER_DBBACKUP | The UID the image should read and write files as (username is dbbackup) | 10000 |
| GROUP_DBBACKUP | The GID the image should read and write files as (group name is dbbackup) | 10000 |
| LOG_PATH | Path to log files | /logs |
| TEMP_PATH | Perform backups and compression in this temporary directory | /tmp/backups/ |
| MANUAL_RUN_FOREVER | TRUE or FALSE; set to FALSE if you wish the container to exit after a manual backup | TRUE |
| DEBUG_MODE | If set to TRUE, print copious shell script messages to the container log; otherwise only basic messages are printed | FALSE |
| BACKUP_JOB_CONCURRENCY | How many backup jobs to run concurrently | 1 |

Job Defaults

If these are set and no other variables are set explicitly per job, they will be applied to all backup jobs.

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DEFAULT_BACKUP_LOCATION | Backup to FILESYSTEM, blobxfer (Azure), or S3-compatible services like S3, Minio, Wasabi | FILESYSTEM |
| DEFAULT_CHECKSUM | Either MD5, SHA1, or NONE | MD5 |
| DEFAULT_LOG_LEVEL | Log output on screen and in files: INFO, NOTICE, ERROR, WARN, DEBUG | notice |
| DEFAULT_RESOURCE_OPTIMIZED | Perform operations at a lower priority to the CPU and IO scheduler | FALSE |
| DEFAULT_SKIP_AVAILABILITY_CHECK | Skip the connectivity check before backing up | FALSE |

Compression Options

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DEFAULT_COMPRESSION | Use Gzip (GZ), Bzip2 (BZ), XZ (XZ), Zstandard (ZSTD), or NONE | ZSTD |
| DEFAULT_COMPRESSION_LEVEL | Numerical compression level; most allow 1 to 9, except ZSTD which allows 1 to 19 | 3 |
| DEFAULT_GZ_RSYNCABLE | Use --rsyncable (gzip only) for faster rsync transfers and incremental backup deduplication | FALSE |
| DEFAULT_ENABLE_PARALLEL_COMPRESSION | Use multiple cores when compressing backups (TRUE or FALSE) | TRUE |
| DEFAULT_PARALLEL_COMPRESSION_THREADS | Maximum number of threads to use when compressing, e.g. 8 | autodetected |
Encryption Options

Encryption occurs after compression and the encrypted filename will have a .gpg suffix.

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_ENCRYPT | Encrypt the file with GPG after backing up | FALSE | |
| DEFAULT_ENCRYPT_PASSPHRASE | Passphrase to encrypt the file with GPG | | x |
| DEFAULT_ENCRYPT_PUBLIC_KEY | (or) Path of public key to encrypt the file with GPG | | x |
| DEFAULT_ENCRYPT_PRIVATE_KEY | Path of private key to encrypt the file with GPG | | x |
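The image handles encryption itself; for restores you will need to decrypt first. A minimal sketch with GnuPG symmetric mode, assuming the passphrase method was used (the image's exact gpg flags are not documented here, so treat this as an illustration):

```shell
# Round trip with a passphrase, mirroring what a passphrase-encrypted
# backup looks like. The image does the encrypt step for you; you
# mainly need the decrypt step when restoring.
echo "-- dump contents --" > dump.sql
gpg --batch --pinentry-mode loopback --passphrase secret \
    --symmetric dump.sql                       # writes dump.sql.gpg
gpg --batch --pinentry-mode loopback --passphrase secret \
    --output restored.sql --decrypt dump.sql.gpg
cat restored.sql
```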
Scheduling Options

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DEFAULT_BACKUP_INTERVAL | How often to do a backup, in minutes after the first backup | 1440 (once per day) |
| DEFAULT_BACKUP_BEGIN | When to do the initial backup; defaults to immediate (+0). Must be in one of four formats: absolute HHMM (e.g. 2330 or 0415); relative +MM, minutes after container start (e.g. +0 for immediate, +10 for in 10 minutes, +90 for in an hour and a half); full datestamp (e.g. 2023-12-21 23:30:00); or a cron expression (e.g. 30 23 * * *), in which case BACKUP_INTERVAL is ignored | +0 |
| DEFAULT_CLEANUP_TIME | Value in minutes to delete old backups (only fired when the backup interval executes); 1440 would delete anything over 1 day old. Leave unset if you want to hold onto everything. | FALSE |
| DEFAULT_ARCHIVE_TIME | Value in minutes; move all files older than (x) from the backup path to the archive path | |
| DEFAULT_BACKUP_BLACKOUT_BEGIN | Use HHMM notation to start a blackout period where no backups occur, e.g. 0420 | |
| DEFAULT_BACKUP_BLACKOUT_END | Use HHMM notation to end the blackout period, e.g. 0430 | |

You may need to wrap your DEFAULT_BACKUP_BEGIN value in quotes for it to parse properly. There have been reports of values starting with a 0 being converted into a different format, which prevents the timer from starting at the correct time.
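For intuition, a cleanup setting of 1440 behaves like a one-day find sweep over the backup path (an illustration only; the image's internal implementation may differ):

```shell
# Simulate a backup directory with one fresh and one stale file.
mkdir -p /tmp/demo-backup
touch /tmp/demo-backup/new.sql.zst
touch -d '2 days ago' /tmp/demo-backup/old.sql.zst

# Delete anything older than 1440 minutes (1 day), as CLEANUP_TIME=1440 would.
find /tmp/demo-backup -type f -mmin +1440 -delete
ls /tmp/demo-backup    # only new.sql.zst remains
```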

Default Database Options
CouchDB

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_PORT | CouchDB Port | 5984 | x |

InfluxDB

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_PORT | InfluxDB Port (Version 1.x: 8088, Version 2.x: 8086) | | x |
| DEFAULT_INFLUX_VERSION | Which version of InfluxDB you are backing up from: 1.x or 2 series (2 is amd64 and aarch/armv8 only) | 2 | |

MariaDB/MySQL

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_PORT | MySQL / MariaDB Port | 3306 | x |
| DEFAULT_EXTRA_BACKUP_OPTS | Pass extra arguments to the backup command only, e.g. --extra-command | | |
| DEFAULT_EXTRA_ENUMERATION_OPTS | Pass extra arguments to the database enumeration command only, e.g. --extra-command | | |
| DEFAULT_EXTRA_OPTS | Pass extra arguments to both the backup and database enumeration commands, e.g. --extra-command | | |
| DEFAULT_MYSQL_CLIENT | Choose between the mariadb or mysql client for dump operations, for compatibility purposes | mariadb | |
| DEFAULT_MYSQL_EVENTS | Backup events | TRUE | |
| DEFAULT_MYSQL_MAX_ALLOWED_PACKET | Max allowed packet | 512M | |
| DEFAULT_MYSQL_SINGLE_TRANSACTION | Backup in a single transaction | TRUE | |
| DEFAULT_MYSQL_STORED_PROCEDURES | Backup stored procedures | TRUE | |
| DEFAULT_MYSQL_ENABLE_TLS | Enable TLS functionality | FALSE | |
| DEFAULT_MYSQL_TLS_VERIFY | (optional) If using TLS (by means of the MYSQL_TLS_* variables), verify the remote host | FALSE | |
| DEFAULT_MYSQL_TLS_VERSION | Which TLS versions to utilize | TLSv1.1,TLSv1.2,TLSv1.3 | |
| DEFAULT_MYSQL_TLS_CA_FILE | Filename of custom CA certificate for connecting via TLS | /etc/ssl/cert.pem | x |
| DEFAULT_MYSQL_TLS_CERT_FILE | Filename of client certificate for connecting via TLS | | x |
| DEFAULT_MYSQL_TLS_KEY_FILE | Filename of client key for connecting via TLS | | x |

Microsoft SQL

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_PORT | Microsoft SQL Port | 1433 | x |
| DEFAULT_MSSQL_MODE | Backup DATABASE or TRANSACTION logs | DATABASE | |

MongoDB

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_AUTH | (Optional) Authentication Database | | x |
| DEFAULT_PORT | MongoDB Port | 27017 | x |
| MONGO_CUSTOM_URI | To override the MongoDB connection string, enter it here, e.g. mongodb+srv://username:password@cluster.id.mongodb.net. This variable is parsed to populate the DB_NAME and DB_HOST variables for building backup filenames; you can override those by making your own entries. | | x |

Postgresql

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_AUTH | (Optional) Authentication Database | | x |
| DEFAULT_BACKUP_GLOBALS | Backup globals as part of the backup procedure | | |
| DEFAULT_EXTRA_BACKUP_OPTS | Pass extra arguments to the backup command only, e.g. --extra-command | | |
| DEFAULT_EXTRA_ENUMERATION_OPTS | Pass extra arguments to the database enumeration command only, e.g. --extra-command | | |
| DEFAULT_EXTRA_OPTS | Pass extra arguments to both the backup and database enumeration commands, e.g. --extra-command | | |
| DEFAULT_PORT | PostgreSQL Port | 5432 | x |

Redis

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DEFAULT_PORT | Redis Port | 6379 | x |
| DEFAULT_EXTRA_ENUMERATION_OPTS | Pass extra arguments to the database enumeration command only, e.g. --extra-command | | |
Default Storage Options

Options that are related to the value ofDEFAULT_BACKUP_LOCATION

Filesystem

If DEFAULT_BACKUP_LOCATION = FILESYSTEM then the following options are used:

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DEFAULT_CREATE_LATEST_SYMLINK | Create a symbolic link pointing to the last backup, in the format latest-(DB_TYPE)_(DB_NAME)_(DB_HOST) | TRUE |
| DEFAULT_FILESYSTEM_PATH | Directory where the database dumps are kept | /backup |
| DEFAULT_FILESYSTEM_PATH_PERMISSION | Permissions to apply to the backup directory | 700 |
| DEFAULT_FILESYSTEM_ARCHIVE_PATH | Optional: Directory where database dump archives are kept | ${DEFAULT_FILESYSTEM_PATH}/archive/ |
| DEFAULT_FILESYSTEM_PERMISSION | Permissions to apply to files | 600 |
S3

If DEFAULT_BACKUP_LOCATION = S3 then the following options are used:

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| DEFAULT_S3_BUCKET | S3 bucket name, e.g. mybucket | | x |
| DEFAULT_S3_KEY_ID | S3 Key ID (optional) | | x |
| DEFAULT_S3_KEY_SECRET | S3 Key Secret (optional) | | x |
| DEFAULT_S3_PATH | S3 pathname to save to (must NOT end in a trailing slash, e.g. 'backup') | | x |
| DEFAULT_S3_REGION | Region in which the bucket is defined, e.g. ap-northeast-2 | | x |
| DEFAULT_S3_HOST | Hostname (and port) of an S3-compatible service, e.g. minio:8080. Defaults to AWS. | | x |
| DEFAULT_S3_PROTOCOL | Protocol to connect to DEFAULT_S3_HOST, either http or https | https | x |
| DEFAULT_S3_EXTRA_OPTS | Add any extra options to the end of the aws-cli process execution | | x |
| DEFAULT_S3_CERT_CA_FILE | Map a volume and point to your custom CA bundle for verification, e.g. /certs/bundle.pem | | x |
| DEFAULT_S3_CERT_SKIP_VERIFY | (or) Skip verifying self-signed certificates when connecting | TRUE | |

  • When DEFAULT_S3_KEY_ID and/or DEFAULT_S3_KEY_SECRET are not set, the image will try to use an assigned IAM role (if any) for uploading the backup files to the S3 bucket.
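Since uploads are performed via aws-cli, an S3-compatible destination configured with the variables above corresponds roughly to an invocation like this (endpoint, bucket, and credentials are illustrative, and the image's exact flags may differ):

```shell
# What DEFAULT_S3_HOST/PROTOCOL/BUCKET/PATH amount to for a Minio target.
AWS_ACCESS_KEY_ID=mykey AWS_SECRET_ACCESS_KEY=mysecret \
aws --endpoint-url https://minio:8080 \
    s3 cp /backup/mysql_example_db.sql.zst s3://mybucket/backup/
```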
Azure

If DEFAULT_BACKUP_LOCATION = blobxfer then the following options are used:

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| DEFAULT_BLOBXFER_STORAGE_ACCOUNT | Microsoft Azure Cloud storage account name | | x |
| DEFAULT_BLOBXFER_STORAGE_ACCOUNT_KEY | Microsoft Azure Cloud storage account key | | x |
| DEFAULT_BLOBXFER_REMOTE_PATH | Remote Azure path | /docker-db-backup | x |
| DEFAULT_BLOBXFER_MODE | Azure storage mode: auto, file, append, block or page | auto | x |

  • When DEFAULT_BLOBXFER_MODE is set to auto it will use blob containers by default. If the DEFAULT_BLOBXFER_REMOTE_PATH path does not exist, a blob container with that name will be created.

This service uploads files from the backup target directory DEFAULT_FILESYSTEM_PATH. If a cleanup configuration in DEFAULT_CLEANUP_TIME is defined, the remote directory on Azure storage will also be cleaned automatically.

Hooks
Path Options
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| DEFAULT_SCRIPT_LOCATION_PRE | Location inside the container to execute bash scripts pre backup | /assets/scripts/pre/ |
| DEFAULT_SCRIPT_LOCATION_POST | Location inside the container to execute bash scripts post backup | /assets/scripts/post/ |
| DEFAULT_PRE_SCRIPT | A command to execute before backing up | |
| DEFAULT_POST_SCRIPT | A command to execute after backing up | |
Pre Backup

If you want to execute a custom script before a backup starts, drop bash scripts with the .sh extension into the location defined in DB01_SCRIPT_LOCATION_PRE. See the following example:

$ cat pre-script.sh

#!/bin/bash
# #### Example Pre Script
# #### $1=DBXX_TYPE (Type of Backup)
# #### $2=DBXX_HOST (Backup Host)
# #### $3=DBXX_NAME (Name of Database backed up)
# #### $4=BACKUP START TIME (Seconds since Epoch)
# #### $5=BACKUP FILENAME (Filename)
echo "${1} Backup Starting on ${2} for ${3} at ${4}. Filename: ${5}"

The script is invoked internally as:

## script DBXX_TYPE DBXX_HOST DBXX_NAME STARTEPOCH BACKUP_FILENAME
${f} "${backup_job_db_type}" "${backup_job_db_host}" "${backup_job_db_name}" "${backup_routines_start_time}" "${backup_job_file}"

Outputs the following on the console:

mysql Backup Starting on example-db for example at 1647370800. Filename: mysql_example_example-db_20220315-000000.sql.bz2

Post backup

If you want to execute a custom script at the end of a backup, drop bash scripts with the .sh extension into the location defined in DB01_SCRIPT_LOCATION_POST. To support legacy users, /assets/custom-scripts is also scanned and executed. See the following example:

$ cat post-script.sh

#!/bin/bash
# #### Example Post Script
# #### $1=EXIT_CODE (After running backup routine)
# #### $2=DBXX_TYPE (Type of Backup)
# #### $3=DBXX_HOST (Backup Host)
# #### $4=DBXX_NAME (Name of Database backed up)
# #### $5=BACKUP START TIME (Seconds since Epoch)
# #### $6=BACKUP FINISH TIME (Seconds since Epoch)
# #### $7=BACKUP TOTAL TIME (Seconds between Start and Finish)
# #### $8=BACKUP FILENAME (Filename)
# #### $9=BACKUP FILESIZE
# #### $10=HASH (If CHECKSUM enabled)
# #### $11=MOVE_EXIT_CODE
echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ending ${6} for a duration of ${7} seconds. Filename: ${8} Size: ${9} bytes MD5: ${10}"

The script is invoked internally as:

## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE MOVE_EXIT_CODE
${f} "${exit_code}" "${dbtype}" "${backup_job_db_host}" "${backup_job_db_name}" "${backup_routines_start_time}" "${backup_routines_finish_time}" "${backup_routines_total_time}" "${backup_job_file}" "${filesize}" "${checksum_value}" "${move_exit_code}"

Outputs the following on the console:

0 mysql Backup Completed on example-db for example on 1647370800 ending 1647370920 for a duration of 120 seconds. Filename: mysql_example_example-db_20220315-000000.sql.bz2 Size: 7795 bytes Hash: 952fbaafa30437494fdf3989a662cd40 0

If you wish to change the size value from bytes to megabytes, set the environment variable DB01_SIZE_VALUE=megabytes

You must make your scripts executable, otherwise an internal check will skip running them. If for some reason your filesystem or host is not detecting the executable bit correctly, use the environment variable DB01_POST_SCRIPT_SKIP_X_VERIFY=TRUE to bypass the check.
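A self-contained illustration of the executable-bit requirement and the pre-script argument order (filenames and values are examples):

```shell
# Create a pre-backup hook and make it executable so the internal check runs it.
cat > pre-script.sh <<'EOF'
#!/bin/bash
echo "${1} Backup Starting on ${2} for ${3} at ${4}. Filename: ${5}"
EOF
chmod +x pre-script.sh

# Invoke it the way the image would (positional arguments).
./pre-script.sh mysql example-db example 1647370800 dump.sql.zst
# prints: mysql Backup Starting on example-db for example at 1647370800. Filename: dump.sql.zst
```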

Job Backup Options

If DEFAULT_ variables are set and you do not wish for those settings to carry over into a job, set the corresponding job variable to the value unset. Otherwise, override them per backup job. Additional backup jobs can be scheduled by using the DB02_, DB03_, DB04_ ... prefixes. See Specific Database Options, which may overrule this list.

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| DB01_TYPE | Type of DB server to backup: couch, influx, mysql, mssql, pgsql, mongo, redis, sqlite3 | | |
| DB01_HOST | Server hostname, e.g. mariadb. For sqlite3, full path to the DB file, e.g. /backup/db.sqlite3 | | x |
| DB01_NAME | Schema name, e.g. database | | x |
| DB01_USER | Username for the database(s); can use root for MySQL | | x |
| DB01_PASS | Password for the database (optional if the DB doesn't require one) | | x |

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DB01_BACKUP_LOCATION | Backup to FILESYSTEM, blobxfer (Azure), or S3-compatible services like S3, Minio, Wasabi | FILESYSTEM |
| DB01_CHECKSUM | Either MD5, SHA1, or NONE | MD5 |
| DB01_EXTRA_BACKUP_OPTS | Pass extra arguments to the backup command only, e.g. --extra-command | |
| DB01_EXTRA_ENUMERATION_OPTS | Pass extra arguments to the database enumeration command only, e.g. --extra-command | |
| DB01_EXTRA_OPTS | Pass extra arguments to both the backup and database enumeration commands, e.g. --extra-command | |
| DB01_LOG_LEVEL | Log output on screen and in files: INFO, NOTICE, ERROR, WARN, DEBUG | debug |
| DB01_RESOURCE_OPTIMIZED | Perform operations at a lower priority to the CPU and IO scheduler | FALSE |
| DB01_SKIP_AVAILABILITY_CHECK | Skip the connectivity check before backing up | FALSE |
Compression Options

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DB01_COMPRESSION | Use Gzip (GZ), Bzip2 (BZ), XZ (XZ), Zstandard (ZSTD), or NONE | ZSTD |
| DB01_COMPRESSION_LEVEL | Numerical compression level; most allow 1 to 9, except ZSTD which allows 1 to 19 | 3 |
| DB01_GZ_RSYNCABLE | Use --rsyncable (gzip only) for faster rsync transfers and incremental backup deduplication | FALSE |
| DB01_ENABLE_PARALLEL_COMPRESSION | Use multiple cores when compressing backups (TRUE or FALSE) | TRUE |
| DB01_PARALLEL_COMPRESSION_THREADS | Maximum number of threads to use when compressing, e.g. 8 | autodetected |
Encryption Options

Encryption occurs after compression and the resulting filename will have a .gpg suffix.

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_ENCRYPT | Encrypt the file with GPG after backing up | FALSE | |
| DB01_ENCRYPT_PASSPHRASE | Passphrase to encrypt the file with GPG | | x |
| DB01_ENCRYPT_PUBLIC_KEY | (or) Path of public key to encrypt the file with GPG | | x |
| DB01_ENCRYPT_PRIVATE_KEY | Path of private key to encrypt the file with GPG | | x |
Scheduling Options

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DB01_BACKUP_INTERVAL | How often to do a backup, in minutes after the first backup | 1440 (once per day) |
| DB01_BACKUP_BEGIN | When to do the initial backup; defaults to immediate (+0). Must be in one of four formats: absolute HHMM (e.g. 2330 or 0415); relative +MM, minutes after container start (e.g. +0 for immediate, +10 for in 10 minutes, +90 for in an hour and a half); full datestamp (e.g. 2023-12-21 23:30:00); or a cron expression (e.g. 30 23 * * *), in which case BACKUP_INTERVAL is ignored | +0 |
| DB01_CLEANUP_TIME | Value in minutes to delete old backups (only fired when the backup interval executes); 1440 would delete anything over 1 day old. Leave unset if you want to hold onto everything. | FALSE |
| DB01_ARCHIVE_TIME | Value in minutes; move all files older than (x) from DB01_BACKUP_FILESYSTEM_PATH to DB01_BACKUP_FILESYSTEM_ARCHIVE_PATH, which is useful when pairing with an external backup system | |
| DB01_BACKUP_BLACKOUT_BEGIN | Use HHMM notation to start a blackout period where no backups occur, e.g. 0420 | |
| DB01_BACKUP_BLACKOUT_END | Use HHMM notation to end the blackout period, e.g. 0430 | |
Specific Database Options
CouchDB

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_PORT | CouchDB Port | 5984 | x |

InfluxDB

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_PORT | InfluxDB Port (Version 1.x: 8088, Version 2.x: 8086) | | x |
| DB01_INFLUX_VERSION | Which version of InfluxDB you are backing up from: 1.x or 2 series (2 is amd64 and aarch/armv8 only) | 2 | |

Your organization will be mapped to DB_USER and your root token will need to be mapped to DB_PASS. You may use DB_NAME=ALL to back up the entire set of databases. For DB_HOST, use the syntax http(s)://db-name

MariaDB/MySQL

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_EXTRA_OPTS | Pass extra arguments to both the backup and database enumeration commands, e.g. --extra-command | | |
| DB01_EXTRA_BACKUP_OPTS | Pass extra arguments to the backup command only, e.g. --extra-command | | |
| DB01_EXTRA_ENUMERATION_OPTS | Pass extra arguments to the database enumeration command only, e.g. --extra-command | | |
| DB01_NAME | Schema name, e.g. database, or ALL to backup all databases the user has access to. Backup multiple by separating with commas, e.g. db1,db2 | | x |
| DB01_NAME_EXCLUDE | If using ALL, exclude these comma-separated databases from being backed up | | x |
| DB01_SPLIT_DB | If using ALL, split each database into its own file as opposed to one single file | FALSE | |
| DB01_PORT | MySQL / MariaDB Port | 3306 | x |
| DB01_MYSQL_EVENTS | Backup events | TRUE | |
| DB01_MYSQL_MAX_ALLOWED_PACKET | Max allowed packet | 512M | |
| DB01_MYSQL_SINGLE_TRANSACTION | Backup in a single transaction | TRUE | |
| DB01_MYSQL_STORED_PROCEDURES | Backup stored procedures | TRUE | |
| DB01_MYSQL_ENABLE_TLS | Enable TLS functionality | FALSE | |
| DB01_MYSQL_TLS_VERIFY | (optional) If using TLS (by means of the MYSQL_TLS_* variables), verify the remote host | FALSE | |
| DB01_MYSQL_TLS_VERSION | Which TLS versions to utilize | TLSv1.1,TLSv1.2,TLSv1.3 | |
| DB01_MYSQL_TLS_CA_FILE | Filename of custom CA certificate for connecting via TLS | /etc/ssl/cert.pem | x |
| DB01_MYSQL_TLS_CERT_FILE | Filename of client certificate for connecting via TLS | | x |
| DB01_MYSQL_TLS_KEY_FILE | Filename of client key for connecting via TLS | | x |

Microsoft SQL

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_PORT | Microsoft SQL Port | 1433 | x |
| DB01_MSSQL_MODE | Backup DATABASE or TRANSACTION logs | DATABASE | |

MongoDB

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_AUTH | (Optional) Authentication Database | | |
| DB01_PORT | MongoDB Port | 27017 | x |
| DB01_MONGO_CUSTOM_URI | To override the MongoDB connection string, enter it here, e.g. mongodb+srv://username:password@cluster.id.mongodb.net. This variable is parsed to populate the DB_NAME and DB_HOST variables for building backup filenames; you can override those by making your own entries. | | x |

Postgresql

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_AUTH | (Optional) Authentication Database | | |
| DB01_BACKUP_GLOBALS | Backup globals after backing up the database (forces TRUE if _NAME=ALL) | FALSE | |
| DB01_EXTRA_OPTS | Pass extra arguments to both the backup and database enumeration commands, e.g. --extra-command | | |
| DB01_EXTRA_BACKUP_OPTS | Pass extra arguments to the backup command only, e.g. --extra-command | | |
| DB01_EXTRA_ENUMERATION_OPTS | Pass extra arguments to the database enumeration command only, e.g. --extra-command | | |
| DB01_NAME | Schema name, e.g. database, or ALL to backup all databases the user has access to. Backup multiple by separating with commas, e.g. db1,db2 | | x |
| DB01_SPLIT_DB | If using ALL, split each database into its own file as opposed to one single file | FALSE | |
| DB01_PORT | PostgreSQL Port | 5432 | x |

Redis

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_EXTRA_OPTS | Pass extra arguments to both the backup and database enumeration commands, e.g. --extra-command | | |
| DB01_EXTRA_BACKUP_OPTS | Pass extra arguments to the backup command only, e.g. --extra-command | | |
| DB01_PORT | Redis Port | 6379 | x |

SQLite

| Variable | Description | Default | _FILE |
| -------- | ----------- | ------- | ----- |
| DB01_HOST | Enter the full path to the DB file, e.g. /backup/db.sqlite3 | | x |
Specific Storage Options

Options that are related to the value of DB01_BACKUP_LOCATION

Filesystem

If DB01_BACKUP_LOCATION = FILESYSTEM then the following options are used:

| Variable | Description | Default |
| -------- | ----------- | ------- |
| DB01_CREATE_LATEST_SYMLINK | Create a symbolic link pointing to the last backup, in the format latest-(DB_TYPE)-(DB_NAME)-(DB_HOST) | TRUE |
| DB01_FILESYSTEM_PATH | Directory where the database dumps are kept | /backup |
| DB01_FILESYSTEM_PATH_PERMISSION | Permissions to apply to the backup directory | 700 |
| DB01_FILESYSTEM_ARCHIVE_PATH | Optional: Directory where database dump archives are kept | ${DB01_FILESYSTEM_PATH}/archive/ |
| DB01_FILESYSTEM_PERMISSION | Directory and file permissions to apply to files | 600 |
S3

If DB01_BACKUP_LOCATION = S3 then the following options are used:

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| DB01_S3_BUCKET | S3 bucket name, e.g. mybucket | | x |
| DB01_S3_KEY_ID | S3 Key ID (optional) | | x |
| DB01_S3_KEY_SECRET | S3 Key Secret (optional) | | x |
| DB01_S3_PATH | S3 pathname to save to (must NOT end in a trailing slash, e.g. 'backup') | | x |
| DB01_S3_REGION | Region in which the bucket is defined, e.g. ap-northeast-2 | | x |
| DB01_S3_HOST | Hostname (and port) of an S3-compatible service, e.g. minio:8080. Defaults to AWS. | | x |
| DB01_S3_PROTOCOL | Protocol to connect to DB01_S3_HOST, either http or https | https | x |
| DB01_S3_EXTRA_OPTS | Add any extra options to the end of the aws-cli process execution | | x |
| DB01_S3_CERT_CA_FILE | Map a volume and point to your custom CA bundle for verification, e.g. /certs/bundle.pem | | x |
| DB01_S3_CERT_SKIP_VERIFY | (or) Skip verifying self-signed certificates when connecting | TRUE | |

When DB01_S3_KEY_ID and/or DB01_S3_KEY_SECRET are not set, the image will try to use an assigned IAM role (if any) for uploading the backup files to the S3 bucket.

Azure

If DB01_BACKUP_LOCATION = blobxfer then the following options are used:

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| DB01_BLOBXFER_STORAGE_ACCOUNT | Microsoft Azure Cloud storage account name | | x |
| DB01_BLOBXFER_STORAGE_ACCOUNT_KEY | Microsoft Azure Cloud storage account key | | x |
| DB01_BLOBXFER_REMOTE_PATH | Remote Azure path | /docker-db-backup | x |
| DB01_BLOBXFER_REMOTE_MODE | Azure storage mode: auto, file, append, block or page | auto | x |

  • When the mode is set to auto it will use blob containers by default. If the remote path does not exist, a blob container with that name will be created.

This service uploads files from the backup directory DB01_BACKUP_FILESYSTEM_PATH. If a cleanup configuration in DB01_CLEANUP_TIME is defined, the remote directory on Azure storage will also be cleaned automatically.

Hooks
Path Options
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| DB01_SCRIPT_LOCATION_PRE | Location inside the container to execute bash scripts pre backup | /assets/scripts/pre/ |
| DB01_SCRIPT_LOCATION_POST | Location inside the container to execute bash scripts post backup | /assets/scripts/post/ |
| DB01_PRE_SCRIPT | A command to execute before backing up | |
| DB01_POST_SCRIPT | A command to execute after backing up | |
Pre Backup

If you want to execute a custom script before a backup starts, drop bash scripts with the .sh extension into the location defined in DB01_SCRIPT_LOCATION_PRE. See the following example:

$ cat pre-script.sh

#!/bin/bash
# #### Example Pre Script
# #### $1=DB01_TYPE (Type of Backup)
# #### $2=DB01_HOST (Backup Host)
# #### $3=DB01_NAME (Name of Database backed up)
# #### $4=BACKUP START TIME (Seconds since Epoch)
# #### $5=BACKUP FILENAME (Filename)
echo "${1} Backup Starting on ${2} for ${3} at ${4}. Filename: ${5}"

The script is invoked internally as:

## script DB01_TYPE DB01_HOST DB01_NAME STARTEPOCH BACKUP_FILENAME
${f} "${backup_job_db_type}" "${backup_job_db_host}" "${backup_job_db_name}" "${backup_routines_start_time}" "${backup_job_filename}"

Outputs the following on the console:

mysql Backup Starting on example-db for example at 1647370800. Filename: mysql_example_example-db_20220315-000000.sql.bz2

Post backup

If you want to execute a custom script at the end of a backup, drop bash scripts with the .sh extension into the location defined in DB01_SCRIPT_LOCATION_POST. To support legacy users, /assets/custom-scripts is also scanned and executed. See the following example:

$ cat post-script.sh

#!/bin/bash
# #### Example Post Script
# #### $1=EXIT_CODE (After running backup routine)
# #### $2=DB_TYPE (Type of Backup)
# #### $3=DB_HOST (Backup Host)
# #### $4=DB_NAME (Name of Database backed up)
# #### $5=BACKUP START TIME (Seconds since Epoch)
# #### $6=BACKUP FINISH TIME (Seconds since Epoch)
# #### $7=BACKUP TOTAL TIME (Seconds between Start and Finish)
# #### $8=BACKUP FILENAME (Filename)
# #### $9=BACKUP FILESIZE
# #### $10=HASH (If CHECKSUM enabled)
# #### $11=MOVE_EXIT_CODE
echo "${1} ${2} Backup Completed on ${3} for ${4} on ${5} ending ${6} for a duration of ${7} seconds. Filename: ${8} Size: ${9} bytes MD5: ${10}"

The script is invoked internally as:

## script EXIT_CODE DB_TYPE DB_HOST DB_NAME STARTEPOCH FINISHEPOCH DURATIONEPOCH BACKUP_FILENAME FILESIZE CHECKSUMVALUE MOVE_EXIT_CODE
${f} "${exit_code}" "${dbtype}" "${dbhost}" "${dbname}" "${backup_routines_start_time}" "${backup_routines_finish_time}" "${backup_routines_total_time}" "${backup_job_filename}" "${filesize}" "${checksum_value}" "${move_exit_code}"

Outputs the following on the console:

0 mysql Backup Completed on example-db for example on 1647370800 ending 1647370920 for a duration of 120 seconds. Filename: mysql_example_example-db_20220315-000000.sql.bz2 Size: 7795 bytes Hash: 952fbaafa30437494fdf3989a662cd40 0

If you wish to change the size value from bytes to megabytes, set the environment variable DB01_SIZE_VALUE=megabytes

You must make your scripts executable, otherwise an internal check will skip running them. If for some reason your filesystem or host is not detecting the executable bit correctly, use the environment variable DB01_POST_SCRIPT_SKIP_X_VERIFY=TRUE to bypass the check.

Notifications

This image can send notifications via a handful of services when a backup job fails. This is a global option and cannot be set individually per backup job.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| ENABLE_NOTIFICATIONS | Enable notifications | FALSE |
| NOTIFICATION_TYPE | CUSTOM, EMAIL, MATRIX, MATTERMOST, ROCKETCHAT; separate multiple by commas | |
Custom Notifications

The following arguments are sent to the custom script; use them how you wish:

$1 unix timestamp
$2 logfile
$3 errorcode
$4 subject
$5 body/error message

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| NOTIFICATION_CUSTOM_SCRIPT | Path and name of the custom script to execute for notification | |
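A minimal custom notification script under that argument contract might look like this (notify.sh, its location, and the message format are hypothetical):

```shell
# A script suitable for NOTIFICATION_CUSTOM_SCRIPT: format the five
# positional arguments into a single log line.
cat > notify.sh <<'EOF'
#!/bin/bash
# $1=unix timestamp $2=logfile $3=errorcode $4=subject $5=body/error message
echo "[${3}] ${4}: ${5} (at ${1}, log: ${2})"
EOF
chmod +x notify.sh

./notify.sh 1647370800 /logs/job.log 1 "Backup failed" "connection refused"
# prints: [1] Backup failed: connection refused (at 1647370800, log: /logs/job.log)
```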
Email Notifications

See the base image listed above for more mail environment variables.

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| MAIL_FROM | Email address to send error mail from | | |
| MAIL_TO | Email address to send error mail to; send to multiple by separating with commas | | |
| SMTP_HOST | SMTP server to use for sending mail | | x |
| SMTP_PORT | SMTP port to use for sending mail | | x |
Matrix Notifications

Fetch a MATRIX_ACCESS_TOKEN:

curl -XPOST -d '{"type":"m.login.password", "user":"myuserid", "password":"mypass"}' "https://matrix.org/_matrix/client/r0/login"

Copy the access_token from the JSON response, which will look something like this:

{"access_token":"MDAxO...blahblah","refresh_token":"MDAxO...blahblah","home_server":"matrix.org","user_id":"@myuserid:matrix.org"}

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| MATRIX_HOST | URL (https://matrix.example.com) of the Matrix homeserver | | x |
| MATRIX_ROOM | Room ID to send to, e.g. \!abcdef:example.com; send to multiple by separating with commas | | x |
| MATRIX_ACCESS_TOKEN | Access token of a user authorized to send to the room | | x |
Mattermost Notifications
| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| MATTERMOST_WEBHOOK_URL | Full URL to send webhook notifications to | | x |
| MATTERMOST_RECIPIENT | Channel or user to send webhook notifications to; send to multiple by separating with commas | | x |
| MATTERMOST_USERNAME | Username to send as, e.g. tiredofit | | x |

Rocketchat Notifications

| Parameter | Description | Default | _FILE |
| --------- | ----------- | ------- | ----- |
| ROCKETCHAT_WEBHOOK_URL | Full URL to send webhook notifications to | | x |
| ROCKETCHAT_RECIPIENT | Channel or user to send webhook notifications to; send to multiple by separating with commas | | x |
| ROCKETCHAT_USERNAME | Username to send as, e.g. tiredofit | | x |

Maintenance

Shell Access

For debugging and maintenance purposes you may want to access the container's shell:

docker exec -it (whatever your container name is) bash

Manual Backups

Manual backups can be performed by entering the container and typing backup-now. This will execute all the backup tasks scheduled by means of the DBXX_ variables. Alternatively, to execute a single job on its own, type backup01-now (substituting your own job number). There is no concurrency; jobs are executed sequentially.

  • Recently there was a request to have the container work with Kubernetes cron scheduling. This can theoretically be accomplished by setting MODE=MANUAL and MANUAL_RUN_FOREVER=FALSE. You would also want to disable a few features from the upstream base images, specifically CONTAINER_ENABLE_SCHEDULING and CONTAINER_ENABLE_MONITORING. This should allow the container to start, execute a backup, and then exit cleanly. An alternative way to run the script is to execute /etc/services.available/10-db-backup/run.
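Under those assumptions, a one-shot invocation suitable for an external scheduler could look like this (image tag and DB values are illustrative):

```shell
# Run once, back up, and exit cleanly; an external scheduler
# (e.g. a Kubernetes CronJob) handles the recurrence.
docker run --rm \
  -e MODE=MANUAL \
  -e MANUAL_RUN_FOREVER=FALSE \
  -e CONTAINER_ENABLE_SCHEDULING=FALSE \
  -e CONTAINER_ENABLE_MONITORING=FALSE \
  -e DB01_TYPE=pgsql -e DB01_HOST=postgres -e DB01_NAME=database \
  -e DB01_USER=postgres -e DB01_PASS=password \
  -v ./backups:/backup \
  tiredofit/db-backup
```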

Restoring Databases

Entering the container and executing restore will launch a menu-based script to restore your backups. MariaDB, Postgres, and Mongo are supported.

You will be presented with a series of menus allowing you to choose:

  • What file to restore
  • What type of DB Backup
  • What Host to restore to
  • What Database Name to restore to
  • What Database User to use
  • What Database Password to use
  • What Database Port to use

The image will try to auto-detect the backup type, hostname, and database name based on the filename. It will also let you use the environment variables or Docker secrets that were used to back up the databases.

The script can also be executed non-interactively by using the following syntax:

`restore <filename> <db_type> <db_hostname> <db_name> <db_user> <db_pass> <db_port>`

If you only enter some of the arguments, you will be prompted to fill them in.
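For example (all values hypothetical), restoring a MariaDB dump non-interactively from inside the container:

```shell
# filename, type, host, name, user, pass, port
restore /backup/mysql_example_example-db_20220315-000000.sql.bz2 mysql example-db example root password 3306
```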

Support

These images were built to serve a specific need in a production environment and gradually have had more functionality added based on requests from the community.

Usage

  • The Discussions board is a great place to work with the community on tips and tricks for using this image.
  • Sponsor me for personalized support

Bugfixes

  • Please submit a Bug Report if something isn't working as expected. I'll do my best to issue a fix in short order.

Feature Requests

  • Feel free to submit a feature request; however, there is no guarantee that it will be added or on what timeline.
  • Sponsor me regarding development of features.

Updates

  • Best effort to track upstream changes; more priority if I am actively using the image in a production environment.
  • Sponsor me for up to date releases.

License

MIT. See LICENSE for more details.
