The definitive all-in-one SnapRAID script on Linux. I hope you'll agree :).
There are many SnapRAID scripts out there, but none has the features I want. So I made my own, inspired by existing solutions.
It is meant to be run periodically (daily), do the heavy lifting and send an email you will actually read.
Supports single and dual parity configurations. It is highly customizable and has been tested with Debian 11/12 and OpenMediaVault 6/7.
Contributions are welcome!
- After some preliminary checks, the script executes `snapraid diff` to figure out whether parity info is out of date, i.e. whether anything has changed since the last execution. During this step, the script also verifies the drives are fine by reading the parity and content files.
- One of the following will happen:
  - If parity info is out of sync and the number of deleted or changed files exceeds the threshold you have configured, it stops. You may want to take a look at the output log.
  - If parity info is out of sync and the number of deleted or changed files exceeds the threshold, you can still force a sync after a number of warnings. This is useful if you often get false alarms but are confident enough. It is called "Sync with threshold warnings".
  - Instead of forcing a sync based on the number of deleted files, you may consider the `ADD_DEL_THRESHOLD` feature, which allows a sync that would otherwise violate the delete threshold if the ratio of added to deleted files is greater than the value you set.
  - If parity info is out of sync but the number of deleted or changed files does not exceed the threshold, it executes a sync to update the parity info.
- When the parity info is in sync, either because nothing has changed or after a successful sync, it runs `snapraid scrub` to validate the integrity of the data, both the files and the parity info. If the sync was cancelled or other issues were found, scrub will not be run.
  - Note that each run of the scrub command validates only a (configurable) portion of the parity info, to avoid a long-running job that affects server performance.
  - Scrub frequency can also be customized in case you don't want to scrub every time the script runs.
  - It is still recommended to run scrub frequently.
- Extra information can be added, like SnapRAID's disk health report or the SnapRAID array status.
- When the script is done, it sends an email with the results, in case of error or success, and triggers any 3rd party notifications you have configured.
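To make the decision flow concrete, here is a minimal, hypothetical sketch of the logic in shell. It is not the script's actual code: the threshold values, the counting method and the scrub parameters are illustrative placeholders only.

```bash
#!/bin/bash
# Illustrative sketch of the diff -> threshold -> sync -> scrub flow.
# Not the real snapraid-aio-script implementation; values are placeholders.
DEL_THRESHOLD=50    # hypothetical: max deleted files allowed before sync is blocked
UP_THRESHOLD=500    # hypothetical: max updated files allowed before sync is blocked

diff_output=$(snapraid diff)
deleted=$(printf '%s\n' "$diff_output" | grep -c '^remove ')
updated=$(printf '%s\n' "$diff_output" | grep -c '^update ')

if [ "$deleted" -le "$DEL_THRESHOLD" ] && [ "$updated" -le "$UP_THRESHOLD" ]; then
    snapraid sync                 # update parity, since changes are within thresholds
    snapraid -p 5 -o 10 scrub     # then validate 5% of blocks older than 10 days
else
    echo "Thresholds breached (deleted=$deleted, updated=$updated): sync skipped, check the log."
fi
```

The real script layers many more checks on top of this and uses its own option names; see the configuration file for the actual settings.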
- Docker container management
  - Manage containers before SnapRAID operations and restore them when finished. This avoids nasty errors about data being written during a SnapRAID sync (see the sketch after this feature list).
  - Support for local and remote Docker instances. You can also manage multiple remote Docker instances at once.
    - Note: remote Docker instances require passwordless SSH access.
  - You can choose to either pause or stop your containers.
- Custom Hooks
  - Define shell commands or scripts to run before and after SnapRAID operations.
- Multiple configuration files
  - Use a different configuration file when running the script instead of the default config.
- 3rd Party notification support
  - Healthchecks.io, Telegram and Discord can be used to track script execution time and status, and to promptly alert you about errors.
  - You can also get notified with the SnapRAID SMART log and SnapRAID Status.
  - Notification Hook: if your favourite notification service is not supported by this script, you can use a custom notification command or another mail binary.
  - Important messages are also sent to the system log.
  - Emails are still the best place to get detailed but readable information.
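For the Docker feature mentioned above, the underlying idea can be sketched with the plain Docker CLI as follows. This is only an illustration, not the script's own implementation; the container names and the remote host are made-up examples.

```bash
# Hypothetical example: pause containers around SnapRAID operations.
CONTAINERS="code-server portainer"       # example container names
REMOTE_HOST="user@nas2.local"            # example remote Docker host (passwordless SSH required)

docker pause $CONTAINERS                      # local Docker instance
ssh "$REMOTE_HOST" docker pause $CONTAINERS   # remote Docker instance over SSH

# ... SnapRAID diff/sync/scrub would run here ...

docker unpause $CONTAINERS
ssh "$REMOTE_HOST" docker unpause $CONTAINERS
```

Stopping and restarting (`docker stop` / `docker start`) works the same way when you prefer a full stop over a pause.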
Many options can be changed to your taste; their behavior is documented in the config file. If you don't know what to do, I recommend using the default values and seeing how it performs.
- Sync options
  - Sync always (Forced Sync).
  - Sync after a number of breached threshold warnings.
  - Sync only if threshold warnings are not breached (enabled by default).
  - Sync even if the delete threshold has been breached, provided the ratio of added to deleted files is greater than the value set.
  - User-definable thresholds for deleted and updated files.
- Scrub options
  - Enable or disable the scrub job.
  - Delayed option, disabled by default. Run scrub only after a number of script executions, e.g. every 7 runs. If you don't want to scrub your array every time, this one is for you.
  - Data to be scrubbed - by default 5% of blocks older than 10 days.
  - Scrub new data - scrub the data that was just added by the sync.
- Pre-hashing - enabled by default. Mitigates the lack of ECC memory by reading data twice to avoid silent read errors.
- Force zero size sync - disabled by default. Forces the sync even when files that previously had content now have zero size. Use with caution!
- SnapRAID Status - disabled by default. Shows the status of the array.
  - This info can also be sent to Telegram or Discord.
- SMART Log - enabled by default. A SnapRAID report on disk health status.
  - This info can also be sent to Telegram or Discord.
- Verbosity option - disabled by default. When enabled, includes the output of the TOUCH and DIFF commands. Please note the email will be huge and mostly unreadable.
- SnapRAID output (log) retention - disabled by default (the log is overwritten on every run).
  - Detailed output retention for each run.
  - You can choose the number of days and the path; by default it is set to the user's home.
- Healthchecks.io, Telegram and Discord integration (a sketch of the underlying HTTP calls appears after this list)
  - If you don't read your emails every day, this is a great one for you, since you can be quickly informed if things go wrong.
  - The script will report to Healthchecks.io, Telegram and Discord when it starts and when it completes. If there's a failure, that is reported as well.
  - Healthchecks.io only: if the script ends with a WARNING message, it will report DOWN to Healthchecks.io; if the message is COMPLETED, it will report UP.
  - Healthchecks.io only: this service will also show how much time the script takes to complete.
- Notification Hook
  - Made for external services or mail binaries with different commands than `mailx`.
  - Configure the path of the script or the mail binary to be invoked.
  - You can still use the native services, since it only replaces the standard email.
- Update Check - enabled by default
  - The script checks GitHub for a newer version and alerts you via the configured notification systems.
  - If you don't like this, it can be disabled.
- Docker container management
  - A list of containers you want to be interrupted before running actions and restored when completed.
  - Docker mode - choose to pause/unpause or to stop/restart your containers.
  - Docker remote - if Docker is running on a remote machine.
- Multiple configuration files
  - By default, the script uses the predefined config file `script-config.sh`, which must be placed in the same folder.
  - You can specify another file when running the script, e.g. `snapraid-aio-script.sh /home/alternate_config.sh`.
- Custom Hooks (an illustrative config fragment appears after this list)
  - Commands or scripts to be run before and after SnapRAID operations.
  - Option to display a friendly name in the email output.
- Spindown - spin down disks after the script has completed its operations. Uses a rewritten version of hd-idle.
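For reference, the HTTP calls that the Healthchecks.io and Telegram integrations rely on look roughly like the sketch below. The UUID, bot token and chat ID are placeholders, and the script's own variable names may differ.

```bash
# Placeholders - substitute your own values.
HC_UUID="your-healthchecks-check-uuid"
TG_TOKEN="123456:ABC-your-bot-token"
TG_CHAT_ID="123456789"

# Healthchecks.io: signal start, then success or failure.
curl -fsS -m 10 "https://hc-ping.com/$HC_UUID/start"
curl -fsS -m 10 "https://hc-ping.com/$HC_UUID"        # success -> check shows UP
curl -fsS -m 10 "https://hc-ping.com/$HC_UUID/fail"   # failure -> check shows DOWN

# Telegram: send a short status message through the Bot API.
curl -fsS "https://api.telegram.org/bot$TG_TOKEN/sendMessage" \
     -d chat_id="$TG_CHAT_ID" \
     -d text="SnapRAID job completed"
```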
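The Custom Hooks and Notification Hook options boil down to pointing the script at your own commands. The fragment below is only illustrative; the actual option names and syntax are documented in `script-config.sh` and may differ from these placeholder names.

```bash
# Hypothetical config fragment - check script-config.sh for the real option names.
PRE_HOOK="/usr/local/bin/stop-backup-jobs.sh"    # example: runs before SnapRAID operations
POST_HOOK="/usr/local/bin/start-backup-jobs.sh"  # example: runs after SnapRAID operations

# Notification hook: a custom command invoked instead of mailx,
# receiving the report generated by the script.
NOTIFICATION_HOOK="/usr/local/bin/send-report.sh"
```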
You can also change more advanced options such as the SnapRAID binary location, the log file location and the mail binary. If your mail binary uses different commands than `mailx`, use the Notification Hook feature.
To improve clarity, this script produces emails that don't contain the full list of changed files.
You can re-enable the full output in the email by switching the `VERBOSITY` option. The full report is available in `/tmp/snapRAID.out`, but it will be replaced after each run, or deleted when the system is shut down. You can enable the retention policy to keep logs for some days and customize the folder location.
Here's an example email report.
## [COMPLETED] DIFF + SYNC + SCRUB Jobs (SnapRAID on omv-test.local)

SnapRAID Script Job started [Tue 20 Apr 11:43:37 CEST 2021]
Running SnapRAID version 11.5
SnapRAID AIO Script version X.YZ

----------

## Preprocessing

Healthchecks.io integration is enabled.
Configuration file found.
Checking if all parity and content files are present.
All parity files found.
All content files found.
Docker containers management is enabled.

### Stopping Containers [Tue 20 Apr 11:43:37 CEST 2021]

Stopping Container - Code-server
code-server
Stopping Container - Portainer
portainer

----------

## Processing

### SnapRAID TOUCH [Tue 20 Apr 11:43:37 CEST 2021]

Checking for zero sub-second files.
No zero sub-second timestamp files found.
TOUCH finished [Tue 20 Apr 11:43:38 CEST 2021]

### SnapRAID DIFF [Tue 20 Apr 11:43:38 CEST 2021]

DIFF finished [Tue 20 Apr 11:43:38 CEST 2021]

**SUMMARY of changes - Added [0] - Deleted [0] - Moved [0] - Copied [0] - Updated [1]**

There are no deleted files, that's fine.
There are updated files. The number of updated files (1) is below the threshold of (500).
SYNC is authorized. [Tue 20 Apr 11:43:38 CEST 2021]

### SnapRAID SYNC [Tue 20 Apr 11:43:38 CEST 2021]

    Self test...
    Loading state from /srv/dev-disk-by-label-DISK1/snapraid.content...
    Scanning disk DATA1...
    Scanning disk DATA2...
    Using 0 MiB of memory for the file-system.
    Initializing...
    Hashing...
    SYNC - Everything OK
    Resizing...
    Saving state to /srv/dev-disk-by-label-DISK1/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK2/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK3/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK4/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK1/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK2/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK3/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK4/snapraid.content...
    Verified /srv/dev-disk-by-label-DISK4/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK3/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK2/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK1/snapraid.content in 0 seconds
    Syncing...
    Using 32 MiB of memory for 32 cached blocks.
       DATA1 12% |*******
       DATA2 82% |************************************************
      parity  0% |
    2-parity  0% |
        raid  1% | *
        hash  1% |
       sched 11% |******
        misc  0% |
                 |____________________________________________________________
                    wait time (total, less is better)
    SYNC - Everything OK
    Saving state to /srv/dev-disk-by-label-DISK1/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK2/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK3/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK4/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK1/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK2/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK3/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK4/snapraid.content...
    Verified /srv/dev-disk-by-label-DISK4/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK3/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK2/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK1/snapraid.content in 0 seconds

SYNC finished [Tue 20 Apr 11:43:40 CEST 2021]

### SnapRAID SCRUB [Tue 20 Apr 11:43:40 CEST 2021]

    Self test...
    Loading state from /srv/dev-disk-by-label-DISK1/snapraid.content...
    Using 0 MiB of memory for the file-system.
    Initializing...
    Scrubbing...
    Using 48 MiB of memory for 32 cached blocks.
       DATA1  2% | *
       DATA2 18% |**********
      parity  0% |
    2-parity  0% |
        raid 21% |************
        hash  7% |****
       sched 51% |******************************
        misc  0% |
                 |____________________________________________________________
                    wait time (total, less is better)
    SCRUB - Everything OK
    Saving state to /srv/dev-disk-by-label-DISK1/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK2/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK3/snapraid.content...
    Saving state to /srv/dev-disk-by-label-DISK4/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK1/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK2/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK3/snapraid.content...
    Verifying /srv/dev-disk-by-label-DISK4/snapraid.content...
    Verified /srv/dev-disk-by-label-DISK4/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK3/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK2/snapraid.content in 0 seconds
    Verified /srv/dev-disk-by-label-DISK1/snapraid.content in 0 seconds

SCRUB finished [Tue 20 Apr 11:43:41 CEST 2021]

----------

## Postprocessing

### SnapRAID Smart

    SnapRAID SMART report:
       Temp  Power   Error     FP  Size
          C OnDays   Count         TB  Serial                Device     Disk
          -      -       -    SSD   0.0  00000000000000000001  /dev/sdb   DATA1
          -      -       -    SSD   0.0  01000000000000000001  /dev/sdc   DATA2
          -      -       -      -   0.0  02000000000000000001  /dev/sdd   parity
          -      -       -    SSD   0.0  03000000000000000001  /dev/sde   2-parity
          -      -       -    n/a     -  -                     /dev/sr0   -
          0      -       -      -   0.0  -                     /dev/sda   -

The FP column is the estimated probability (in percentage) that the disk is going to fail in the next year.
Probability that at least one disk is going to fail in the next year is 0%.

## Restarting Containers [Tue 20 Apr 11:43:41 CEST 2021]

Restarting Container - Code-server
code-server
Restarting Container - Portainer
portainer

All jobs ended. [Tue 20 Apr 11:43:41 CEST 2021]
Email address is set. Sending email report to yourmail@example.com [Tue 20 Apr 11:43:41 CEST 2021]
If you are running a Debian-based distro (with the `apt` package manager), the script will automatically install these dependencies for you.
- `python3-markdown` to format emails - will be installed if not found
- `curl` to use Healthchecks - will be installed if not found
- `jq` - used to send Discord notifications; a lightweight and flexible command-line JSON processor
- `bc` - used for floating-point comparisons
Dependencies that require manual installation: hd-idle, which is only needed if you want disk spindown (see below).
- Install the packages listed in the Requirements section if you're not running a distro with the `apt` package manager.
- Download the latest version from Releases.
- Extract the archive wherever you prefer, e.g. `/usr/sbin/snapraid`.
- Give executable rights to the main script: `chmod +x snapraid-aio-script.sh`.
- Open the config file and make changes as required.
  - Every option is documented, but the defaults are pretty reasonable, so don't make changes if you're not sure.
  - When you see `""` or `''` in some options, do not remove these characters; just fill in your data.
- If you want to spin down your disks, you need to install hd-idle.
- Schedule the script execution.
  - I recommend running the script daily.
TIP: To use multiple config files, you can create different schedules. Just append the config file path after the script, like `snapraid-aio-script.sh /home/alternate_config.sh`.
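As an illustration of scheduling (the times and paths below are examples only), entries in `/etc/crontab` could run the default configuration nightly and an alternate configuration weekly:

```bash
# Example /etc/crontab entries (note the user field); adjust paths to where you extracted the script.
# Default configuration, every night at 04:00:
0 4 * * *   root   /usr/sbin/snapraid/snapraid-aio-script.sh
# Alternate configuration, every Sunday at 06:00:
0 6 * * 0   root   /usr/sbin/snapraid/snapraid-aio-script.sh /home/alternate_config.sh
```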
It is tested on OMV6 and OMV7, but it will also work on other distros; in that case you may have to change the mail binary or the SnapRAID location.
OMV7's SnapRAID plugin introduced support for multiple arrays. This means each SnapRAID config file no longer has a predictable name, unlike OMV6 or standard SnapRAID installs. If running on OMV7, the AIO Script will search for a SnapRAID configuration file in the new path `/etc/snapraid/`. If multiple arrays are found, it will inform you so you can adjust your configuration.
If you start with empty disks, you cannot use this script (yet), since it expects SnapRAID files that would not be found.
First run `snapraid sync`. Once completed, the array will be ready to be used with this script.
This script perfectly replaces the OMV built-in script. In the OMV GUI, browse to System > Scheduled Tasks and remove/disable the omv-snapraid-diff job. Also, you can ignore all the settings you find at Services > SnapRAID > Diff Script Settings, since they only apply to the plugin's built-in script.
If you would like to enable automatic disk spindown after the script runs, you will need to install hd-idle. The version included in the default Debian and Ubuntu repositories is buggy and out of date; fortunately, developer adelolmo has improved the project and released an updated version.
NOTE: This script is NOT compatible with the hd-idle version found in the Debian repositories. You must use the updated hd-idle binaries for spindown to work. If you receive an error such as `hd-idle cannot spindown scsi disk /dev//dev/sda:`, that is a sign that you are using the old/buggy version. Follow the instructions below to update.
- Remove any previously existing versions of hd-idle, either by manually removing the binaries or by running `apt remove hd-idle` to remove the version from the default repositories.
- For all recent Ubuntu and Debian releases, install the developer's repository using the instructions on the developer's website. The command snippet below selects the correct repository based on your current release and adds it to your apt sources.
  - `sudo apt-get install apt-transport-https`
  - `wget -O - http://adelolmo.github.io/andoni.delolmo@gmail.com.gpg.key | sudo apt-key add -`
  - `echo "deb http://adelolmo.github.io/$(lsb_release -cs) $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/adelolmo.github.io.list`
- Run `apt update`, then `apt install hd-idle` to install the updated version. You do not need to specify the repository; apt will automatically install the newest version from the new repository. (An optional check to confirm the package source follows this list.)
- In your `script-config.sh` file, change `SPINDOWN=0` to `SPINDOWN=1` to enable spindown.
- If you wish to use hd-idle as a service to manage your disks outside the scope of the SnapRAID AIO Script, refer to the additional instructions on the OpenMediaVault forum.
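As an optional sanity check (not part of the official instructions), you can ask apt which repository provides the hd-idle candidate package after adding the new source:

```bash
# Show the installed and candidate versions of hd-idle and where they come from.
apt policy hd-idle
# The candidate version should be served from adelolmo.github.io,
# not from the stock Debian/Ubuntu repositories.
```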
If you are upgrading from a previous version of the script, do not reuse your old config file. Please move your preferences to the new `script-config.sh` found in the archive.
- You tell me!
All rights belong to the respective creators. This script would not exist without the authors of the scripts that inspired it.