Hodgepodge 3xNAS Part 12 Snapraid

Published on June 4, 2023 at 5:31 am by LEW

Introduction

In this post we will take a look at the snapraid application. This program provides fault tolerance through a form of stand-alone parity.

The snapraid website describes it as a backup program for disk arrays. Not to disparage their description, but it behaves more like RAID 5 parity. While you can recover from a variety of failures, it is not a replacement for an actual backup. The number of disk failures you can recover from depends on the number of parity disks in use (up to six).
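For reference, each additional parity level gets its own drive and its own line in the configuration file (covered below). A minimal sketch, assuming a hypothetical second parity drive mounted at /srv/parity2:

parity /srv/parity1/snapraid.parity
2-parity /srv/parity2/snapraid.2-parity

Directives 3-parity through 6-parity follow the same pattern.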

The snapraid program, by itself, is not automated. Unlike RAID arrays, parity data is not written immediately. Running snapraid on a regular basis is something that must be set up within your operating system.

Installation of snapraid

In our last post we installed both mergerfs and snapraid, and we will be continuing with the example from that post, so please review it before proceeding. If snapraid is not already installed, use the following command.

apt install snapraid

Setting up snapraid

There are three things we need to do to start using snapraid: set up a parity drive, create folders for the content files, and create a configuration file.

Parity Drive: The rule of thumb for a parity drive is that it must be at least as big as the biggest drive in the array we are using. Continuing with our previous example, we have two drives in our mergerfs array: sda3 at ~9 GB and sdb1 at ~20 GB. So our parity drive must be at least 20 GB in size.

Once added, the drive needs to be partitioned and formatted. Per the previous post I used cfdisk and mkfs to partition and format the drive.
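Assuming the new drive shows up as /dev/sdc (the device name on your system may differ), the commands look something like this:

cfdisk /dev/sdc
mkfs.ext4 /dev/sdc1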

We then create a directory for our mount point at /srv/parity1.

mkdir /srv/parity1

To activate the drive at boot, we add a line to /etc/fstab to mount it automatically. For example:

UUID=3ca405ab-10fe-4f29-a2a8-cb59d132e5f1 /srv/parity1 ext4 defaults 0 2
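Your drive's UUID will differ; you can look it up with blkid and then test the new fstab entry with mount -a. Again, /dev/sdc1 is just the device name from our assumed example.

blkid /dev/sdc1
mount -a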

Content Files: In addition to the parity drive, we will be creating multiple copies of the snapraid.content file. This file contains a list of all the files in the drive array, along with checksums and other relevant information. We will create two copies: one at /var/snapraid, and one at /home/snapraid.

mkdir /var/snapraid
mkdir /home/snapraid

Configuration file: The snapraid configuration file lives at /etc/snapraid.conf, and it is what ties everything together. For the example from the last post that we are continuing, this is what my configuration file looks like.

parity /srv/parity1/snapraid.parity

content /var/snapraid/snapraid.content
content /home/snapraid/snapraid.content

data Disk1 /srv/disk1/
data Disk2 /srv/disk2/

nohidden
exclude *.unrecoverable
exclude /tmp/
exclude /lost+found/

The first line identifies the parity file location and name. Since we are saving the parity information to the parity drive we set up, this line points to that drive's mount point plus the file name.

The next two lines specify multiple locations for the content file. One on the host system and one on the array itself.

Then we add our data disk mount points. Since there are two of them, we have two entries.

The last group of lines tells snapraid which files not to calculate parity for. These are mostly hidden and temporary files that there is no need to waste resources protecting.

Using snapraid

Once everything is set up, we can run snapraid for the first time with the sync option.

snapraid sync

Depending on which user you run the command as, you may get permission errors. Go through the various drives to determine who owns them and what access permissions they have, make any needed modifications, and run the sync again.
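For example, to check ownership and permissions on the mount points from our example, and then hand a drive over to your account (youruser and yourgroup are placeholders; substitute your own):

ls -ld /srv/parity1 /srv/disk1 /srv/disk2
chown -R youruser:yourgroup /srv/parity1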

Other Options

Scrub: We can use the scrub option to check the validity of our data. By default it checks a small percentage of the array each time it is run, and there are options available to adjust how much of the array, and which files, get scrubbed.

snapraid scrub
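Per the snapraid manual, the -p (--plan) option sets the percentage of blocks to check and -o (--older-than) sets the minimum age, in days, of the blocks to check. For example, the following scrubs the entire array regardless of age:

snapraid -p 100 -o 0 scrub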

Smart: This option requires that smartctl be installed. In Debian this can be done with the following command.

apt install smartmontools

Running the command will display a SMART report for your disk array, which among other things includes an estimate of each drive's probability of failure.

snapraid smart

Status: This option prints out the status of the drive array.

snapraid status

Please refer to the documentation for a complete list of options.

Conclusion

With snapraid up and running, we can replace a failed drive within our array and recover its data using the parity and content files. However, we must arrange to run snapraid sync regularly, either manually every now and then or automated with a cron job at specific intervals, to keep the parity current and protect ourselves from drive failure.
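As a minimal sketch of such a cron job, assuming snapraid lives at /usr/bin/snapraid and logging to a hypothetical /var/log/snapraid.log, a root crontab entry to sync every night at 3 am would look something like this:

0 3 * * * /usr/bin/snapraid sync >> /var/log/snapraid.log 2>&1

Recovery itself uses the fix option; after replacing Disk1 from our configuration, something like snapraid -d Disk1 fix rebuilds its contents from parity (see the snapraid manual for the full recovery procedure).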

Before moving on to actual hardware, we need to take a look at one last option, Logical Volume Manager (LVM), which will be the subject of the next post.

Hodgepodge 3xNAS Part 1 Project Overview

Hodgepodge 3xNAS Part 2 Software Choices

Hodgepodge 3xNAS Part 3 Virtual Install

Hodgepodge 3xNAS Part 4 Initial Configuration

Hodgepodge 3xNAS Part 5 Need a GUI?

Hodgepodge 3xNAS Part 6 Add a Storage Drive

Hodgepodge 3xNAS Part 7 SMB/CIFS

Hodgepodge 3xNAS Part 8 Expanded Storage

Hodgepodge 3xNAS Part 9 Making RAID

Hodgepodge 3xNAS Part 10 Cockpit Web GUI RAID 5

Hodgepodge 3xNAS Part 11 Mergerfs

Hodgepodge 3xNAS Part 12 Snapraid

Hodgepodge 3xNAS Part 13 LVM

Hodgepodge 3xNAS Part 14 The Server Hardware

Hodgepodge 3xNAS Part 15 The Server Operating System

Hodgepodge 3xNAS Part 16 Cockpit Install

Hodgepodge 3xNAS Part 17 SAMBA Setup

Hodgepodge 3xNAS Part 18 PLEX vs Kodi
