Hodgepodge 3xNAS Part 9 Making RAID 5

Published on May 25, 2023 at 2:37 pm by LEW

Introduction

In this post we will set up a RAID 5 storage array for our server. RAID 5 uses block-level striping with distributed parity for fault tolerance, and requires at least three drives.

Setting up a RAID 5 array is, in some ways, easier to understand than setting up a union file system, as it does not involve mounting the component drives individually.

RAID 5 has its own share of advantages and disadvantages. And while it has fault tolerance, it should in no way be construed to replace regular backups.

It should also be noted that setting up other versions of RAID will follow a similar procedure. The mdadm command used here can also set up RAID 0, RAID 1, RAID 4, RAID 6, and RAID 10. In addition there are several non-RAID configurations: Linear, Multipath, Faulty, and Container.
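For example, a two-drive RAID 1 mirror could be created with essentially the same command covered later in this post, only with a different level and device count (the drive names here are just placeholders).

mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc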

Note that this is a command line install, not a GUI install! Also, you will need sudo/root access for some of the commands used in this post.

RAID 5 Warnings

The following is presented in the interest of full disclosure. It is not meant as a recommendation. Rather, all drive configurations have their issues and problems that one should be aware of.

Some manufacturers have strongly suggested not using RAID 5 for any critical data needs. From my observation this is largely a result of the way RAID 5 is typically implemented, specifically using drives of similar size, type, and manufacturing date. This means, assuming even usage, all the drives will reach End Of Life (EOL) around the same time. And when a drive fails, rebuilding the array with a new drive puts a lot of stress on the remaining drives, increasing the likelihood of a second drive failure destroying all data in the array.

While the likelihood of the above scenario is small, it can be further reduced by using similarly sized drives from different manufacturers and production dates. For consumer use, with regular backups, this is unlikely to be a problem.

Similarly sized drives are generally used in RAID 5 arrays. This is not an actual requirement. However, since RAID 5 stripes data across all drives, the usable space on each drive is limited by the smallest drive, meaning that the extra space on larger drives will go unused.

RAID 5 is not recommended for Solid State Drives (SSD). This is due to the way data and parity are written, which causes a larger number of write cycles and wears out the SSDs faster.

Setup

If you are still reading, then I have not scared you off from trying a RAID 5 setup. So let's dive in.

I am assuming you have a computer set up with the requisite number of drives: specifically, a drive for your Operating System (OS), which can be of any type, and three or more additional drives of similar size and performance. The drives do not have to be identical or even from the same manufacturer, but their characteristics should be reasonably similar.

The actual available storage of a RAID 5 array can be calculated by multiplying the size of the smallest drive by the number of drives minus one. For example, assume a three-drive RAID 5 array with a 256 GB drive, a 300 GB drive, and a 512 GB drive. The usable storage would be 256 GB (smallest drive) times 2 (three drives minus one), or 512 GB.

As with posts in this series, we are assuming a starting point of a base Debian 12 install. Please review previous posts for information on related subjects like remote access and virtualization.

Procedure

RAID software installed: Make sure the mdadm program has been installed, and confirm by checking the version number.

apt update && apt upgrade
apt install mdadm
mdadm -V

Verify disks are detected: There are several ways to do this. Using the lsblk command is probably the easiest way to view a list of all detected drives. If they are standard mechanical hard drives on a SATA bus, they will be listed as sda, sdb, sdc, etc. If they are M.2 NVMe drives they may be listed as nvme0n1, nvme1n1, nvme2n1, etc.
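For example, running lsblk with no options lists all block devices along with their sizes and any existing partitions.

lsblk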

Using fdisk -l will also show detected drives with all sorts of additional information.

An alternative way to list drives is to list the contents of the /dev directory and then filter with grep. For example (assuming sd type designations).

ls /dev | grep sd

Create RAID 5 Array: Once we have validated that the disks are connected, and we know their designations, we can create the RAID 5 array. For our example we will assume our main drive is sda, and our three additional drives are sdb, sdc, and sdd.

mdadm --create --verbose /dev/md0 \
      --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

The mdadm command format is “mdadm <mode> raid-device <options> components”. In our command the mode is create, with the verbose option added to show more detail. The RAID device is /dev/md0. The backslash splits our command across two lines (it would normally be just one line). Our options set the level to 5 (RAID 5) and the number of raid devices to 3, followed by those devices.

Check Status of RAID: We do not want to do anything with the RAID array until it is fully created. To check the status we need to review the contents of the /proc/mdstat file.

cat /proc/mdstat

This command should output four to five lines. If there is a progress bar at the bottom, the array is not complete yet. The second line should give our array name (md0) and the components (sdb, sdc, and sdd). You will want to run this command periodically until the progress bar is no longer displayed before moving on.

You should also be able to find your RAID 5 device in the /dev folder (/dev/md0 in this case).
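Optionally, mdadm itself can report detailed status for the array, including its state, size, and member drives.

mdadm --detail /dev/md0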

Format RAID 5 Array: For this example, I want to use the mkfs command to format the array with an ext4 file system. Of course you are free to use whatever file system you want, as there is no requirement for a specific file system type. Some file systems, by their nature, are not good choices (xfs, for example).

mkfs.ext4 /dev/md0

Create Mount Point: We need a place to mount our RAID 5 array. Where the file system is mounted is totally at your discretion. The only requirement is that the directory exists. I will be using a folder I create in the /srv directory. I suggest keeping the name simple, avoiding spaces or characters that require escaping.

mkdir /srv/raid5

Mount RAID 5 Array: We can now mount the array and test it.

mount /dev/md0 /srv/raid5

At this point you can check with the df command. Using the -h option should show your array as an active part of the file system, along with the correct size. You can also create a few files and folders to make sure it is writable.

df -h
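To test writing to the array, you can create a few files and folders in it (the names below are just placeholders).

mkdir /srv/raid5/testdir
touch /srv/raid5/testdir/testfile.txt
ls -l /srv/raid5/testdir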

Mount the RAID 5 Array at boot: This involves three steps. We need to update the mdadm config file, update the fstab file, and update the initramfs (this last step may or may not be necessary depending on usage, but it will make the array available early in the boot sequence).

To update the /etc/mdadm/mdadm.conf file, we need to append the output of an mdadm scan to this file (appending with >> preserves the default settings already in the file).

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
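You can confirm that an ARRAY line for /dev/md0 was added by viewing the file.

cat /etc/mdadm/mdadm.conf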

Next we need to open the /etc/fstab file in our favorite text editor and add a line. The line should contain space-separated values for the device we created, the mount point, the file system, some options, 0 for the dump frequency, and 0 for the fsck pass order.

/dev/md0  /srv/raid5  ext4  defaults,nofail  0  0

Note that the defaults option in fstab means rw,suid,dev,exec,auto,nouser,async, and the nofail option allows the boot to continue even if the array is not available. Depending on usage you may want to be a bit more specific in this field.
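To confirm the new fstab entry works without rebooting, you can unmount the array and then remount everything listed in fstab.

umount /srv/raid5
mount -a
df -h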

Update initramfs: initramfs is an initial root file system image used when booting the kernel. To update the image and give it early awareness of the RAID 5 array, use the following command (the -u option updates the existing image).

update-initramfs -u
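After a reboot, you can confirm that the array was assembled and mounted automatically (assuming the setup above).

cat /proc/mdstat
df -h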

Conclusion

In this post we walked through the steps for manually creating a RAID 5 device out of multiple similar hard drives, then mounting it on our system and making it available at boot. Working through this manually gives us a good understanding of the process.

In the next post I will attempt to do the same thing through the Cockpit web GUI (see this post).

Hodgepodge 3xNAS Part 1 Project Overview

Hodgepodge 3xNAS Part 2 Software Choices

Hodgepodge 3xNAS Part 3 Virtual Install

Hodgepodge 3xNAS Part 4 Initial Configuration

Hodgepodge 3xNAS Part 5 Need a GUI?

Hodgepodge 3xNAS Part 6 Add a Storage Drive

Hodgepodge 3xNAS Part 7 SMB/CIFS

Hodgepodge 3xNAS Part 8 Expanded Storage

Hodgepodge 3xNAS Part 9 Making RAID

Hodgepodge 3xNAS Part 10 Cockpit Web GUI RAID 5

Hodgepodge 3xNAS Part 11 Mergerfs

Hodgepodge 3xNAS Part 12 Snapraid

Hodgepodge 3xNAS Part 13 LVM

Hodgepodge 3xNAS Part 14 The Server Hardware

Hodgepodge 3xNAS Part 15 The Server Operating System

Hodgepodge 3xNAS Part 16 Cockpit Install

Hodgepodge 3xNAS Part 17 SAMBA Setup

Hodgepodge 3xNAS Part 18 PLEX vs Kodi
