Reshaping a Linux SW RAID 5 to RAID 0

Yesterday I finally converted my RAID 5 array of four WD Red 3TB disks to a simple stripe, aka RAID 0. Because, naturally, I have nightly onsite full backups and offsite partial backups of the irreplaceable data. And I did not want to give away 3TB of capacity just for availability anymore. I’m talking about a home environment here: a NAS driven by a Raspberry Pi CM4 with a PCIe SATA controller.

I’m a big fan of Linux Software RAID. It has always worked wonderfully for my needs, and with mdadm it has a really nice user interface. So I expected the conversion (“reshaping”) of my RAID 5 array into a RAID 0 to be simple. And I was not disappointed! It’s really just a matter of a single mdadm call. Well, admittedly, done twice. But that’s it. Yes, OK, you really might want to grow your filesystem afterwards, but that’s out of scope when talking about the RAID structure.

So, let’s assume we have a RAID 5 array at /dev/md0, which reports the following state in /proc/mdstat:

md0 : active raid5 sdc1[3] sdb1[6] sdd1[4] sda1[5]
      8788827648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>
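
If you want to double-check the starting point on your own machine, the usual suspects will show it (the /dev/md0 device name is simply what my array happens to be called):

cat /proc/mdstat                  # the summary quoted above
sudo mdadm --detail /dev/md0      # level, chunk size, and per-disk state in long form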

Then all you need is the following call to reshape your array into a RAID 0 structure:

sudo mdadm --grow /dev/md0 --backup-file=reshape5to0 --level=0 --raid-devices=4

The mdadm tool will back up some critical data, and then the kernel tells you:

md: reshape of RAID array md0
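
If you are not staring at a console, the same message ends up in the kernel log. Something along these lines should dig it out (the grep pattern is just a simple filter I would use, adjust to taste):

sudo dmesg | grep md0             # kernel ring buffer
sudo journalctl -k | grep md0     # same thing via systemd, if your distro uses it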

Monitoring the process reveals what the SW RAID implementation is really doing:

md0 : active raid5 sdb1[6] sda1[5] sdc1[3] sdd1[4]
      8788827648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      [>....................]  reshape =  0.9% (27384308/2929609216) finish=1760.9min speed=27468K/sec
      
unused devices: <none>
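
If you don’t feel like re-running cat by hand, a simple watch loop does the job; the 30-second interval here is an arbitrary choice of mine:

watch -n 30 cat /proc/mdstat                      # refresh the progress line periodically
sudo mdadm --detail /dev/md0 | grep -i reshape    # or ask mdadm for its reshape status line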

And if you think about it, yes, that’s exactly what you’d expect: it’s converting the array into a five-disk RAID 5 array in a degraded state. That is, effectively a four-disk stripe plus a missing parity disk.

Once the reshape completes (in my case the ~1800-minute estimate was pretty accurate), you end up with:

md0 : active raid5 sdb1[6] sda1[5] sdc1[3] sdd1[4]
      11718436864 blocks super 1.2 level 5, 512k chunk, algorithm 5 [5/4] [UUUU_]
      
unused devices: <none>
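
mdadm --detail tells the same story a bit more verbosely; at this point I’d expect the state line to include “degraded” (the exact wording may differ between mdadm versions):

sudo mdadm --detail /dev/md0 | grep -i state      # should mention "degraded" while the array is still RAID 5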

The underscore in the disk states is there to make you feel uncomfortable. 😉 Because it’s still a RAID 5, and it’s still in a degraded state. And “degraded” sounds a bit worrying. So to feel better (at least from a purely emotional standpoint; after all, a four-disk stripe is nothing you should ever rely on), just run the same command a second time, with a fresh backup file, for the actual conversion:

sudo mdadm --grow /dev/md0 --backup-file=convert5to0 --level=0 --raid-devices=4

And now you instantaneously have your RAID 0 stripe:

md0 : active raid0 sdb1[6] sda1[5] sdc1[3] sdd1[4]
      11718436864 blocks super 1.2 512k chunks
      
unused devices: <none>
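
To double-check the level and the new raw capacity, either of these will do (your numbers will obviously differ):

sudo mdadm --detail /dev/md0 | grep -iE 'raid level|array size'   # should now report raid0 and the full four-disk size
lsblk /dev/md0                                                    # quick size check of the block device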

Technically, of course, it’s just as unsafe and concerning as a RAID 5 with a missing parity disk. But, hey, we do have automated regular full backups, don’t we?

And that’s it for the reshape. Of course, your filesystem still has its old size. But, assuming you have an ext2/3/4 filesystem on your md block device, you can simply check and then grow your filesystem with:

sudo e2fsck -f /dev/md0
sudo resize2fs /dev/md0
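
For completeness, here is how the whole filesystem step looks in context: e2fsck -f really wants the filesystem unmounted, and resize2fs without a size argument grows it to fill the device. The mount point /mnt/data below is a made-up placeholder for this sketch, and the remount assumes a matching fstab entry:

sudo umount /mnt/data        # placeholder mount point, substitute your own
sudo e2fsck -f /dev/md0      # check the (now unmounted) filesystem
sudo resize2fs /dev/md0      # grow it to fill the enlarged array
sudo mount /mnt/data         # remount (assumes an fstab entry)
df -h /mnt/data              # confirm the new size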

Again, thanks to the Linux SW RAID developers for providing such a great system!