A tutorial on getting Proxmox VE to run on an MDRAID array.
This configuration is a bit of a challenge in that MDRAID is not officially supported by Proxmox, and therefore is not an option in the installer. That said, I’ve never had an issue running this way.
Note: As of Proxmox VE version 6 there is an option to install the OS on a ZFS RAID array, and that is the recommended method for version 6 and higher. I learned about it shortly after this article was written, but I still feel this article has value for certain users depending on the pros and cons of ZFS; I found this article by Louwrentius to provide a nice overview of the main differences. If you’d rather go that route, learn how to set up Proxmox on a ZFS RAID.
This guide is for a UEFI installation. If you are not running UEFI there will be 2 partitions on each disk instead of 3, so you’ll need to accommodate for that in the commands.
Step 1 - Install Proxmox
Run through the installer on one hard disk; we will migrate it to a new RAID array after the installation.
Let’s get the OS up to date, install the mdadm package, and reboot because there was probably a kernel update in there.
apt-get update
apt-get dist-upgrade -y
apt-get install -y mdadm
shutdown -r now
Step 2 - Prepare the second disk
Clone the partition map from the disk that has the OS installed on it (sda) to the blank disk (sdb).
Double check that sda contains your OS installation and sdb is your empty drive before moving forward; if that’s not the case you will need to adjust these commands accordingly.
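If you’re not sure which disk is which, lsblk shows each disk’s size, partitions, and mount points; the disk with mounted partitions is the one holding the OS.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT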
sfdisk -d /dev/sda | sfdisk -f /dev/sdb
Now change the type of partition 3 on the new disk to Linux RAID, using fdisk.
fdisk /dev/sdb
Once here you’ll want to type the following commands in order:
t (Change type)
3 (for partition 3)
29 (the code for Linux RAID, you can type L to list them all)
w (write changes to disk)
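If you’d rather not step through fdisk interactively, sfdisk should be able to set the type in a single command; the GUID below is the standard GPT type for Linux RAID, but double check it against the list fdisk shows with L before relying on it.
sfdisk --part-type /dev/sdb 3 A19D880F-05FC-4D3B-A006-743F0F84911E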
Have the OS kernel rescan the disk for the changes we just made.
Note: partprobe comes with the parted program, so you’ll need to install that with the command apt -y install parted
partprobe /dev/sdb
Step 3 - Prepare the array on the second disk
Use mdadm to create an array on the new disk; we will specify missing so that we can build the array on one disk for now.
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb3
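You can sanity check the new array right away; expect to see md0 active with [2/1] in the status line and an underscore in place of one of the members, since only one of the two disks is present at this stage.
cat /proc/mdstat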
Create an LVM physical volume on the new RAID array.
pvcreate /dev/md0
Step 4 - Migrate the OS to the second disk
The good news is we can do this online without using a boot disk or anything; the bad news is it takes a while!
Extend the volume group pve (on the original install disk) to /dev/md0 (our new RAID array).
vgextend pve /dev/md0
Now we can move the data to the RAID array on the second disk.
pvmove /dev/sda3 /dev/md0
Feel free to go grab a coffee now and check back later
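pvmove prints its progress as it runs; if you want to check from another shell, pvs can show how much data is allocated on each physical volume so you can watch it shift from sda3 to md0.
pvs -o+pv_used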
Step 5 - Add the original OS disk to the array
Now that the OS data has been moved to the second disk, we can add the original disk to the array.
First remove the original disk’s partition from the volume group and wipe its LVM physical volume label.
vgreduce pve /dev/sda3
pvremove /dev/sda3
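Before repartitioning sda it’s worth confirming that the pve volume group now lives entirely on /dev/md0 and that /dev/sda3 no longer shows up as a physical volume.
pvs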
Add the current RAID configuration to the mdadm.conf file so the system knows where to find the OS on boot up.
mdadm -Ds >> /etc/mdadm/mdadm.conf
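It’s worth opening the file afterwards to confirm a single ARRAY line for /dev/md0 (with your array’s UUID) was appended to the end.
cat /etc/mdadm/mdadm.conf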
Change the type of partition 3 on sda just as we did before with sdb.
fdisk /dev/sda
Once here you’ll want to type the following commands in order:
t (Change type)
3 (for partition 3)
29 (the code for Linux RAID, you can type L to list them all)
w (write changes to disk)
Have the OS kernel rescan the disk for the changes we just made.
partprobe /dev/sda
Now we can add the first disk to that array we built on the second disk.
mdadm --add /dev/md0 /dev/sda3
You don’t need to wait for the array to sync before moving forward; mdadm will keep track of the sync status and resume it after a reboot.
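If you do want to keep an eye on the rebuild, this refreshes the sync progress every couple of seconds (Ctrl+C to exit).
watch -n 2 cat /proc/mdstat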
Step 6 - Fix the bootloader
Now we need to edit a few configuration files to add MDRAID support at boot time. Add the following lines to the bottom of their respective files.
/etc/default/grub
GRUB_DISABLE_LINUX_UUID=true
GRUB_PRELOAD_MODULES="raid dmraid"
/etc/modules
raid1
/etc/initramfs-tools/modules
raid1
Now install the grub bootloader to BOTH drives for redundancy.
If you are on a UEFI system, run the command mount /dev/sda2 /boot/efi before installing grub on sda. Then run umount /boot/efi, dd if=/dev/sda2 of=/dev/sdb2 status=progress, and mount /dev/sdb2 /boot/efi before installing grub on sdb.
grub-install --recheck /dev/sda
grub-install --recheck /dev/sdb
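On a UEFI system, putting that note together with the two grub-install commands, the full ordering looks like this:
mount /dev/sda2 /boot/efi
grub-install --recheck /dev/sda
umount /boot/efi
dd if=/dev/sda2 of=/dev/sdb2 status=progress
mount /dev/sdb2 /boot/efi
grub-install --recheck /dev/sdb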
dpkg-reconfigure grub-pc
Accept the default values through the prompts, choose keep the local version currently installed when asked about /etc/default/grub, and add sdb to the list of install devices using the arrow keys to scroll down to it and the space bar to check the box. You will see a bunch of scary looking errors like leaked on invocation and modules may be missing from the core image; these are safe to ignore. They are occurring because the system is in an abnormal state right now: it was booted from a single disk but is configured to boot from a RAID array.
Update the bootloader configurations to include the changes we made above for MDRAID support.
update-grub
update-initramfs -u
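One way to spot check that the raid1 module actually made it into the new initramfs (the image name here assumes the currently running kernel):
lsinitramfs /boot/initrd.img-$(uname -r) | grep raid1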
Now we need to make sure the IDs of the boot partition and the EFI partition match across the two drives; the easiest way is to just clone those partitions from the second drive back to the first.
dd if=/dev/sdb1 of=/dev/sda1 status=progress
dd if=/dev/sdb2 of=/dev/sda2 status=progress
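If you want to confirm the clone worked, blkid should now report the same UUID for the EFI partitions on both drives.
blkid /dev/sda2 /dev/sdb2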
Step 7 - Reboot
shutdown -r now
That’s it, you now have Proxmox running on a RAID array. You can check on the status of your array with the command below.
cat /proc/mdstat