
Managing Proxmox VE - How to Home Lab Part 2




Learn about Proxmox VE software updates, version upgrades, VM backup/restore, and automated rolling VM backups.


Due to the overwhelming excitement I witnessed over the first segment of this series, I’ve decided to make it ongoing instead of only 6 parts; that’s just not enough to cover the journey we are on together! I’d like to thank everyone who provided feedback and constructive criticism that helped me improve the inaugural edition of the series in several ways. Please keep it up!

I decided to squeeze in this article on maintaining Proxmox before moving on with the next part in the series; I feel it’s an important foundation to lay down for someone new to all of this. We will cover software updates, PVE version upgrades, VM backup and restore, and scheduling backups, including how to enable automatic removal of the oldest scheduled backup.

Software Updates

Since we haven’t configured Email alerts yet, you’ll need to check for new updates regularly; I suggest once per week at minimum.

From the Proxmox web interface, click on the name of your server, then click the updates tab, then the refresh button to check for available software updates.

Check for Updates

A window will pop up showing the progress; it should end with TASK OK. Go ahead and close that box.

Task OK

Next, click the upgrade button.

Start Upgrade

A new browser window will open, detailing which updates are going to be installed and prompting you to continue; you can just press enter here to kick things off.

Pending Updates

When it has finished, you will be returned to the bash shell prompt, and it’s safe to close this window out after you check to see if a reboot is required; if it is, a message will be displayed at the end of the update, as in the screenshot below.

Reboot Required

If a reboot is required, you can use the reboot button to do so; any running VMs should automatically be cleanly shut down before the host reboots.

Reboot Host

You should also keep up on software updates for each of your VMs. The commands differ for each OS, but on Ubuntu you should run apt update, followed by apt dist-upgrade -y, to fetch and install all available updates.
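On an Ubuntu or Debian VM, the whole routine might look like this (a minimal sketch; the reboot-required flag file is a Debian/Ubuntu convention):

```shell
# Run inside the VM, as root (or prefix each command with sudo)
apt update            # refresh the package lists
apt dist-upgrade -y   # install all available updates

# Debian-based systems create this flag file when a reboot is needed
if [ -f /var/run/reboot-required ]; then
    echo "Reboot required"
fi
```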

PVE Version Upgrades

These will likely have new quirks and challenges from version to version. I will run through the upgrade from PVE 5 to 6 with you here; it was pretty simple since mine was a basic single-host installation.

I performed this upgrade after reading through and following along with the official Proxmox 5 to 6 upgrade instructions, which I recommend you have a look at; there are all manner of special considerations depending on your system configuration.

If you haven’t done so recently, install all of the software updates as above for version 5 before proceeding.

From the web interface, click on the name of your server, then click the shell button.

Open a Shell

Run the command pve5to6, a built-in script that checks your system’s upgrade readiness.

Check Upgrade Readiness

If the results show any warnings or errors, you should look into those in the official upgrade guide linked above; any skipped checks are fine, they just aren’t relevant to your installation.

Next, run the command below. sed is a file manipulation tool; we are using it here to find the text stretch and replace it with buster in the file /etc/apt/sources.list. Stretch and Buster are release names for the Debian OS, which Proxmox is based upon, and the file /etc/apt/sources.list is where the package manager apt looks to see where to pull software updates from. This change means that the next time we install software updates, we will also be upgrading the OS version from Stretch to Buster.

sed -i -e 's/stretch/buster/g' /etc/apt/sources.list

Now we will make the same change in the file /etc/apt/sources.list.d/pve-community.list which is specifically for Proxmox software updates.

sed -i -e 's/stretch/buster/g' /etc/apt/sources.list.d/pve-community.list
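Before moving on, it’s worth double-checking that both files now reference buster. A quick grep (a sketch using the two files edited above) will show the active repository lines:

```shell
# Every enabled "deb" line should now mention buster, and none should mention stretch
grep -h '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/pve-community.list
```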

I highly recommend logging into the server itself (not the web interface) for the remaining steps. I did them from the web interface and it turned out fine, but if you click any of the menu items and leave the shell screen, it will end the session in the middle of an OS upgrade, which can cause you some problems.

Run the command apt update to fetch all the software updates for the new version, and then the command apt dist-upgrade -y to install them.

You will be presented with a notice that you are doing a version upgrade; go ahead and press enter to continue, then you’ll be asked to select your keyboard layout.

The following steps may differ from system to system depending on your configuration, you will need to use your own best judgment for some of these, but the steps from my upgrade below should give you a good idea of how to handle things.

You will likely run into several instances of the message “Configuration file X Modified … package distributor has shipped an updated version.”

You’ll want to choose the option “Show the differences between the versions”.

In cases where there are specific changes to a file that need to be made, but also some modifications you’d like to keep, you can choose the option “start a shell to examine the situation”. From that shell, navigate to the file’s path (/etc in this case) and make a copy of your current file (cp /etc/issue /etc/issue.custom), then type exit and hit enter to return to the upgrade screen and choose the option “install the package maintainer’s version”. After the upgrade, you can use the nano command to edit the new file (nano /etc/issue), use the old file as a reference for your custom settings (cat /etc/issue.custom), and remove the old file when you are finished (rm /etc/issue.custom).
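Put together, the sequence from the examine-shell onward looks like this (a sketch using my /etc/issue example; adapt the filename to whichever file the prompt names):

```shell
# Inside the "examine the situation" shell:
cp /etc/issue /etc/issue.custom   # save a copy of your current file
exit                              # return to the upgrade prompt, then pick
                                  # "install the package maintainer's version"

# After the upgrade finishes:
nano /etc/issue                   # edit the new file
cat /etc/issue.custom             # reference your old custom settings
rm /etc/issue.custom              # clean up when finished
```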

Configuration File /etc/issue

As you can see in the screenshot below, this change will wipe out the default welcome message displayed on the server after boot and replace it with “Debian GNU/Linux 10” (the legend in the top two lines tells you that lines starting with - exist only in the file /etc/issue, and lines starting with + exist only in the file /etc/issue.dpkg-new, which is the version shipped with the upgraded package). The file /etc/issue is generally only used for keeping the OS version information in an easy-to-find place, but Proxmox used it to store the default welcome message.

Press the q key to close this view.

Changes Overview

I already know how to access the web interface and don’t mind losing the default welcome message, so I pressed the y key to install the new file from the package maintainer. In this case, keeping the old version probably wouldn’t hurt anything, so you may choose to keep it if you wish.

The next notice I was shown (below) asks whether I want to allow services to be automatically restarted when software updates have been installed. It is up to you to answer this question: by allowing services to restart, you risk crashing a service in some rare cases, and that service will remain down until you notice and address the issue; by choosing no, you must manually restart the updated services or dependent services after updates. Since I always test functionality after updates have been installed, I chose yes here. (You can run the command systemctl status to get a general overview of how the system is running; specifically, check the failed units line, which should have a value of 0.)

Automatically Restart Services

Another configuration file discrepancy, this time for the file /etc/ssh/sshd_config. This file decides the behavior of the SSH daemon, which allows remote access to the server’s command-line shell (SSH = Secure SHell). Looking at the differences in the screenshot below, all that’s changed is the version number, and a couple of options have been removed; these options are likely no longer available in the new version (the technical term for this is deprecated), so I’ll install the new version of this file from the package maintainer.

Changes to /etc/ssh/sshd_config

The last file discrepancy I faced was an easy one to solve: the configuration file /etc/apt/sources.list.d/pve-enterprise.list. I got rid of this file earlier in favor of the community edition’s pve-community.list, so I chose the option “keep your currently-installed version”; since my currently-installed version is no file at all, this keeps it off the system, which is exactly what I want.

That wasn’t so bad, right? All that’s left to do is reboot the system when the upgrade is finished.

Backup and Restore

Snapshots vs. Backups

It is important that you understand the difference between the two, and which method is appropriate for the result you want to achieve. The following explanations are provided in the context of Proxmox specifically.

Backups are a complete representation of a virtual machine: you can take a backup and restore it to a new VM, or even to a new Proxmox host, and spin up a new copy with little to no modification (moving to a new host may require some changes to the network configuration in some cases). You should always have recent backups of critical systems even when you are using a snapshot, because backups are superior in versatility; however, they take longer to create and restore from than snapshots.

Snapshots are a representation of a virtual machine’s state rather than the VM in its entirety. That’s an important distinction, because you cannot spin up a new VM from a snapshot or move it to a new host using one. Snapshots do, however, have the advantage of being much faster than a backup to create and restore from.

As a general rule of thumb, if you just want to create a “checkpoint” of a VM before doing a major upgrade or making a risky change, go with a snapshot so you can quickly roll it back if things go awry, and remove the snapshot when you’re finished. For any other use case, just use a backup.

Manual Backup and Restore

Create a Backup

You should have at least one VM right now, so let’s take a backup of it. It is perfectly fine to perform a backup on a VM that is running; there’s no need to shut it down. Drill down to the name of your VM on the left-hand menu and click on it, then click the backup tab, and then click the backup now button. The steps for manually working with backups and snapshots are pretty much identical, so I will skip going over snapshots; you can follow these same steps using the snapshots tab instead of the backup tab.

Start a Backup

In the options menu, the following options are available:

  • Storage: Where the backup will be saved to (there is likely only local available right now)
  • Mode: I always leave this on snapshot, which means the system will take a snapshot before backing up and remove it after the backup is finished. This way, any data that is modified in the VM during the backup will be ignored; essentially, the backup will contain the state the VM was in when you started it, and any changes made during the backup will exist only on the running instance and not in the backup.
  • Compression: I always leave this on LZO; GZIP makes for smaller backups that use up less space, but it takes a lot more time to complete.
  • Send email to: Since we haven’t set up Email alerts yet, we can just leave this blank, it’s for sending the results of the backup when complete.

When the backup is started, you’ll be presented with a progress screen. It is safe to close this and let the backup run in the background if you wish; you can re-open it by double-clicking the backup task in the bottom section of the web interface. It should finish with TASK OK.

Backup Complete

If you go back to the backup tab you will see your shiny VM backup listed (if you don’t see it, try clicking another tab in the menu and then click back to the backup tab). The filename will always be something like vzdump-qemu-(VM ID)-(Year)_(Month)_(Day)-(Hour)_(Minute)_(Second).vma.lzo

Restore from Backup

From the backup tab of your target VM, click the name of the backup you want to restore, then click the restore button.

Restore a Backup

You are given the option to choose which storage device to use (likely your only option right now is local-lvm) and the read limit for this action. The read limit sets the maximum read speed from the source backup file, and subsequently limits the write speed of the restoration process. On my busy system I use 50 MiB/s to prevent performance issues with any running VMs, but you may want to go higher or lower depending on what type of storage you are using; my storage devices run on a RAID 10 array, so they are a bit faster than a single HDD but slower than an SSD.

Restore to a new VM from Backup

You can also do a restore to a VM other than the one originally backed up. From the web interface, drill down to the storage your backups live on (probably local), click the content tab, then the name of the backup, then the restore button.

Alternate Restore Method

You have two new options this time: VM ID, a numerical identifier that is unique for each VM (don’t use values lower than 100 here!), and the unique checkbox. The unique checkbox will randomize certain values like the MAC address so the network will see this VM as a unique system compared to the originally backed-up system; I recommend leaving it checked.

Scheduled Backups

The first thing we need to do is figure out how many backups we can store for each VM. To calculate this, we will use the following formula:

(Backup Space Available / Total size of one backup for all VMs) - 1 for extra padding

The - 1 for extra padding is there because when the scheduled backup runs, it will create a new backup before removing the oldest copy, so you need a little extra space for that action.

As an example, if I have 32 GiB of available backup space, and 3 VMs that each take up about 2 GiB per backup, the formula would be:

(32 GiB Available / (3 VMs * 2 GiB each backup)) - 1
(32 / (3 * 2)) - 1
(32 / 6) - 1
5.33 - 1 = 4.33 Backups

Rounding down, I can store 4 backups for each VM. I am going to set up backups to only save one copy in this tutorial, but I can safely go up to 4 so long as I don’t add more VMs to the backup schedule.
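If you’d rather script the arithmetic, here’s a minimal sketch using awk; the numbers are the hypothetical ones from the example above, so substitute your own:

```shell
# Hypothetical example values -- replace with your own measurements
AVAIL=32   # GiB of backup space available
VMS=3      # number of VMs in the schedule
SIZE=2     # GiB per backup of one VM

# (available / total size of one backup round) - 1, rounded down
awk -v a="$AVAIL" -v n="$VMS" -v s="$SIZE" \
    'BEGIN { printf "%d backups per VM\n", int(a / (n * s)) - 1 }'
```

With the values above this prints 4 backups per VM, matching the hand calculation.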

To find the size of each VM’s backup, drill down to the name of that VM in the web interface, click the backup tab, and you will find the size of each backup file on the far right; if there are multiple backups, use the largest one for your calculations.

Now that we have the math stuff out of the way, from the web interface, navigate to datacenter > storage, click on local (or your backup storage if you’ve added extra storage), then click edit.

Edit Storage

Next to max backups, enter the number from the calculation above, or just use 1 like I have if you want the maximum breathing room for future VMs.

Max Backups

Next, navigate to datacenter > backup, and click the add button.

Add Scheduled Backup

Set the target storage and time/days however you like, leave the Send email to field blank since we haven’t set that up yet, and check the box for each VM you want to be backed up on this schedule. You can have as many schedules as you like, just remember that each schedule will need to be modified in the next step to enable removing the older backups automatically. Update: It is not required to modify the backup cron job on most systems. If your scheduled backup fails with an error stating the maximum number of backups has been reached, see the steps at the end of this segment to fix that issue.

Scheduled Backup Settings

That’s all I’ve got for this segment, and I’m already working on part 3 for you! In the meantime I’ll leave you with this awesome list of open-source self-hosted projects that you can run on your home lab, maybe you’ll want to make a list of projects you’d like to set up?

Fix “Max Backups Limit Reached” Error

These steps are only required if your scheduled backup fails with an error stating the maximum number of backups has been reached.

Open up a shell session on your server.

Shell Session

Scheduled backups are saved by Proxmox as cron jobs; cron is a component of the Linux OS that runs a command at a specific time and date, and it’s a very powerful tool. We need to edit the cron job configuration for each backup schedule that needs to automatically remove older backups; if we don’t, the backup will fail once the maximum number of backups is reached.

nano /etc/cron.d/vzdump

All you need to do is add --remove 1 to each line right before the --mailnotification setting. It should look something like the screenshot below.
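If you have several lines to fix, sed can do the edit for you, in the same spirit as the earlier commands (a sketch; it assumes no line already contains --remove):

```shell
# Insert "--remove 1" in front of every --mailnotification setting
sed -i -e 's/--mailnotification/--remove 1 --mailnotification/' /etc/cron.d/vzdump
```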

Added --remove 1