Introduction
Proxmox VE 8.x introduces several new major features. You should plan the upgrade carefully, make and verify backups before beginning, and test extensively. Depending on the existing configuration, several manual steps—including some downtime—may be required.
Note: A valid and tested backup is always required before starting the upgrade process. Test the backup beforehand in a test lab setup.
In case the system is customized and/or uses additional packages or any other third party repositories/packages, ensure those packages are also upgraded to and compatible with Debian Bookworm.
In general, there are two ways to upgrade a Proxmox VE 7.x system to Proxmox VE 8.x:
- A new installation on new hardware (restoring VMs from the backup)
- An in-place upgrade via apt (step-by-step)
New installation
- Backup all VMs and containers to an external storage (see Backup and Restore).
- Backup all files in /etc (required: files in /etc/pve, as well as /etc/passwd , /etc/network/interfaces , /etc/resolv.conf , and anything that deviates from a default installation); see the example command after this list.
- Install latest Proxmox VE 8.x from the ISO (this will delete all data on the existing host).
- Empty the browser cache and/or force-reload ( CTRL + SHIFT + R , or for macOS ⌘ + Alt + R ) the Web UI.
- Rebuild your cluster, if applicable.
- Restore the file /etc/pve/storage.cfg (this will make the external storage used for backup available).
- Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if applicable).
- Restore all VMs from backups (see Backup and Restore).
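For the /etc backup mentioned in the list above, a minimal sketch (the archive path and the file list are only an example; include anything else that deviates from a default installation):
tar czf /root/pve-etc-backup.tar.gz /etc/pve /etc/passwd /etc/network/interfaces /etc/resolv.conf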
Administrators comfortable with the command line can follow the procedure Bypassing backup and restore when upgrading, if all VMs/CTs are on a single shared storage.
In-place upgrade
In-place upgrades are carried out via apt. Familiarity with apt is required to proceed with this upgrade method.
Prerequisites
- Upgraded to the latest version of Proxmox VE 7.4 on all nodes. Ensure your node(s) have correct package repository configuration (web UI, Node -> Repositories) if your pve-manager version isn’t at least 7.4-15 .
- Hyper-converged Ceph: upgrade any Ceph Octopus or Ceph Pacific cluster to Ceph 17.2 Quincy before you start the Proxmox VE upgrade to 8.0. Follow the guide Ceph Octopus to Pacific and Ceph Pacific to Quincy, respectively.
- Co-installed Proxmox Backup Server: see the Proxmox Backup Server 2 to 3 upgrade how-to
- Reliable access to the node. It’s recommended to have access over a host independent channel like iKVM/IPMI or physical access. If only SSH is available we recommend testing the upgrade on an identical, but non-production machine first.
- A healthy cluster
- Valid and tested backup of all VMs and CTs (in case something goes wrong)
- At least 5 GB free disk space on the root mount point (a quick check command is shown after this list).
- Check known upgrade issues
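A quick way to verify the free space on the root mount point, for example:
df -h /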
Testing the Upgrade
An upgrade test can be easily performed using a standalone server. Install the Proxmox VE 7.4 ISO on some test hardware, then upgrade this installation to the latest minor version of Proxmox VE 7.4 (see Package repositories). To replicate the production setup as closely as possible, copy or create all relevant configurations to the test machine, then start the upgrade. It is also possible to install Proxmox VE 7.4 in a VM and test the upgrade in this environment.
Actions step-by-step
The following actions need to be carried out from the command line of each Proxmox VE node in your cluster
Perform the actions via console or SSH; preferably via console to avoid interrupted SSH connections. Do not carry out the upgrade when connected via the virtual console offered by the GUI, as this will get interrupted during the upgrade.
Remember to ensure that a valid backup of all VMs and CTs has been created before proceeding.
Continuously use the pve7to8 checklist script
A small checklist program named pve7to8 is included in the latest Proxmox VE 7.4 packages. The program will provide hints and warnings about potential issues before, during and after the upgrade process. You can call it by executing:
pve7to8
To run it with all checks enabled, execute:
pve7to8 --full
Make sure to run the full checks at least once before the upgrade.
This script only checks and reports things. By default, no changes to the system are made and thus, none of the issues will be automatically fixed. You should keep in mind that Proxmox VE can be heavily customized, so the script may not recognize all the possible problems with a particular setup!
It is recommended to re-run the script after each attempt to fix an issue. This ensures that the actions taken actually fixed the respective warning.
Move important Virtual Machines and Containers
If any VMs and CTs need to keep running for the duration of the upgrade, migrate them away from the node that is being upgraded.
Migration compatibility rules to keep in mind when planning your cluster upgrade:
- A migration of a VM or CT from an older version of Proxmox VE to a newer version will always work.
- A migration from a newer Proxmox VE version to an older version may work, but is generally not supported.
Update the configured APT repositories
First, make sure that the system is using the latest Proxmox VE 7.4 packages:
apt update
apt dist-upgrade
pveversion
The last command should report at least 7.4-15 or newer.
Update Debian Base Repositories to Bookworm
Update all Debian and Proxmox VE repository entries to Bookworm.
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
Ensure that there are no remaining Debian Bullseye specific repositories left; if there are, you can use the # symbol at the start of the respective line to comment these repositories out. Check all files in /etc/apt/sources.list.d/ (for example pve-enterprise.list) as well as /etc/apt/sources.list, and see Package_Repositories for the correct Proxmox VE 8 / Debian Bookworm repositories.
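To double-check that nothing was missed, you could, for example, search for leftover Bullseye entries:
grep -rn bullseye /etc/apt/sources.list /etc/apt/sources.list.d/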
Add the Proxmox VE 8 Package Repository
echo "deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list
For the no-subscription repository, see Package Repositories. Rather than commenting out/removing the PVE 7.x repositories, as was previously mentioned, you could also run the following command to update to the Proxmox VE 8 repositories:
sed -i -e 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-install-repo.list
Update the Ceph Package Repository
Note: This applies to hyper-converged Ceph setups only. If unsure, check the Ceph panel and the configured repositories in the Web UI of this node.
Replace any ceph.com repositories with proxmox.com ceph repositories.
NOTE: At this point, a hyper-converged Ceph cluster installed directly in Proxmox VE must run Ceph 17.2 Quincy; if it does not, you need to upgrade Ceph first, before upgrading to Proxmox VE 8 on Debian 12 Bookworm! You can check the current Ceph version in the Ceph panel of each node in the Web UI of Proxmox VE.
With Proxmox VE 8 there also exists an enterprise repository for ceph, providing the best choice for production setups.
echo "deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise" > /etc/apt/sources.list.d/ceph.list
If updating fails with a 401 error, you might need to refresh the subscription first to ensure access to the new Ceph repository is granted. Do this via the Web UI or with pvesubscription update --force .
If you do not have any subscription you can use the no-subscription repository:
echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" > /etc/apt/sources.list.d/ceph.list
If there is a backports line, remove it — the upgrade has not been tested with packages from the backports repository installed.
Refresh Package Index
Update the repositories’ package index:
apt update
Upgrade the system to Debian Bookworm and Proxmox VE 8.0
Note that the time required for finishing this step heavily depends on the system’s performance, especially the root filesystem’s IOPS and bandwidth. A slow spinner can take up to 60 minutes or more, while for a high-performance server with SSD storage, the dist-upgrade can be finished in under 5 minutes.
Start with this step, to get the initial set of upgraded packages:
apt dist-upgrade
During the above step, you will be asked to approve changes to configuration files, where the default config has been updated by their respective package.
It’s suggested to check the difference for each file in question and choose the answer according to what’s most appropriate for your setup.
Common configuration files with changes, and the recommended choices are:
- /etc/issue -> Proxmox VE will auto-generate this file on boot, and it has only cosmetic effects on the login console. Using the default «No» (keep your currently-installed version) is safe here.
- /etc/lvm/lvm.conf -> Changes relevant for Proxmox VE will be updated, and a newer config version might be useful. If you did not make extra changes yourself and are unsure it’s suggested to choose «Yes» (install the package maintainer’s version) here.
- /etc/ssh/sshd_config -> If you have not changed this file manually, the only differences should be a replacement of ChallengeResponseAuthentication no with KbdInteractiveAuthentication no and some irrelevant changes in comments (lines starting with # ). If this is the case, both options are safe, though we would recommend installing the package maintainer’s version in order to move away from the deprecated ChallengeResponseAuthentication option. If there are other changes, we suggest to inspect them closely and decide accordingly.
- /etc/default/grub -> Here you may want to take special care, as this is normally only asked for if you changed it manually, e.g., for adding some kernel command line option. It’s recommended to check the difference for any relevant change; note that changes in comments (lines starting with # ) are not relevant. If unsure, we suggest selecting «No» (keep your currently-installed version).
Check Result & Reboot Into Updated Kernel
If the dist-upgrade command exits successfully, you can re-check the pve7to8 checker script and reboot the system in order to use the new Proxmox VE kernel.
Please note that you should reboot even if you already used the 6.2 kernel previously, through the opt-in package on Proxmox VE 7. This is required to guarantee the best compatibility with the rest of the system, as the updated kernel was (re-)built with the newer Proxmox VE 8 compiler and ABI versions.
After the Proxmox VE upgrade
Empty the browser cache and/or force-reload ( CTRL + SHIFT + R , or for macOS ⌘ + Alt + R ) the Web UI.
For Clusters
- Check that all nodes are up and running on the latest package versions. If not, continue the upgrade on the next node, starting over at #Prerequisites.
Checklist issues
proxmox-ve package is too old
Check the configured package repository entries; they still need to be for Proxmox VE 7.x and Bullseye at this step (see Package_Repositories). Then run
apt update
apt dist-upgrade
to get the latest Proxmox VE 7.x packages before upgrading to PVE 8.x
Known Upgrade Issues
General
As a Debian based distribution, Proxmox VE is affected by most issues and changes affecting Debian. Thus, ensure that you read the upgrade specific issues for Debian Bookworm, for example the transition from classic NTP to NTPsec.
Please also check the known issue list from the Proxmox VE 8.0 changelog: https://pve.proxmox.com/wiki/Roadmap#8.0-known-issues
Upgrade wants to remove package ‘proxmox-ve’
If you have installed Proxmox VE on top of a plain Debian Bullseye (without using the Proxmox VE ISO), you may have installed the package ‘linux-image-amd64’, which conflicts with current 7.x setups. To solve this, you have to remove this package with
apt remove linux-image-amd64
before the dist-upgrade.
Third-party Storage Plugins
If you use any external storage plugin you need to wait until the plugin author adapted it for Proxmox VE 8.0.
Older Hardware and New 6.2 Kernel and Other Software
Compatibility of old hardware (released >= 10 years ago) is not as thoroughly tested as more recent hardware. For old hardware we highly recommend testing compatibility of Proxmox VE 8 with identical (or at least similar) hardware before upgrading any production machines.
Ceph has been reported to run into «illegal instruction» errors with at least AMD Opteron 2427 (released in 2009) and AMD Turion II Neo N54L (released in 2010) CPUs.
We will expand this section with potential pitfalls and workarounds once they arise.
6.2 Kernels regressed KSM performance on multi-socket NUMA systems
Kernels based on 6.2 have a degraded Kernel Samepage Merging (KSM) performance on multi-socket NUMA systems, depending on the workload this can result in a significant amount of memory that is not deduplicated anymore. This issue went unnoticed for a few kernel releases, making a clean backport of the fixes made for 6.5 hard to do without some general fall-out.
Until a targeted fix for the upstream LTS 6.1 kernel is found, the current recommendation is to keep your multi-socket NUMA systems that rely on KSM on Proxmox VE 7 with its 5.15 based kernel. We plan to change the default kernel to a 6.5 based kernel in Q4 2023, which will also resolve this issue.
GRUB Might Fail To Boot From LVM in UEFI Mode
Due to a bug in grub in PVE 7 and before, grub may fail to boot from LVM with an error message like disk `lvmid/...` not found. When booting in UEFI mode, you need to ensure that the new grub version containing the fix is indeed used for booting the system.
Systems with Root on ZFS and systems booting in legacy mode are not affected.
On systems booting in EFI mode with root on LVM, install the correct grub meta-package with:
[ -d /sys/firmware/efi ] && apt install grub-efi-amd64
VM Live-Migration
VM Live-Migration with different host CPUs
Live migration between nodes with different CPU models and especially different vendors can cause problems, such as VMs becoming unresponsive and causing high CPU utilization.
We recommend testing live migration with a non-production VM first when upgrading. More generally, we highly encourage using homogeneous setups in clusters that use live migration.
VM Live-Migration with Intel Skylake (or newer) CPUs
Previous 6.2 kernels had problems with incoming live migrations when all of the following were true:
- VM has a restricted CPU type (e.g., qemu64 ) – using CPU type host or Skylake-Server is ok.
- the source host uses an Intel CPU from Skylake Server, Tiger Lake Desktop, or equivalent newer generation.
- the source host is booted with a kernel version 5.15 (or older) (e.g. when upgrading from Proxmox VE 7.4)
In this case, the VM could hang after migration and use 100% of one or more vCPUs.
This was fixed with pve-kernel-6.2.16-4-pve in version 6.2.16-5 . So make sure your target host is booted with this (or a newer) kernel version if the above points apply to your setup.
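To verify which kernel version the target host is currently booted with, you can check, for example:
uname -r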
Network
Network Interface Name Change
Because the new kernel recognizes more features of some hardware (for example virtual functions), and interface naming often derives from the PCI(e) address, some NICs may change their name, in which case the network configuration needs to be adapted.
In general, it’s recommended to either have an independent remote connection to the Proxmox VE’s host console, for example, through IPMI or iKVM, or physical access for managing the server even when its own network doesn’t come up after a major upgrade or network change.
Network Setup Hangs on Boot Due to NTPsec Hook
If both ntpsec and ntpsec-ntpdate are installed, the network will fail to come up cleanly on boot and hang, but will work if triggered manually (e.g., using ifreload -a ). Even if the two packages are not present before the upgrade, they will be installed during the upgrade if both ntp and ntpdate were installed beforehand.
Since the chrony NTP daemon has been the default for new installations since Proxmox VE 7.0, the simplest solution might be switching to it via apt install chrony . If this is not possible, it suffices to keep ntpsec but uninstall ntpsec-ntpdate (according to its package description, that package is not necessary if ntpsec is installed). If the host is already hanging during boot, one quick workaround is to boot into recovery mode, enter the root password, run chmod -x /etc/network/if-up.d/ntpsec-ntpdate and reboot.
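A minimal sketch of the two package-level options described above:
# Option 1: switch to chrony (the default NTP daemon since Proxmox VE 7.0)
apt install chrony
# Option 2: keep ntpsec but remove the hook package that causes the hang
apt remove ntpsec-ntpdate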
The root cause for the hang is that the script /etc/network/if-up.d/ntpsec-ntpdate installed by ntpsec-ntpdate causes ifupdown2 to hang during boot if ntpsec is installed. For more information, see bug #5009.
cgroup V1 Deprecation
Reminder: since the previous major release, Proxmox VE 7.0, the default is a pure cgroupv2 environment. While Proxmox VE 8 did not change in this regard, we’d like to note that Proxmox VE 8 will be the last release series that supports booting into the old «hybrid» cgroup system, e.g., for compatibility with ancient container OSes.
That means that containers running systemd version 230 (released in 2016) or older won’t be supported at all in the next major release (Proxmox VE 9, ~ 2025 Q2/Q3). If you still run such a container (e.g., CentOS 7 or Ubuntu 16.04), please use the Proxmox VE 8 release cycle as a time window to migrate to newer, still supported versions of the respective container OS.
NVIDIA vGPU Compatibility
If you are using NVIDIA’s GRID/vGPU technology, its driver must be compatible with the kernel you are using. Make sure you use at least GRID version 16.0 (driver version 535.54.06 — current as of July 2023) on the host before upgrading, since older versions (e.g. 15.x) are not compatible with kernel versions >= 6.0 and Proxmox VE 8.0 ships with at least 6.2.
Systemd-boot (for ZFS on root and UEFI systems only)
Systems booting via UEFI from a ZFS on root setup should install the systemd-boot package after the upgrade. You will get a Warning from the pve7to8 script after the upgrade if your system is affected — in all other cases you can safely ignore this point.
The systemd-boot package was split out from the systemd package for Debian Bookworm based releases. It won’t get installed automatically upon upgrade from Proxmox VE 7.4, as it can cause trouble on systems not booting from UEFI with a ZFS-on-root setup created by the Proxmox VE installer.
Systems which have ZFS on root and boot in UEFI mode will need to manually install it if they need to initialize a new ESP (see the output of proxmox-boot-tool status and the relevant documentation).
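A hedged sketch of the check and, where pve7to8 flagged the system as affected, the installation:
proxmox-boot-tool status
apt install systemd-boot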
Note that the system remains bootable even without the package installed.
It is not recommended to install systemd-boot on systems which don’t need it, as it would replace grub as the bootloader in its postinst script.
Troubleshooting
Failing upgrade to «bookworm»
Make sure that the repository configuration for Bookworm is correct.
If there was a network failure and the upgrade was only partially completed, try to repair the situation with
apt -f install
If you see the following message:
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
then one or more of the currently existing packages cannot be upgraded since the proper Bookworm repository is not configured.
Check which of the previously used repositories (i.e. for Bullseye) do not exist for Bookworm or have not been updated to Bookworm ones.
If a corresponding Bookworm repository exists, upgrade the configuration (see also special remark for Ceph).
If an upgrade is not possible, configure all repositories as they were before the upgrade attempt, then run:
apt update
again. Then remove all packages which are currently installed from that repository. Following this, start the upgrade procedure again.
Unable to boot due to grub failure
If your system was installed on ZFS using legacy BIOS boot before the Proxmox VE 6.4 ISO, incompatibilities between the ZFS implementation in grub and newer ZFS versions can lead to a broken boot. Check the article on switching to proxmox-boot-tool ZFS: Switch Legacy-Boot to Proxmox Boot Tool for more details.
Introduction
Proxmox VE 7.x introduces several new major features. You should plan the upgrade carefully, make and verify backups before beginning, and test extensively. Depending on the existing configuration, several manual steps—including some downtime—may be required.
Note: A valid and tested backup is always required before starting the upgrade process. Test the backup beforehand in a test lab setup.
In case the system is customized and/or uses additional packages or any other third party repositories/packages, ensure those packages are also upgraded to and compatible with Debian Bullseye.
In general, there are two ways to upgrade a Proxmox VE 6.x system to Proxmox VE 7.x:
- A new installation on new hardware (restoring VMs from the backup)
- An in-place upgrade via apt (step-by-step)
In both cases, emptying the browser cache and reloading the GUI are required after the upgrade.
New installation
- Backup all VMs and containers to an external storage (see Backup and Restore).
- Backup all files in /etc (required: files in /etc/pve, as well as /etc/passwd, /etc/network/interfaces, /etc/resolv.conf, and anything that deviates from a default installation).
- Install Proxmox VE 7.x from the ISO (this will delete all data on the existing host).
- Rebuild your cluster, if applicable.
- Restore the file /etc/pve/storage.cfg (this will make the external storage used for backup available).
- Restore firewall configs /etc/pve/firewall/ and /etc/pve/nodes/<node>/host.fw (if applicable).
- Restore all VMs from backups (see Backup and Restore).
Administrators comfortable with the command line can follow the procedure Bypassing backup and restore when upgrading, if all VMs/CTs are on a single shared storage.
In-place upgrade
In-place upgrades are carried out via apt. Familiarity with apt is required to proceed with this upgrade method.
Preconditions
- Upgraded to the latest version of Proxmox VE 6.4 (check correct package repository configuration)
- Hyper-converged Ceph: upgrade the Ceph Nautilus cluster to Ceph 15.2 Octopus before you start the Proxmox VE upgrade to 7.0. Follow the guide Ceph Nautilus to Octopus
- Co-installed Proxmox Backup Server: see the Proxmox Backup Server 1.1 to 2.x upgrade how-to
- Reliable access to the node. It’s recommended to have access over a host-independent channel like iKVM/IPMI or physical access. If only SSH is available, we recommend testing the upgrade on an identical, but non-production machine first.
- A healthy cluster
- Valid and tested backup of all VMs and CTs (in case something goes wrong)
- At least 4 GiB free disk space on the root mount point.
- Check known upgrade issues
Testing the Upgrade
An upgrade test can be easily performed using a standalone server. Install the Proxmox VE 6.4 ISO on some test hardware, then upgrade this installation to the latest minor version of Proxmox VE 6.4 (see Package repositories). To replicate the production setup as closely as possible, copy or create all relevant configurations to the test machine, then start the upgrade. It is also possible to install Proxmox VE 6.4 in a VM and test the upgrade in this environment.
Actions step-by-step
The following actions need to be carried out from the command line of each Proxmox VE node in your cluster
Perform the actions via console or SSH; preferably via console to avoid interrupted SSH connections. Do not carry out the upgrade when connected via the virtual console offered by the GUI, as this will get interrupted during the upgrade.
Remember to ensure that a valid backup of all VMs and CTs has been created before proceeding.
Continuously use the pve6to7 checklist script
A small checklist program named pve6to7 is included in the latest Proxmox VE 6.4 packages. The program will provide hints and warnings about potential issues before, during and after the upgrade process. You can call it by executing:
pve6to7
To run it with all checks enabled, execute:
pve6to7 --full
Make sure to run the full checks at least once before the upgrade.
This script only checks and reports things. By default, no changes to the system are made and thus, none of the issues will be automatically fixed. You should keep in mind that Proxmox VE can be heavily customized, so the script may not recognize all the possible problems with a particular setup!
It is recommended to re-run the script after each attempt to fix an issue. This ensures that the actions taken actually fixed the respective warning.
Move important Virtual Machines and Containers
If any VMs and CTs need to keep running for the duration of the upgrade, migrate them away from the node that is being upgraded. A migration of a VM or CT from an older version of Proxmox VE to a newer version will always work. A migration from a newer Proxmox VE version to an older version may work, but is generally not supported. Keep this in mind when planning your cluster upgrade.
Check Linux Network Bridge MAC
With Proxmox VE 7, the MAC address of the Linux bridge itself may change, as noted in Upgrade from 6.x to 7.0#Linux Bridge MAC-Address Change.
In hosted setups, the MAC address of a host is often restricted, to avoid spoofing by other hosts.
Solution A: Use ifupdown2
The ifupdown2 package, which Proxmox ships in the Proxmox VE 7.x repository, was adapted with a new policy configuration, so that it always derives the MAC address from the bridge port.
If you’re already using ifupdown2 with Proxmox VE 6.4, and you upgrade to Proxmox VE 7.x, the ifupdown2 version 3.1.0-1+pmx1 (or newer) will ensure that you do not need to adapt anything else.
Solution B: Hardcode MAC Address
You can either tell your hosting provider the new (additional) bridge MAC address of your Proxmox VE host, or you need to explicitly configure the bridge to keep using the old MAC address.
You can get the MAC address of all network devices, using the command ip -c link . Then, edit your network configuration at /etc/network/interfaces , adding a hwaddress MAC line to the respective bridge section.
For example, by default, the main bridge is called vmbr0 , so the change would look like:
auto vmbr0
iface vmbr0 inet static
    address 192.168.X.Y/24
    hwaddress aa:bb:cc:12:34:56
    # ... remaining options
If ifupdown2 is installed, you can use ifreload -a to apply this change. For the legacy ifupdown, ifreload is not available, so you either need to reboot or use ifdown vmbr0; ifup vmbr0 (enter both semi-colon separated commands in one go!).
Note: hard-coding the MAC requires manual adaptation if you ever change your physical NIC.
Update the configured APT repositories
First, make sure that the system is using the latest Proxmox VE 6.4 packages:
apt update
apt dist-upgrade
Update all Debian repository entries to Bullseye.
sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
Note that Debian changed its security update repo from deb http://security.debian.org buster/updates main to deb http://security.debian.org bullseye-security main for the sake of consistency. The above command accounts for that change already.
You must also disable all Proxmox VE 6.x repositories, including the pve-enterprise repository, the pve-no-subscription repository and the pvetest repository. Use the # symbol to comment out these repositories in the /etc/apt/sources.list.d/pve-enterprise.list and /etc/apt/sources.list files. See Package_Repositories
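One way to comment out the old enterprise entry in place, as a hedged example (adapt the file name to your setup):
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list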
Add the Proxmox VE 7 Package Repository
echo "deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list
For the no-subscription repository, see Package Repositories. Rather than commenting out/removing the PVE 6.x repositories, as was previously mentioned, you could also run the following command to update to the Proxmox VE 7 repositories:
sed -i -e 's/buster/bullseye/g' /etc/apt/sources.list.d/pve-install-repo.list
(Ceph only) Replace ceph.com repositories with proxmox.com ceph repositories
echo "deb http://download.proxmox.com/debian/ceph-octopus bullseye main" > /etc/apt/sources.list.d/ceph.list
If there is a backports line, remove it — the upgrade has not been tested with packages from the backports repository installed.
Update the repositories’ data:
apt update
Upgrade the system to Debian Bullseye and Proxmox VE 7.0
Note that the time required for finishing this step heavily depends on the system’s performance, especially the root filesystem’s IOPS and bandwidth. A slow spinner can take up to 60 minutes or more, while for a high-performance server with SSD storage, the dist-upgrade can be finished in 5 minutes.
Start with this step, to get the initial set of upgraded packages:
apt dist-upgrade
During the above step, you may be asked to approve some new packages that want to replace certain configuration files. These are not relevant to the Proxmox VE upgrade, so you can choose what’s most appropriate for your setup.
If the command exits successfully, you can reboot the system in order to use the new PVE kernel.
After the Proxmox VE upgrade
For Clusters
- Check that all nodes are up and running on the latest package versions.
For Hyper-converged Ceph
Now you can upgrade the Ceph cluster to the Pacific release, following the article Ceph Octopus to Pacific. Note that while an upgrade is recommended, it’s not strictly necessary. Ceph Octopus will be supported until its end-of-life (circa Q2 2022) in Proxmox VE 7.x.
Checklist issues
proxmox-ve package is too old
Check the configured package repository entries; they still need to be for Proxmox VE 6.x and Buster at this step (see Package_Repositories). Then run
apt update
apt dist-upgrade
to get the latest PVE 6.x packages before upgrading to PVE 7.x
storage: parameter ‘maxfiles’ is deprecated
To change over to the new, more powerful prune retention settings, edit the storage in the web interface: select the Backup Retention tab, alter any value, and submit. Setting only keep-last has the same effect as maxfiles had.
We recommend using the opportunity to configure a new, more flexible retention now. Note: once upgraded to Proxmox VE 7.x you will be able to configure the backup retention more fine-grained per backup job via the web interface.
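If you prefer the command line, the same retention can likely be set per storage with pvesm; a hedged sketch (the storage ID local and the keep value are placeholders):
pvesm set local --prune-backups keep-last=3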
Custom role: contains permission ‘Pool.Allocate’
After the upgrade of all nodes in the cluster to Proxmox VE 7.x copy the following command, adapt the CustomRoleID and run it as root once on any PVE node:
pveum role modify CustomRoleID --append --privs Pool.Audit
The implication here is that between some point during the upgrade and the time when the above command is executed, there is a short period where the role has slightly fewer privileges than it has afterwards. Proxmox VE users of that role may thus not see all details if they had access to the resource pools configured through that custom role.
Known upgrade issues
General
As a Debian based distribution, Proxmox VE is affected by most issues and changes affecting Debian. Thus, ensure that you read the upgrade specific issues for Bullseye.
Please also check the known issue list from the Proxmox VE 7.0 changelog: https://pve.proxmox.com/wiki/Roadmap#7.0-known-issues
Upgrade wants to remove package ‘proxmox-ve’
If you have installed Proxmox VE on top of Debian Buster, you may have installed the package ‘linux-image-amd64’, which conflicts with current 6.x setups. To solve this, you have to remove this package with
apt remove linux-image-amd64
before the dist-upgrade.
No ‘root’ password set
The root account must have a password set (that you remember). If not, the sudo package will be uninstalled during the upgrade, and so you will not be able to log in again as root.
If you used the official Proxmox VE or Debian installer, and you didn’t remove the password after the installation, you are safe.
Third-party Storage Plugins
The external, third-party storage plugin mechanism had an ABI-version bump that reset the ABI age. This means there was an incompatible breaking change that external plugins must adapt to before they can be loaded again.
If you use any external storage plugin you need to wait until the plugin author adapted it for Proxmox VE 7.0.
Older Hardware and New 5.15 Kernel
KVM: entry failed, hardware error 0x80000021
Recommended Fix
Update to the 5.15 based kernel package pve-kernel-5.15.39-3-pve in version 5.15.39-3 , or newer, which contains fixes for the underlying issue. The package is available on all repositories.
Background
Note: This issue was resolved, see above for the kernel version with the recommended fix you should upgrade.
With the 5.15 kernel, the two-dimensional paging (TDP) memory management unit (MMU) implementation got activated by default. The new implementation reduces the complexity of mapping the guest OS virtual memory address to the host’s physical memory address and improves performance, especially during live migrations for VMs with a lot of memory and many CPU cores. However, the new TDP MMU feature has been shown to cause regressions on some (mostly) older hardware, likely due to assumptions about when the fallback is required not being met by that HW.
The problem manifests as crashes of the machine with a kernel ( dmesg ) or journalctl log entry with, among others, a line like this:
KVM: entry failed, hardware error 0x80000021
Normally there’s also an assert error message logged from the QEMU process around the same time. Windows VMs are the most commonly affected in the user reports.
The affected models could not get pinpointed exactly, but it seems CPUs launched over 8 years ago are most likely triggering the issue. Note that there are known cases where updating to the latest available firmware (BIOS/EFI) and CPU microcode fixed the regression. Thus, before trying the workaround below, we recommend ensuring that you have the latest firmware and CPU microcode installed.
Old Workaround: Disable tdp_mmu
Note: This should not be necessary anymore, see above for the kernel version with the recommended fix you should upgrade.
The tdp_mmu kvm module option can be used to force disabling the usage of the two-dimensional paging (TDP) MMU.
- You can either add that parameter to the PVE host’s kernel command line as kvm.tdp_mmu=N , see this reference documentation section.
- Alternatively, set the module option using a modprobe config, for example:
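One possible form, as a sketch (the file name under /etc/modprobe.d/ is arbitrary; the option mirrors the kernel command line variant above):
echo "options kvm tdp_mmu=N" > /etc/modprobe.d/kvm-tdp-mmu.conf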
To finish applying the workaround, always run update-initramfs -k all -u to update the initramfs for all kernels and then reboot the Proxmox VE host.
You can confirm that the change is active by checking that the output of cat /sys/module/kvm/parameters/tdp_mmu is N .
Network
Linux Bridge MAC-Address Change
With Proxmox VE 7 / Debian Bullseye, a new systemd version is used that changes how the MAC addresses of Linux network bridge devices are calculated:
MACAddressPolicy=persistent was extended to set MAC addresses based on the device name. Previously addresses were only based on the ID_NET_NAME_* attributes, which meant that interface names would never be generated for virtual devices. Now a persistent address will be generated for most devices, including in particular bridges.
— https://www.freedesktop.org/software/systemd/man/systemd.net-naming-scheme.html#v241
A unique and persistent MAC address is now calculated using the bridge name and the unique machine-id ( /etc/machine-id ), which is generated at install time.
Please either ensure that any ebtables or similar rules that use the previous bridge MAC address are updated, or configure the desired bridge MAC address explicitly, by switching to ifupdown2 and adding hwaddress to the respective entry in /etc/network/interfaces .
Older Virtual Machines with Windows and Static Network
Since QEMU 5.2, first introduced in Proxmox VE 6.4, the way QEMU sets the ACPI ID for PCI devices changed to conform to standards. This led to some Windows guests losing their device configuration, as they detect the re-ordered devices as new ones.
Due to this, Proxmox VE will now pin the machine version for Windows-based guests to the newest available on guest creation, or to the minimum of (5.2, latest-available) for existing ones. You can also easily change the machine version through the web interface now. See this forum thread for further information.
Note that if you have already upgraded to Proxmox VE 6.4, your system has implemented this change already, so you can ignore it.
Network Interface Name Change
Because the new kernel recognizes more features of some hardware (for example virtual functions), and interface naming often derives from the PCI(e) address, some NICs may change their name, in which case the network configuration needs to be adapted.
NIC name changes have been observed at least on the following hardware:
- High-speed Mellanox models. For example, due to newly supported functions, a change from enp33s0f0 to enp33s0f0np0 could occur.
- Broadcom BCM57412 NetXtreme-E 10Gb RDMA Ethernet. For example ens2f0np0 could change to enp101s0f0np0
In general, it’s recommended to either have an independent remote connection to the Proxmox VE host’s console, for example, through IPMI or iKVM, or physical access for managing the server even when its own network doesn’t come up after a major upgrade or network change.
CGroupV2
Old Container and CGroupv2
Since Proxmox VE 7.0, the default is a pure cgroupv2 environment. Previously a «hybrid» setup was used, where resource control was mainly done in cgroupv1 with an additional cgroupv2 controller which could take over some subsystems via the cgroup_no_v1 kernel command line parameter. (See the kernel parameter documentation for details.)
cgroupv2 support by the container’s OS is needed to run in a pure cgroupv2 environment. Containers running systemd version 231 (released in 2016) or newer support cgroupv2, as do containers that do not use systemd as init system in the first place (e.g., Alpine Linux or Devuan).
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases which have a systemd version that is too old to run in a cgroupv2 environment; for details and possible fixes, see: https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup_compat
Container HW Pass-Through & CGroupv2
Proxmox VE 7.0 defaults to the pure cgroupv2 environment, as v1 will slowly be sunset in systemd and other tooling. With that, some LXC config keys need a slightly different syntax; for example, for hardware pass-through you need to use lxc.cgroup2.devices.allow (instead of the old lxc.cgroup.devices.allow , note the added 2 ).
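As an illustration, a pass-through entry in a Proxmox container config would change roughly like this (the device numbers, here for a USB serial device, are only an example):
# old cgroupv1 syntax (hybrid layout):
# lxc.cgroup.devices.allow: c 188:* rwm
# new cgroupv2 syntax (Proxmox VE 7.x default):
lxc.cgroup2.devices.allow: c 188:* rwm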
Troubleshooting
Failing upgrade to «bullseye»
Make sure that the repository configuration for Bullseye is correct.
If there was a network failure and the upgrade was only partially completed, try to repair the situation with
apt -f install
If you see the following message:
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
then one or more of the currently existing packages cannot be upgraded since the proper Bullseye repository is not configured.
Check which of the previously used repositories (i.e. for Buster) do not exist for Bullseye or have not been upgraded to Bullseye ones.
If a corresponding Bullseye repository exists, upgrade the configuration (see also special remark for Ceph).
If an upgrade is not possible, configure all repositories as they were before the upgrade attempt, then run:
apt update
again. Then remove all packages which are currently installed from that repository. Following this, start the upgrade procedure again.
Unable to boot due to grub failure
If your system was installed on ZFS using legacy BIOS boot before the Proxmox VE 6.4 ISO, incompatibilities between the ZFS implementation in grub and newer ZFS versions can lead to a broken boot. Check the article on switching to proxmox-boot-tool ZFS: Switch Legacy-Boot to Proxmox Boot Tool for more details.
Configuring Proxmox after installation
After installing Proxmox, there are several important steps to carry out.
Update the system to the latest version
By default, Proxmox VE ships with the paid (enterprise) repository enabled; updates from it are only available to holders of a paid subscription. To get the latest hypervisor updates without a subscription, you need to disable the enterprise repository and enable the no-subscription repository (pve-no-subscription). If you skip this step, apt will report an error when refreshing the package sources.
1. Log in to the web interface (https://server.ip.address:8006) or connect to the server via SSH.
The following example uses the web interface. Open the console:
2. Edit the apt configuration file:
nano /etc/apt/sources.list.d/pve-enterprise.list
This file contains only one line. Put the # symbol in front of it to disable fetching updates from the paid repository:
#deb https://enterprise.proxmox.com stretch pve-enterprise
3. Press Ctrl + X to exit the editor, answering "Y" when asked whether to save the file.
4. Enable the no-subscription repository (pve-no-subscription). To do this, open the following file for editing:
nano /etc/apt/sources.list
5. Add the following lines to this file:
For Proxmox VE 7
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
For Proxmox VE 8
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
deb http://security.debian.org/debian-security bookworm-security main contrib
6. Press Ctrl + X to exit the editor, answering "Y" when asked whether to save the file.
7. Run the command to update the package lists and upgrade the system:
apt update && apt upgrade -y
8. Restart the server after the update completes.
Attaching an additional drive
There are several ways to attach an unused drive in Proxmox. One of the quickest is to add the drive as an LVM volume, which can then hold the disk images of VMs and containers.
1. Check for unused drives:
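The exact command is not shown in the original; one common way to list block devices and spot an unused disk, for example:
lsblk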
One unused drive, /dev/sdb, is visible. It can be initialized and used as an LVM volume.
2. Go to Disks/LVM, click Create: LVM Volume Group, select the disk and give the storage a name.
The LVM volume is now created and can be used to store VM disk images.
Take care of security
1. Open the server console via the web interface or SSH.
2. Update the package sources:
apt update
3. Install Fail2Ban:
apt install fail2ban
4. Open the tool's configuration for editing:
nano /etc/fail2ban/jail.conf
5. Adjust the variables bantime (the number of seconds for which an attacker is blocked) and maxretry (the number of allowed login/password attempts) for each individual service; an example sshd section is shown after this list.
6. Press Ctrl + X to exit the editor, answering "Y" when asked whether to save the file.
7. Restart the service:
systemctl restart fail2ban
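As a minimal sketch (the values are illustrative and not taken from the original article), the sshd section in jail.conf, or better an override in jail.local, could look like this:
[sshd]
enabled  = true
maxretry = 3
bantime  = 3600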
You can check the tool's status, for example to pull ban statistics for the blocked IP addresses from which SSH password brute-force attempts were made. These tasks can be done with one simple command:
fail2ban-client -v status sshd
The utility's response will look something like this:
root@hypervisor:~# fail2ban-client -v status sshd
INFO Loading configs for fail2ban under /etc/fail2ban
INFO Loading files: [‘/etc/fail2ban/fail2ban.conf’]
INFO Loading files: [‘/etc/fail2ban/fail2ban.conf’]
INFO Using socket file /var/run/fail2ban/fail2ban.sock
Status for the jail: sshd
|- Filter
| |- Currently failed: 3
| |- Total failed: 4249
| `- File list: /var/log/auth.log
`- Actions
|- Currently banned: 0
|- Total banned: 410
`- Banned IP list:
In the same way, you can protect the web interface from similar attacks by creating a corresponding rule. An example of such a rule for Fail2Ban can be found in the official guide.
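For reference, a jail and filter along these lines appear in the official guide; the following is a hedged sketch from memory rather than a verbatim copy, so verify it against the official documentation before relying on it:
# /etc/fail2ban/jail.local
[proxmox]
enabled  = true
port     = https,http,8006
filter   = proxmox
logpath  = /var/log/daemon.log
maxretry = 3
bantime  = 3600
# /etc/fail2ban/filter.d/proxmox.conf
[Definition]
failregex = pvedaemon\[.*authentication failure; rhost=<HOST> user=.*
ignoreregex =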
Upgrading Proxmox 7 to 8
On June 22, 2023, the next release of the popular virtualization system, Proxmox VE 8.0, came out. After waiting a short while, I decided to upgrade one of my servers. This usually doesn't cause any particular problems, because the process goes smoothly if you follow the official instructions. Since I manage many servers running Proxmox VE, I decided to write my own step-by-step guide for upgrading from version 7 to 8.
What's new in Proxmox VE 8
The full list of new features can be found in the press release. I'm skipping the kernel and software version updates and only noting new functionality in the free version:
- Automatic synchronization of users and groups from LDAP directories, including Microsoft AD.
- A new TUI (text-based user interface) installer. Like Debian, there are now two installer variants, and the text-based one resembles Debian's. I don't really see why development resources were spent on this; perhaps the GUI installer doesn't work in some cases and the TUI saves the day.
- Mapping of physical devices (PCI and USB) to cluster nodes. You can create a virtual device, map it to real devices on specific nodes, and attach it to a VM. The VM can then only migrate to nodes where a mapping for the required device exists. I haven't fully figured out what practical problem this solves.
- Automatic lockout of user accounts that reach the second authentication factor and fail it several times. In short, protection against brute-forcing the second factor when the primary password has already leaked to attackers.
- Access control lists (ACLs) for network resources. You can manage user access, for example, to bridges.
At first glance there aren't many new features; practically nothing significant except the device mapping, which is the most notable functional improvement.
Preparing for the upgrade
If your Proxmox version is below 7, first upgrade the system step by step to the latest release. I have guides for that:
- Upgrading Proxmox 5 to 6
- Upgrading Proxmox 6 to 7
I won't improvise; instead I'll follow the official upgrade guide (https://pve.proxmox.com/wiki/Upgrade_from_7_to_8), skipping the parts that are not relevant in my case. For example, I don't use Ceph.
A few notes before upgrading:
- Make sure you have backups of everything that the upgrade will affect. Don't forget to verify that you can actually restore from them.
- There will be a short service outage, since during the upgrade you will need to stop all virtual machines and reboot the hypervisor.
- There are two ways to upgrade Proxmox VE: upgrade the current hypervisor in place, or perform a clean installation of the new version and restore the virtual machine backups onto it.
If you want to move everything to a fresh installation, then in addition to backing up the virtual machines you will need to save the settings in the /etc/pve directory and the system files /etc/passwd, /etc/network/interfaces and /etc/resolv.conf. After installing the new version, it is enough to put these files back and restore the virtual machine backups.
If you are going to upgrade the current Proxmox VE installation, check the following points:
- You are running Proxmox VE 7.4 on all cluster nodes.
- If you use Ceph, first upgrade it to Ceph 17.2 Quincy before you start upgrading Proxmox itself. You can use these guides: Ceph Octopus to Pacific and Ceph Pacific to Quincy.
- If you use Proxmox Backup Server, upgrade it to version 3: Proxmox Backup Server 2 to 3 upgrade how-to.
- Just in case, make sure you have access to the server console in addition to SSH access.
- You will need at least 5 GB of free space on the root partition /.
Upgrading Proxmox 7 to 8
Just in case, update the system itself first:
# apt update && apt dist-upgrade
Run the script that checks readiness for the upgrade:
# pve7to8
Make sure there are no errors. If there are, they need to be fixed.
After the upgrade, the MAC address of the vmbr network bridge may change. If that is unacceptable for you and would cause problems, set a permanent MAC address in advance. First, look at the current addresses:
# ip -c link
Then specify the current MAC address of the bridge in the network interfaces configuration /etc/network/interfaces:
auto vmbr0
iface vmbr0 inet static
    address 10.20.1.2/24
    hwaddress ae:9d:46:49:4d:23
    # ... remaining settings
Update the system repository file /etc/apt/sources.list. Since the new Proxmox release is based on Debian 12 Bookworm, the new codename must be specified in place of the previous one, bullseye.
# cp /etc/apt/sources.list /etc/apt/sources.list.bak
# sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
Double-check the changes just in case:
# diff /etc/apt/sources.list /etc/apt/sources.list.bak
Update the package list:
# apt update
You will get a sizeable list of upgradable packages. If you like, you can review it:
# apt list --upgradable
Now we will start the actual package upgrade. Here it is important to make sure you have access to the server console. If you start the upgrade over SSH, make sure your session will not be cut off if the connection drops: run it inside screen or tmux, for example as shown below. If the upgrade is interrupted, there is a chance you will end up with a broken server, although that is not guaranteed. Either way you will get extra problems and hassle, so it is better to avoid this.
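For example, a minimal sketch (the session name is arbitrary):
# screen -S pve-upgrade
or, with tmux:
# tmux new -s pve-upgrade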
# apt dist-upgrade
After all packages have been downloaded, you will be shown detailed information about the upgrade.
Scroll to the bottom of the list and close it by pressing the q key. After that, various questions may appear about overwriting configuration files or restarting services. You can agree to everything and pick the options offered by default.
After the package upgrade finishes, reboot the Proxmox server and make sure the upgrade went through successfully.
Errors during the Proxmox VE upgrade
A very common error during a Proxmox VE upgrade, no matter which version, is the following:
Upgrade wants to remove package ‘proxmox-ve’.
It tells you that the proxmox-ve package is about to be removed. That does not exactly encourage you to continue, because it is not clear or obvious what this will lead to. This error is typically seen by those who installed Proxmox not from the installation ISO image but by converting an existing Debian system.
It can be solved as follows. Remove the linux-image-amd64 package or a similar one. The package name may differ slightly; some version digits may be appended to image. The exact package name can be looked up in the overall package list, roughly like this:
# dpkg -l | grep linux-image
The point is that this package, left over from the original Debian installation, has dependency conflicts with proxmox-ve, which is why the latter is offered for removal. Instead, remove the unneeded package:
# apt remove linux-image-amd64
After that, the Proxmox VE upgrade should proceed normally.
Conclusion
I have completed the upgrade of Proxmox VE to version 8 on a test hypervisor. No problems came up in the process. Nevertheless, I do not recommend upgrading production until at least version 8.1 is out; there is no point rushing these things. You do not have to upgrade at all if you do not need the new features; nothing bad will happen if you stay on the old version while it is still supported.