Proxmox VE Administration Guide
Release 6.3
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free
Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with
no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU Free Documentation License".
Contents

1 Introduction
1.1 Central Management
1.2 Flexible Storage
1.3 Integrated Backup and Restore
1.4 High Availability Cluster
1.5 Flexible Networking
1.6 Integrated Firewall
1.7 Hyper-converged Infrastructure
1.7.1 Benefits of a Hyper-Converged Infrastructure (HCI) with Proxmox VE
1.7.2 Hyper-Converged Infrastructure: Storage
1.8 Why Open Source
1.9 Your benefits with Proxmox VE
1.10 Getting Help
1.10.1 Proxmox VE Wiki
1.10.2 Community Support Forum
1.10.3 Mailing Lists
1.10.4 Commercial Support
1.10.5 Bug Tracker
1.11 Project History
1.12 Improving the Proxmox VE Documentation
1.13 Translating Proxmox VE
1.13.1 Translating with git
1.13.2 Translating without git
1.13.3 Testing the Translation
1.13.4 Sending the Translation

2 Installing Proxmox VE
2.1 System Requirements
2.1.1 Minimum Requirements, for Evaluation
2.1.2 Recommended System Requirements
2.1.3 Simple Performance Overview
2.1.4 Supported Web Browsers for Accessing the Web Interface
2.2 Prepare Installation Media
2.2.1 Prepare a USB Flash Drive as Installation Medium
2.2.2 Instructions for GNU/Linux
2.2.3 Instructions for macOS
2.2.4 Instructions for Windows
2.3 Using the Proxmox VE Installer
2.3.1 Advanced LVM Configuration Options
2.3.2 Advanced ZFS Configuration Options
2.3.3 ZFS Performance Tips
2.4 Install Proxmox VE on Debian

5 Cluster Manager
5.1 Requirements
5.2 Preparing Nodes
5.3 Create a Cluster
5.3.1 Create via Web GUI
5.3.2 Create via Command Line
5.3.3 Multiple Clusters In Same Network
5.4 Adding Nodes to the Cluster
5.4.1 Join Node to Cluster via GUI
5.4.2 Join Node to Cluster via Command Line
5.4.3 Adding Nodes With Separated Cluster Network

20 Bibliography
20.1 Books about Proxmox VE
20.2 Books about related technology
20.3 Books about related topics
Chapter 1
Introduction
Proxmox VE is a platform to run virtual machines and containers. It is based on Debian Linux, and completely
open source. For maximum flexibility, we implemented two virtualization technologies - Kernel-based Virtual
Machine (KVM) and container-based virtualization (LXC).
One main design goal was to make administration as easy as possible. You can use Proxmox VE on a
single node, or assemble a cluster of many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install Proxmox VE within minutes.
[Figure: the Proxmox VE software stack: user tools (qm, pvesm, pveum, ha-manager) and services (pveproxy, pvedaemon, pvestatd, pve-ha-lrm, pve-cluster) running on the host alongside VMs and their applications.]
1.1 Central Management
While many people start with a single node, Proxmox VE can scale out to a large set of clustered nodes.
The cluster stack is fully integrated and ships with the default installation.
Command Line
For advanced users who are used to the comfort of the Unix shell or Windows Powershell, Proxmox
VE provides a command line interface to manage all the components of your virtual environment. This
command line interface has intelligent tab completion and full documentation in the form of UNIX man
pages.
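As a quick illustration (a minimal sketch; the VM ID is hypothetical), the dedicated tools shown in the stack figure above can be used like this:
# qm list            # list all QEMU/KVM virtual machines on this node
# qm start 100       # start VM 100
# pvesm status       # show the status of all configured storages
# pveum user list    # list all users known to the access control system
# ha-manager status  # show the current HA manager status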
REST API
Proxmox VE uses a RESTful API. We choose JSON as primary data format, and the whole API is for-
mally defined using JSON Schema. This enables fast and easy integration for third party management
tools like custom hosting environments.
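For example, the same API can be queried locally with the pvesh command line tool; a minimal sketch:
# pvesh get /version --output-format json
# pvesh get /nodes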
Role-based Administration
You can define granular access for all objects (like VMs, storages, nodes, etc.) by using the role-based
user and permission management. This allows you to define privileges and helps you to control
access to objects. This concept is also known as access control lists: each permission specifies a
subject (a user or group) and a role (set of privileges) on a specific path.
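A minimal sketch of how this looks on the command line (the group name, user, and VM path are hypothetical):
# pveum groupadd developers
# pveum usermod joe@pve -group developers
# pveum aclmod /vms/100 -group developers -role PVEVMAdmin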
Authentication Realms
Proxmox VE supports multiple authentication sources like Microsoft Active Directory, LDAP, Linux PAM
standard authentication or the built-in Proxmox VE authentication server.
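Realms are configured in /etc/pve/domains.cfg. A hedged sketch of what an LDAP realm entry might look like (the realm name, server, and base DN are hypothetical):
ldap: company-ldap
        server1 ldap.example.com
        base_dn ou=People,dc=example,dc=com
        user_attr uid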
1.2 Flexible Storage
The Proxmox VE storage model is very flexible. Virtual machine images can either be stored on one or
several local storages, or on shared storage like NFS and SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage technologies available for Debian Linux.
One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without
any downtime, as all nodes in the cluster have direct access to VM disk images.
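For instance, a running VM can be moved to another cluster node with a single command (a minimal sketch; the VM ID and node name are hypothetical):
# qm migrate 100 node2 --online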
We currently support the following network storage types (a configuration example follows the list):
• iSCSI target
• NFS Share
• CIFS Share
• Ceph RBD
• GlusterFS
• LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
• ZFS
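As a sketch of how such a storage is defined from the shell (the storage ID, server address, and export path are hypothetical):
# pvesm add nfs iso-templates -path /mnt/pve/iso-templates -server 10.0.0.10 -export /space/iso-templates -content iso,vztmpl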
1.3 Integrated Backup and Restore
The integrated backup tool (vzdump) creates consistent snapshots of running containers and KVM guests.
It basically creates an archive of the VM or CT data, which includes the VM/CT configuration files.
KVM live backup works for all storage types, including VM images on NFS, CIFS, iSCSI LUN, and Ceph RBD.
The new backup format is optimized for storing VM backups quickly and effectively (sparse files, out of order
data, minimized I/O).
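A minimal example of a snapshot-mode backup from the command line (the VM ID and storage name are hypothetical):
# vzdump 100 --mode snapshot --storage local --compress zstd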
1.4 High Availability Cluster
A multi-node Proxmox VE HA Cluster enables the definition of highly available virtual servers. The Proxmox
VE HA Cluster is based on proven Linux HA technologies, providing stable and reliable HA services.
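For example, a VM is put under HA management with the ha-manager tool (a minimal sketch; the VM ID is hypothetical):
# ha-manager add vm:100 --state started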
1.5 Flexible Networking
Proxmox VE uses a bridged networking model. All VMs can share one bridge as if virtual network cables
from each guest were all plugged into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP configuration.
For further flexibility, VLANs (IEEE 802.1q) and network bonding/aggregation are possible. In this way it is
possible to build complex, flexible virtual networks for the Proxmox VE hosts, leveraging the full power of the
Linux network stack.
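As an illustration, a typical bridge definition in /etc/network/interfaces might look like this (a hedged sketch; the physical interface name and addresses are hypothetical):
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0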
1.6 Integrated Firewall
The integrated firewall allows you to filter network packets on any VM or container interface. Common sets
of firewall rules can be grouped into “security groups”.
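A security group is defined in the cluster-wide firewall configuration /etc/pve/firewall/cluster.fw; a minimal sketch (the group name is hypothetical):
[group webserver]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443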
1.7 Hyper-converged Infrastructure
Proxmox VE is a virtualization platform that tightly integrates compute, storage and networking resources,
manages highly available clusters, backup/restore as well as disaster recovery. All components are software-
defined and compatible with one another.
Therefore it is possible to administrate them like a single system via the centralized web management inter-
face. These capabilities make Proxmox VE an ideal choice to deploy and manage an open source hyper-
converged infrastructure.
1.7.1 Benefits of a Hyper-Converged Infrastructure (HCI) with Proxmox VE
A hyper-converged infrastructure (HCI) is especially useful for deployments in which a high infrastructure
demand meets a low administration budget, for distributed setups such as remote and branch office environ-
ments or for virtual private and public clouds.
HCI provides the following advantages:
• Scalability: seamless expansion of compute, network and storage devices (i.e. scale up servers and
storage quickly and independently from each other).
• Low cost: Proxmox VE is open source and integrates all components you need such as compute, storage,
networking, backup, and management center. It can replace an expensive compute/storage infrastructure.
• Data protection and efficiency: services such as backup and disaster recovery are integrated.
1.7.2 Hyper-Converged Infrastructure: Storage
Proxmox VE has tightly integrated support for deploying a hyper-converged storage infrastructure. You can,
for example, deploy and manage the following two storage technologies by using the web interface only:
• Ceph: a self-healing and self-managing shared storage system that is reliable and highly scalable. Check
out how to manage Ceph services on Proxmox VE nodes in Chapter 8.
• ZFS: a combined file system and logical volume manager, with extensive protection against data corruption,
various RAID modes, and fast and cheap snapshots, among other features. Find out how to leverage the power
of ZFS on Proxmox VE nodes in Section 3.8.
Besides these, Proxmox VE also supports integrating a wide range of additional storage technologies. You
can find out about them in the Storage Manager chapter (Chapter 7).
1.8 Why Open Source
Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux Distribution. The source code of
Proxmox VE is released under the GNU Affero General Public License, version 3. This means that you are
free to inspect the source code at any time or contribute to the project yourself.
At Proxmox we are committed to using open source software whenever possible. Using open source software
guarantees full access to all functionality, as well as high security and reliability. We think that everybody
should have the right to access the source code of software to run it, build on it, or submit changes back
to the project. Everybody is encouraged to contribute, while Proxmox ensures the product always meets
professional quality criteria.
Open source software also helps to keep your costs low and makes your core infrastructure independent
from a single vendor.
1.9 Your benefits with Proxmox VE
• No vendor lock-in
• Linux kernel
• REST API
1.10 Getting Help
1.10.1 Proxmox VE Wiki
The primary source of information is the Proxmox VE Wiki. It combines the reference documentation with
user contributed content.
1.10.2 Community Support Forum
We always encourage our users to discuss and share their knowledge using the Proxmox VE Community
Forum. The forum is moderated by the Proxmox support team, and has a large user base spread all over
the world. Needless to say, such a large forum is a great place to get information.
1.10.3 Mailing Lists
This is a fast way to communicate with the Proxmox VE community via email.
Proxmox VE is fully open source and contributions are welcome! The primary communication channel for
developers is the Proxmox VE development mailing list.
1.10.4 Commercial Support
Proxmox Server Solutions GmbH also offers enterprise support, available as Proxmox VE Subscription Ser-
vice Plans. All users with a subscription get access to the Proxmox VE Enterprise Repository, and, with
a Basic, Standard or Premium subscription, also to the Proxmox Customer Portal. The customer portal
provides help and support with guaranteed response times from the Proxmox VE developers.
For volume discounts, or more information in general, please contact [email protected].
1.10.5 Bug Tracker
Proxmox runs a public bug tracker at https://bugzilla.proxmox.com. If an issue appears, file your report there.
An issue can be a bug as well as a request for a new feature or enhancement. The bug tracker helps to keep
track of the issue and will send a notification once it has been solved.
1.11 Project History
The project started in 2007, followed by a first stable version in 2008. At the time we used OpenVZ for
containers, and KVM for virtual machines. The clustering features were limited, and the user interface was
simple (server generated web page).
But we quickly developed new features using the Corosync cluster stack, and the introduction of the new
Proxmox cluster file system (pmxcfs) was a big step forward, because it completely hides the cluster com-
plexity from the user. Managing a cluster of 16 nodes is as simple as managing a single node.
We also introduced a new REST API, with a complete declarative specification written in JSON-Schema.
This enabled other people to integrate Proxmox VE into their infrastructure, and made it easy to provide
additional services.
Also, the new REST API made it possible to replace the original user interface with a modern HTML5
application using JavaScript. We also replaced the old Java-based VNC console code with noVNC. So
you only need a web browser to manage your VMs.
Support for various storage types was another big task. Notably, Proxmox VE was the first distribution to
ship ZFS on Linux by default, in 2014. Another milestone was the ability to run and manage Ceph storage on
the hypervisor nodes. Such setups are extremely cost effective.
When we started we were among the first companies providing commercial support for KVM. The KVM
project itself continuously evolved, and is now a widely used hypervisor. New features arrive with each
release. We developed the KVM live backup feature, which makes it possible to create snapshot backups on
any storage type.
The most notable change with version 4.0 was the move from OpenVZ to LXC. Containers are now deeply
integrated, and they can use the same storage and network features as virtual machines.
1.12 Improving the Proxmox VE Documentation
Contributions and improvements to the Proxmox VE documentation are always welcome. There are several
ways to contribute.
If you find errors or other room for improvement in this documentation, please file a bug at the Proxmox bug
tracker to propose a correction.
If you want to propose new content, choose one of the following options:
• The wiki: For specific setups, how-to guides, or tutorials the wiki is the right option to contribute.
• The reference documentation: For general content that will be helpful to all users please propose your con-
tribution for the reference documentation. This includes all information about how to install, configure, use,
and troubleshoot Proxmox VE features. The reference documentation is written in the asciidoc format. To
edit the documentation you need to clone the git repository at git://git.proxmox.com/git/pve-docs
then follow the README.adoc document.
Note
If you are interested in working on the Proxmox VE codebase, the Developer Documentation wiki article
will show you where to start.
1.13 Translating Proxmox VE
The Proxmox VE user interface is in English by default. However, thanks to the contributions of the community,
translations to other languages are also available. We welcome any support in adding new languages,
translating the latest features, and improving incomplete or inconsistent translations.
We use gettext for the management of the translation files. Tools like Poedit offer a nice user interface to edit
the translation files, but you can use whatever editor you’re comfortable with. No programming knowledge is
required for translating.
1.13.1 Translating with git
The language files are available as a git repository. If you are familiar with git, please contribute according
to our Developer Documentation.
You can create a new translation by doing the following (replace <LANG> with the language ID):
# make init-<LANG>.po
Or you can edit an existing translation, using the editor of your choice:
# poedit <LANG>.po
1.13.2 Translating without git
Even if you are not familiar with git, you can help translate Proxmox VE. To start, you can download the
language files here. Find the language you want to improve, then right click on the "raw" link of this language
file and select Save Link As.... Make your changes to the file, and then send your final translation directly
to office(at)proxmox.com, together with a signed contributor license agreement.
1.13.3 Testing the Translation
In order for the translation to be used in Proxmox VE, you must first translate the .po file into a .js file.
You can do this by invoking the following script, which is located in the same repository:
# ./po2js.pl -t pve xx.po >pve-lang-xx.js
The resulting file pve-lang-xx.js can then be copied to the directory /usr/share/pve-i18n on
your Proxmox VE server, in order to test it out.
Alternatively, you can build a deb package by running the following command from the root of the repository:
# make deb
Important
For either of these methods to work, you need to have the following perl packages installed on your
system. For Debian/Ubuntu:
1.13.4 Sending the Translation
You can send the finished translation (.po file) to the Proxmox team at the address office(at)proxmox.com,
along with a signed contributor license agreement. Alternatively, if you have some developer experience,
you can send it as a patch to the Proxmox VE development mailing list. See the Developer Documentation.
Chapter 2
Installing Proxmox VE
Proxmox VE is based on Debian. This is why the install disk images (ISO files) provided by Proxmox include
a complete Debian system (Debian 10 Buster for Proxmox VE version 6.x) as well as all necessary Proxmox
VE packages.
The installer will guide you through the setup, allowing you to partition the local disk(s), apply basic system
configurations (for example, timezone, language, network), and install all required packages. This process
should not take more than a few minutes. Installing with the provided ISO is the recommended method for
new and existing users.
Alternatively, Proxmox VE can be installed on top of an existing Debian system. This option is only recom-
mended for advanced users because detailed knowledge about Proxmox VE is required.
2.1 System Requirements
We recommend using high-quality server hardware when running Proxmox VE in production. To further
decrease the impact of a failed host, you can run Proxmox VE in a cluster with highly available (HA) virtual
machines and containers.
Proxmox VE can use local storage (DAS), SAN, NAS, and distributed storage like Ceph RBD. For details,
see the storage chapter (Chapter 7).
2.1.1 Minimum Requirements, for Evaluation
These minimum requirements are for evaluation purposes only and should not be used in production.
• Hard drive
2.1.2 Recommended System Requirements
• Memory: Minimum 2 GB for the OS and Proxmox VE services, plus designated memory for guests. For
Ceph and ZFS, additional memory is required; approximately 1 GB of memory for every TB of used storage.
• Fast and redundant storage; best results are achieved with SSDs.
• OS storage: Use a hardware RAID with battery-protected write cache (“BBU”) or non-RAID with ZFS
(optional SSD for ZIL).
• VM storage:
– For local storage, use either a hardware RAID with battery-backed write cache (BBU) or non-RAID for
ZFS and Ceph. Neither ZFS nor Ceph is compatible with a hardware RAID controller.
– Shared and distributed storage is possible.
• Redundant (Multi-)Gbit NICs, with additional NICs depending on the preferred storage technology and
cluster setup.
• For PCI(e) passthrough the CPU needs to support the VT-d/AMD-d flag.
2.1.3 Simple Performance Overview
To get an overview of the CPU and hard disk performance on an installed Proxmox VE system, run the
included pveperf tool.
Note
This is just a very quick and general benchmark. More detailed tests are recommended, especially re-
garding the I/O performance of your system.
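For example, to benchmark the disk holding the default VM storage directory (the path here is just an example):
# pveperf /var/lib/vz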
2.1.4 Supported Web Browsers for Accessing the Web Interface
To access the web-based user interface, we recommend using one of the following browsers:
• Firefox, a release from the current year, or the latest Extended Support Release
When accessed from a mobile device, Proxmox VE will show a lightweight, touch-based interface.
2.2 Prepare Installation Media
• A raw sector (IMG) image file ready to copy to a USB flash drive (USB stick).
Using a USB flash drive to install Proxmox VE is the recommended way because it is the faster option.
2.2.1 Prepare a USB Flash Drive as Installation Medium
Note
Do not use UNetbootin. It does not work with the Proxmox VE installation image.
Important
Make sure that the USB flash drive is not mounted and does not contain any important data.
2.2.2 Instructions for GNU/Linux
On Unix-like operating systems use the dd command to copy the ISO image to the USB flash drive. First find
the correct device name of the USB flash drive (see below). Then run the dd command:
# dd bs=1M conv=fdatasync if=./proxmox-ve_*.iso of=/dev/XYZ
Note
Be sure to replace /dev/XYZ with the correct device name and adapt the input filename (if ) path.
Caution
Be very careful, and do not overwrite the wrong disk!
There are two ways to find out the name of the USB flash drive. The first one is to compare the last lines of
the dmesg command output before and after plugging in the flash drive. The second way is to compare the
output of the lsblk command. Open a terminal and run:
# lsblk
Then plug in your USB flash drive and run the command again:
# lsblk
A new device will appear. This is the one you want to use. To be on the extra safe side check if the reported
size matches your USB flash drive.
2.2.3 Instructions for macOS
Open the terminal and convert the .iso file to .img using the convert option of hdiutil:
# hdiutil convert -format UDRW -o proxmox-ve_*.dmg proxmox-ve_*.iso
Tip
macOS tends to automatically add .dmg to the output file name.
To get the current list of devices run the command:
# diskutil list
Now insert the USB flash drive and run this command again to determine which device node has been
assigned to it. (e.g., /dev/diskX).
# diskutil list
# diskutil unmountDisk /dev/diskX
Note
Replace X with the disk number from the last command.
Then copy the ISO image to the USB flash drive:
# sudo dd if=proxmox-ve_*.dmg of=/dev/rdiskX bs=1m
Note
rdiskX, instead of diskX, in the last command is intended. It will increase the write speed.
2.2.4 Instructions for Windows
Using Etcher
Etcher works out of the box. Download Etcher from https://etcher.io. It will guide you through the process of
selecting the ISO and your USB Drive.
Using Rufus
Rufus is a more lightweight alternative, but you need to use the DD mode to make it work. Download Rufus
from https://rufus.ie/. Either install it or use the portable version. Select the destination drive and the Proxmox
VE ISO file.
Important
Once you press Start, you have to click No on the dialog asking to download a different version of GRUB.
In the next dialog select the DD mode.
2.3 Using the Proxmox VE Installer
The installer ISO image includes the following:
• The Proxmox VE installer, which partitions the local disk(s) with ext4, xfs or ZFS and installs the operating
system
• Complete toolset for administering virtual machines, containers, the host system, clusters and all neces-
sary resources
Note
All existing data on the drives selected for installation will be removed during the installation process. The
installer does not add boot menu entries for other operating systems.
Please insert the prepared installation media Section 2.2 (for example, USB flash drive or CD-ROM) and
boot from it.
Tip
Make sure that booting from the installation medium (for example, USB) is enabled in your server's firmware
settings.
After choosing the correct entry (e.g. Boot from USB) the Proxmox VE menu will be displayed and one of
the following options can be selected:
Install Proxmox VE
Starts the normal installation.
Tip
It’s possible to use the installation wizard with a keyboard only. Buttons can be clicked by pressing the
ALT key combined with the underlined character from the respective button. For example, ALT + N to
press a Next button.
Rescue Boot
With this option you can boot an existing installation. It searches all attached hard disks. If it finds
an existing installation, it boots directly into that disk using the Linux kernel from the ISO. This can be
useful if there are problems with the boot block (grub) or the BIOS is unable to read the boot block
from the disk.
Test Memory
Runs memtest86+. This is useful to check if the memory is functional and free of errors.
After selecting Install Proxmox VE and accepting the EULA, the prompt to select the target hard disk(s) will
appear. The Options button opens the dialog to select the target file system.
The default file system is ext4. The Logical Volume Manager (LVM) is used when ext4 or xfs is selected.
Additional options to restrict LVM space can be set (see below).
Proxmox VE can be installed on ZFS. As ZFS offers several software RAID levels, this is an option for
systems that don’t have a hardware RAID controller. The target disks must be selected in the Options
dialog. More ZFS specific settings can be changed under Advanced Options (see below).
Warning
ZFS on top of any hardware RAID is not supported and can result in data loss.
The next page asks for basic configuration options like the location, the time zone, and keyboard layout. The
location is used to select a download server close by to speed up updates. The installer usually auto-detects
these settings. They only need to be changed in the rare case that auto detection fails or a different keyboard
layout should be used.
Next, the password of the superuser (root) and an email address need to be specified. The password must
consist of at least 5 characters. It's highly recommended to use a stronger password. Some guidelines are:
• Avoid character repetition, keyboard patterns, common dictionary words, letter or number sequences, user-
names, relative or pet names, romantic links (current or past), and biographical information (for example
ID numbers, ancestors’ names or dates).
The email address is used to send notifications to the system administrator. For example:
• Information about available package updates.
• Error messages from periodic cron jobs.
The last step is the network configuration. Please note that during installation you can either use an IPv4 or
IPv6 address, but not both. To configure a dual stack node, add additional IP addresses after the installation.
The next step shows a summary of the previously selected options. Re-check every setting and use the
Previous button if a setting needs to be changed. To accept, press Install. The installation starts
formatting disks and copying packages to the target. Please wait until this step has finished; then remove the
installation medium and restart your system.
If the installation failed, check out specific errors on the second TTY (CTRL + ALT + F2) and ensure that the
system meets the minimum requirements (Section 2.1.1). If the installation is still not working, look at the how
to get help chapter (Section 1.10).
Further configuration is done via the Proxmox web interface. Point your browser to the IP address given
during installation (https://youripaddress:8006).
Note
Default login is "root" (realm PAM) and the root password is defined during the installation process.
2.3.1 Advanced LVM Configuration Options
The installer creates a Volume Group (VG) called pve, and additional Logical Volumes (LVs) called root,
data, and swap. To control the size of these volumes use:
hdsize
Defines the total hard disk size to be used. This way you can reserve free space on the hard disk for
further partitioning (for example for an additional PV and VG on the same hard disk that can be used
for LVM storage).
swapsize
Defines the size of the swap volume. The default is the size of the installed memory, minimum 4 GB
and maximum 8 GB. The resulting value cannot be greater than hdsize/8.
Note
If set to 0, no swap volume will be created.
maxroot
Defines the maximum size of the root volume, which stores the operating system. The maximum
limit of the root volume size is hdsize/4.
maxvz
Defines the maximum size of the data volume. The actual size of the data volume is:
datasize = hdsize - rootsize - swapsize - minfree
Where datasize cannot be bigger than maxvz.
Note
In case of LVM thin, the data pool will only be created if datasize is bigger than 4GB.
Note
If set to 0, no data volume will be created and the storage configuration will be adapted accordingly.
minfree
Defines the amount of free space left in the LVM volume group pve. With more than 128 GB of storage
available, the default is 16 GB; otherwise hdsize/8 will be used.
Note
LVM requires free space in the VG for snapshot creation (not required for lvmthin snapshots).
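As a worked example of the sizing rules above (the disk size is hypothetical): on a 100 GB hdsize with 8 GB of installed memory, swapsize defaults to 8 GB, minfree to hdsize/8 = 12.5 GB, and, assuming the root volume takes its maximum of hdsize/4 = 25 GB, the data volume gets datasize = 100 - 25 - 8 - 12.5 = 54.5 GB.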
2.3.2 Advanced ZFS Configuration Options
The installer creates the ZFS pool rpool. No swap space is created, but you can reserve some unpartitioned
space on the install disks for swap. You can also create a swap zvol after the installation, although this can
lead to problems (see ZFS swap notes).
ashift
Defines the ashift value for the created pool. The ashift needs to be set at least to the sector
size of the underlying disks (2 to the power of ashift is the sector size), or of any disk which might be
put in the pool later (for example, the replacement of a defective disk). For instance, disks with 4 KiB
sectors need ashift=12, since 2^12 = 4096.
compress
Defines whether compression is enabled for rpool.
checksum
Defines which checksumming algorithm should be used for rpool.
copies
Defines the copies parameter for rpool. Check the zfs(8) manpage for the semantics, and why
this does not replace redundancy on disk-level.
hdsize
Defines the total hard disk size to be used. This is useful to save free space on the hard disk(s) for
further partitioning (for example to create a swap-partition). hdsize is only honored for bootable
disks, that is only the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].
2.3.3 ZFS Performance Tips
ZFS works best with a lot of memory. If you intend to use ZFS, make sure to have enough RAM available for
it. A good calculation is 4 GB plus 1 GB of RAM for each TB of raw disk space.
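For example, a pool built from 8 TB of raw disk space would call for roughly 4 GB + 8 × 1 GB = 12 GB of RAM dedicated to ZFS.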
ZFS can use a dedicated drive as write cache, called the ZFS Intent Log (ZIL). Use a fast drive (SSD) for it.
It can be added after installation with the following command:
# zpool add <pool-name> log </dev/path_to_fast_ssd>
2.4 Install Proxmox VE on Debian
Proxmox VE ships as a set of Debian packages and can be installed on top of a standard Debian installation.
After configuring the repositories (Section 3.1), you need to run the following commands:
# apt-get update
# apt-get install proxmox-ve
Installing on top of an existing Debian installation looks easy, but it presumes that the base system has been
installed correctly and that you know how you want to configure and use the local storage. You also need to
configure the network manually.
In general, this is not trivial, especially when LVM or ZFS is used.
A detailed step by step how-to can be found on the wiki.
Chapter 3
Host System Administration
The following sections will focus on common virtualization tasks and explain the Proxmox VE specifics re-
garding the administration and management of the host machine.
Proxmox VE is based on Debian GNU/Linux with additional repositories to provide the Proxmox VE related
packages. This means that the full range of Debian packages is available, including security updates and
bug fixes. Proxmox VE provides its own Linux kernel, based on the Ubuntu kernel. It has all the necessary
virtualization and container features enabled and includes ZFS and several extra hardware drivers.
For other topics not included in the following sections, please refer to the Debian documentation. The De-
bian Administrator’s Handbook is available online, and provides a comprehensive introduction to the Debian
operating system (see [Hertzog13]).
3.1 Package Repositories
Proxmox VE uses APT as its package management tool, like any other Debian-based system. Repositories
are defined in the file /etc/apt/sources.list and in .list files placed in /etc/apt/sources.list.d/.
Each line defines a package repository. The preferred source must come first. Empty lines are ignored. A #
character anywhere on a line marks the remainder of that line as a comment. The available packages from a
repository are acquired by running apt-get update. Updates can be installed directly using apt-get,
or via the GUI.
File /etc/apt/sources.list
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib

# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
Proxmox VE Enterprise Repository
This is the default, stable, and recommended repository, available to all Proxmox VE subscription users. It
contains the most stable packages and is suitable for production use. The pve-enterprise repository
is enabled by default:
File /etc/apt/sources.list.d/pve-enterprise.list
deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
The root@pam user is notified via email about available updates. Click the Changelog button in the GUI to
see more details about the selected update.
You need a valid subscription key to access the pve-enterprise repository. Different support levels are
available. Further details can be found at https://www.proxmox.com/en/proxmox-ve/pricing.
Note
You can disable this repository by commenting out the above line using a # (at the start of the
line). This prevents error messages if you do not have a subscription key. Please configure the
pve-no-subscription repository in that case.
Proxmox VE No-Subscription Repository
This is the recommended repository for testing and non-production use. Its packages are not as heavily
tested and validated. You don't need a subscription key to access the pve-no-subscription reposi-
tory.
We recommend to configure this repository in /etc/apt/sources.list.
File /etc/apt/sources.list
deb http://ftp.debian.org/debian buster main contrib
deb http://ftp.debian.org/debian buster-updates main contrib

# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

# security updates
deb http://security.debian.org/debian-security buster/updates main contrib
Proxmox VE Test Repository
This repository contains the latest packages and is primarily used by developers to test new features. To
configure it, add the following line to /etc/apt/sources.list:
deb http://download.proxmox.com/debian/pve buster pvetest
Warning
The pvetest repository should (as the name implies) only be used for testing new features or
bug fixes.
Note
Ceph Octopus (15.2) was declared stable with Proxmox VE 6.3 and is the most recent Ceph release
supported. It will continue to get updates for the remaining lifetime of the 6.x release.
Ceph Octopus Repository
This repository holds the main Proxmox VE Ceph Octopus packages. They are suitable for production. Use
this repository if you run the Ceph client or a full Ceph cluster on Proxmox VE.
File /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-octopus buster main
Ceph Octopus Test Repository
This Ceph repository contains the Ceph packages before they are moved to the main repository. It is used
to test new Ceph releases on Proxmox VE.
File /etc/apt/sources.list.d/ceph.list
deb http://download.proxmox.com/debian/ceph-octopus buster test
Ceph Nautilus Repository
Note
Ceph Nautilus (14.2) is the older supported Ceph version, introduced with Proxmox VE 6.0. It will continue
to get updates until end of Q2 2021, so you will eventually need to upgrade to Ceph Octopus.
This repository holds the main Proxmox VE Ceph Nautilus packages. They are suitable for production. Use
this repository if you run the Ceph client or a full Ceph cluster on Proxmox VE.