Proxmox Home Lab Guide 2024
This Proxmox home lab e-book contains a compilation of posts I have created in working with Proxmox in the
home lab. While not an exhaustive guide to Proxmox, the posts compiled in this guide cover the basic installation
and configuration of Proxmox, including networking, storage, security, monitoring, and other topics. I have also
included a section on installing pfSense on Proxmox and other specialized lab scenarios.
Since many will be coming from a VMware vSphere background, the guide starts with installing Proxmox in a
nested environment running on VMware vSphere. We conclude by coming full circle and installing nested ESXi inside
of Proxmox.
No portion of this book may be reproduced in any form without written permission from the publisher or author,
except as permitted by U.S. copyright law.
This publication is designed to provide accurate and authoritative information in regard to the subject matter
covered. It is distributed with the understanding that neither the author nor the publisher is engaged in rendering
any type of professional services. While the publisher and author have used their best efforts in preparing this
book, they make no representations or warranties with respect to the accuracy or completeness of the contents
of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose.
No warranty may be created or extended by sales representatives or written sales materials. The technical
advice and strategies contained herein may not be suitable for your situation. You should consult with a technical
professional when appropriate in your environment. Neither the publisher nor the author shall be liable for any
loss of profit or any other commercial damages, including but not limited to special, incidental, consequential,
personal, or other damages.
Table of Contents
Nested Proxmox VMware installation in ESXi
Proxmox 8.1 New Features and Download with Software-Defined Network and Secure Boot
Upgrade Proxmox Host to 8.1: Tutorial & Steps
Proxmox Networking for VMware vSphere admins
Proxmox Update No Subscription Repository Configuration
Proxmox VLAN Configuration: Management IP, Bridge, and Virtual Machines
Proxmox Management Interface VLAN tagging configuration
Proxmox Create ISO Storage Location – disk space error
Proxmox iSCSI target to Synology NAS
Proxmox add disk storage space – NVMe drive
Proxmox cluster installation and configuration
Mastering Ceph Storage Configuration in Proxmox 8 Cluster
CephFS Configuration in Proxmox Step-by-Step
Proxmox HA Cluster Configuration for Virtual Machines
Proxmox firewall setup and configuration
Proxmox Container vs VM features and configuration
Proxmox Containers with Fedora CoreOS Install
Proxmox Helper Scripts you can use
Proxmox scripts PowerShell Ansible and Terraform
Proxmox Backup Server: Ultimate Install, Backup, and Restore Guide
Proxmox SDN Configuration Step-by-Step
pfSense Proxmox Install Process and Configuration
Nested ESXi install in Proxmox: Step-by-Step
Nested Proxmox VMware installation in ESXi
January 13, 2022
Proxmox
In working with clients and different environments, you will see many different hypervisors used across the
landscape of enterprise organizations. While I recommend VMware vSphere to customers for business-critical enterprise
workloads, there are use cases where I see other hypervisors used. Proxmox is a great free, open-source hypervisor
that is also developed for enterprise applications. I also know of many in the community running
Proxmox in their home lab environment. If you are like me and like to play around with technology, hypervisors, and other
cool geeky stuff, you probably load a lot of different solutions in the lab. Let's take a look at a nested Proxmox VMware installation
in ESXi and see how you can easily spin up a Proxmox host in a vSphere VM.
What is Proxmox?
Proxmox is easily administered using a rich, fully-featured web interface that looks and feels nice. While, in
my opinion, it is not on par with the vSphere client in look and feel, it is quite nice and does the job needed to administer the Proxmox
environment.
Proxmox VE is an open-source hypervisor platform for enterprise virtualization. It provides many features needed to run
production workloads, including virtual machines, containers, software-defined storage, networking, clustering, and other
capabilities out-of-the-box. It is based on Linux, so you get the pure Linux experience for virtualization, containers, and
other facets. Note some of the benefits:
Open-source software
No vendor lock-in
Linux kernel
Fast and easy installation
Easy-to-use with the intuitive web-based management interface
Low administration costs and simple deployment
Huge active community
You will mount the ISO to your virtual machine in VMware vSphere like you would for any other OS installation. Create a new
VMware vSphere virtual machine with the details appropriate for your environment.
Next, make sure to expose hardware-assisted virtualization to the guest OS for your soon-to-be Proxmox installation. As
most of us are familiar with in our nested ESXi labs, this is a simple checkbox in the properties of your VMware ESXi virtual
machine under the CPU settings.
Exposing CPU hardware virtualization to the guest OS
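If you prefer to script this rather than use the checkbox, the setting corresponds to a single line in the VM's configuration file. A minimal sketch from the ESXi shell, assuming the VM is powered off and using an example datastore path:
# expose hardware-assisted virtualization (Intel VT-x / AMD-V) to the guest OS
echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/proxmox-ve/proxmox-ve.vmx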
After booting from the ISO, the Proxmox VE 7.1 installation begins. Select to Install Proxmox VE.
Booting the Proxmox VE 7.1 installer
Next, you can customize the disk partition layout if you choose. However, for my nested Proxmox VMware installation, I am
accepting the defaults.
Select the disk partitioning to be used with the Proxmox VE 7.1 installation
Configure the password for your root account. Also, Proxmox has you enter an email address.
Set the administrator password and email address
Finally, we come to the Summary screen. Here, review the configuration and validate your settings. Then, click Install.
Summary of the Proxmox VE 7.1 installation
After finishing the installation, the Proxmox server will reboot. Below is the boot screen captured as it reboots from the
installation.
Proxmox VE 7.1 boots as a VMware ESXi VM
Finally, we are logged into the Proxmox web GUI using root and the password configured during the installation. Overall,
the nested Proxmox VMware installation in ESXi was straightforward and easy. If you want to play around with Proxmox in
a nested configuration, VMware vSphere provides a great way to do this using the basic functionality we have used for
quite some time with nested ESXi installations.
Logged into the Proxmox VE 7.1 web interface
Wrapping Up
Proxmox is a cool hypervisor that provides a lot of features in an open-source, freely available download. The latest
Proxmox VE 7.1 release has a lot of out-of-the-box features and can be used to run production workloads. If you want to
play around with Proxmox, running the hypervisor inside a nested virtual machine in VMware ESXi is a great way to gain
experience with installing, operating, troubleshooting, and other aspects of the virtualization solution.
You can learn more about Proxmox from the official Proxmox website.
Proxmox 8.1 New Features and Download with Software-Defined Network and Secure Boot
The Proxmox 8.1 hypervisor has been released with great new features. The official information and documentation show it
is a worthy upgrade for Proxmox 8 systems. Highlights include new software-defined network (SDN) features, secure boot,
flexible notifications, and other new improvements. Let’s dive into this release.
Table of contents
Software-Defined Networking in Proxmox VE 8.1
Enhancing Security with Secure Boot Compatibility
Introducing a Flexible Notification System support
Kernel and Software Updates: Staying Ahead with Proxmox VE 8.1
Comprehensive Support for Ceph Versions
Simplifying Virtual Machine Management
Download and Community Support
Proxmox is Open Source with Professional Support available
Great for home labs
Wrapping up new Proxmox VE 8.1 features
SDN in Proxmox VE 8.1 lets you create virtual zones and networks (VNets), so you can manage and control complex
networking configurations efficiently, right from the web interface. With this new feature, you can handle complex overlay
networks and enhance multi-tenancy setups.
Proxmox VE 8.1 now includes a signed shim bootloader, making it compliant with most hardware UEFI implementations.
This feature is a great step forward in safeguarding virtualized data centers.
Efi secure boot enabled in proxmox 8.1
The new notification system supports diverse notification channels, including the local Postfix MTA, Gotify servers, and authenticated SMTP servers. The
new granular control over notifications enhances system monitoring and the ability to respond to system events.
New notification system support
The updated Linux kernel and software stack further enhance virtualization performance and storage support for virtualization tasks.
New linux kernel update with proxmox 8.1
Also, attaching a VirtIO driver ISO image is now more straightforward, as it is directly integrated into the VM creation wizard,
taking the heavy lifting out of this process.
For enterprise users, Proxmox Server Solutions GmbH offers subscription-based support, ensuring access to tested
updates and professional assistance.
The new Proxmox 8.1 features make it an even more appealing choice for running your critical self-hosted services. I have
been running Proxmox in the home lab for a few years now alongside other hypervisors like vSphere. It is a great solution
that allows you to run VMs and LXC containers without issue.
The web UI is fully-featured, and you can easily get to everything you need in the navigation links in the browser.
For me, I have had no major issues to report, with great CPU performance and support for most project solutions I have
installed. You can also pass through your GPUs, such as AMD and NVIDIA graphics cards. If you want to run a Docker
container host, Proxmox makes for a great underlying hypervisor solution that you can also cluster with multiple hosts for
HA, migration, and scalability purposes.
The Proxmox Backup Server is also free to run and backs up all your critical VM workloads. VM templates are available for
quickly deploying various operating systems from the web-based console.
Upgrade Proxmox Host to 8.1: Tutorial & Steps
With the release of Proxmox 8.1, you may be itching to update your Proxmox host in the home lab or production. Let's look
at the steps to upgrade your Proxmox host to Proxmox 8.1. In the example below, I will be upgrading an 8.0.3 host that I
have running to 8.1.
Table of contents
New features
No enterprise subscription prerequisites
Proxmox 8.1 upgrade steps from the GUI
Steps to upgrade from Proxmox 7.4 to Proxmox 8.1
Mini PC running Proxmox
New features
There are many new features to speak of in Proxmox 8.1. I just uploaded a post covering the new features. However, as a
quick overview, the major new features include:
Software-defined networking
Secure boot
New bulk actions
Upgraded Linux kernel
A new flexible notification system
Upgraded Ceph Reef version
No enterprise subscription prerequisites
If you do not have an enterprise subscription, make sure your repositories point to the no-subscription locations before upgrading. With Proxmox 8, the files to check and the Ceph repository change are:
#/etc/apt/sources.list.d/pve-enterprise.list
#/etc/apt/sources.list.d/ceph.list
From: deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
To: deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription
First, click your Proxmox host in the GUI. Navigate to System > Updates > Refresh. When you click Refresh, it runs an
“apt-get update”.
Refresh updates after changing the repositories
You will see the Task viewer display the status of the apt-get update.
The status of the apt get update from the GUI
After refreshing the updates, you can click the Upgrade button.
Kicking off the upgrade from the proxmox GUI
It will launch another browser window displaying the prompt for you to enter Y to confirm you want to continue the upgrade
process.
Press y to continue the upgrade process
After the upgrade process is complete, you will see a note that a new kernel was installed and a reboot is required to
load the new kernel. Here, I am typing reboot from the window.
Reboot after the proxmox 8.1 upgrade and the kernel upgrade
First, we refresh the updates after we have updated the repository URLs. To do that, run the following command:
apt update
Running the apt update command to refresh the available updates
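Then run the actual upgrade. This is the same dist-upgrade command used in the no-subscription repository section later in this guide:
apt dist-upgrade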
After the upgrade is successful from the command line, if you look at your Proxmox host summary, you will see it has
upgraded to 8.1.3, but the Linux kernel is still at version 6.2. So, we need to reboot.
Before we reboot the kernel still shows 6.2
reboot
Running the reboot command to reboot proxmox and install the new kernel
Now, we can check the kernel version again and we see the Linux 6.5 kernel has been installed.
After the reboot the new linux 6.5 kernel has been installed
If you are upgrading from Proxmox VE 7.4 with Ceph, you will first need to upgrade Ceph from Pacific to Quincy. The next step involves upgrading Proxmox VE from version
7.4 to 8.1. In the last step, once you have Proxmox VE 8.1 running, you will upgrade your Ceph installation to Reef.
Refer to the official Proxmox documentation for the details of those specific steps.
How do I upgrade to Proxmox VE 8.1?
Upgrading to Proxmox VE 8.1 can be achieved through the 'apt' command line tool. It's important to make sure that your
current system is up to date before starting the upgrade. Detailed steps and guidance are available in the Proxmox VE
documentation.
Can I migrate virtual machines between Proxmox VE 8.1 hosts?
Yes, Proxmox VE 8.1 supports migrating virtual machines. You can use Proxmox's built-in tools to move VMs between
hosts, even across different versions, with minimal downtime.
How does the new SDN feature in Proxmox VE 8.1 impact network configuration?
The software-defined network (SDN) feature in Proxmox VE 8.1 allows for more adaptable network infrastructure
configurations. You can now manage complex networking configurations more effectively, including creating virtual zones
for improved network isolation.
Proxmox VE 8.1 continues to offer a great web UI that provides easy management of virtual machines, containers, and
network settings.
Are there any special considerations for Proxmox VE 8.1 with Ceph storage solutions?
Proxmox VE 8.1 supports Ceph Reef 18.2.0 and Ceph Quincy 17.2.7.
Does Proxmox VE 8.1 offer any enhancements in managing Linux containers?
Proxmox VE 8.1 has an updated kernel and software stack and provides improved support for Linux containers (LXC). The
new kernel offers enhanced performance and stability for containerized applications.
How does the newer Linux kernel in Proxmox VE 8.1 benefit users?
The newer Linux Kernel 6.5 in Proxmox VE 8.1 brings many new improvements. These include performance benefits,
better hardware support, and enhanced security features. This helps to provide a more efficient and secure virtual
environment.
What are the best practices for backup and recovery in Proxmox VE 8.1?
You can easily use Proxmox’s backup tools to schedule and manage backups effectively. Backups should be a regular part
of your infrastructure, even in the home lab environment. Backups help make sure you can recover quickly if you have a
hardware failure or accidental data deletion.
Proxmox Networking for VMware vSphere admins
One of the challenges we run into when we are more familiar with one vendor than another is the difference in the
technologies, how they work, what they are called, and how to configure them. This can be the case on the networking side of
things as well. If you are familiar with VMware vSphere and are looking to play around with
Proxmox, Proxmox networking may feel foreign when you try to compare it with VMware ESXi networking. In
this Proxmox networking for vSphere admins post for the community, we will look at the equivalent networking
configurations in Proxmox compared to vSphere.
Table of contents
1. Proxmox Linux Bridge equivalent to the VMware vSwitch
2. Proxmox Linux VLANs equivalent to VMware vSwitch Port Groups
3. Distributed Switches
4. Network Adapters
5. Network I/O Control
6. Software-defined networking (SDN)
7. Troubleshooting
Wrapping up
Note the following for the default Linux bridge shown below:
Ports/Slaves – This shows the physical network adapter ens192 assigned to the default Linux bridge
CIDR – Shows the IP address and mask associated with the Linux bridge
This is very similar to the default vSwitch0 created in VMware ESXi right out of an install. As you can see below, we have
a physical network adapter backing vSwitch0.
Vmware esxi default vswitch0
Unlike the VMware default vSwitch0 and VM Network port group, the default Proxmox Linux Bridge is not VLAN-aware out
of the box. You have to enable this.
When you edit the default Linux Bridge, you will see the checkbox VLAN aware available on the Linux Bridge properties.
Also, you will see basic networking configurations like the IP address and subnet, gateway for routing, etc. Place a check
box in the VLAN aware checkbox.
Now we can apply the configuration. Click the Apply Configuration button. In the Pending changes preview,
you will see the new VLAN-aware bridge configuration, containing the configuration lines:
bridge-ports ens192
bridge-stp off
bridge-vlan-aware yes
bridge-vids 2-4094
Apply the vlan aware configuration
This configuration essentially makes the default Linux Bridge able to understand VLANs and VLAN traffic, so we can add
Linux VLANs.
Creating a linux vlan
Now we can populate the new Linux VLAN with the appropriate configuration. Once you name the VLAN using the
parent vmbr0 interface notation (for example, vmbr0.333), you will see the VLAN raw device and VLAN Tag fields greyed out. This essentially says we are
creating a new Linux VLAN on the parent Linux Bridge interface, vmbr0.
Under the Advanced checkbox, you can set the MTU value in case you are wondering.
Creating the linux vlan from the vmbr0 interface
Now that we have created the child VLAN interface on the vmbr0 Linux bridge, you can see the vmbr0.333 interface listed
now under the network configuration in the navigation tree of System > network.
3. Distributed Switches
Proxmox has no direct equivalent to the VMware vSphere Distributed Switch. Proxmox admins would need to manage complex network setups manually with scripting or use third-party tools available
for Proxmox for centralized network management.
4. Network Adapters
When it comes to network adapters for a virtual machine or container, both Proxmox and VMware vSphere support
different types of network adapters. These include:
VMXNET3
E1000
PCI Passthrough
VirtIO (Proxmox)
Below are the options when creating a new virtual machine in Proxmox.
With Proxmox, you can take advantage of one or many of the following Linux networking tools:
6. Software-defined networking (SDN)
New with Proxmox 8.1 is the introduction of software-defined networking capabilities, covered in the official
documentation. The latest version of Proxmox VE comes with the core SDN packages pre-installed. You now have the
option of using SDN technology in Proxmox VE, allowing admins to create virtual zones and networks (VNets). SDN can also be
used for advanced load balancing, NAT, and other features.
Software defined networking in proxmox 8.1
Admins can administer intricate network configurations and multi-tenant environments directly through the web interface at
the datacenter level in Proxmox. It allows creating network infrastructure that is more adaptive and responsive and can
scale in line with evolving business requirements.
7. Troubleshooting
As you start to work with Proxmox networking, there may be a need for troubleshooting things when networking isn’t
working correctly. Checking the obvious things like VLANs, VLAN tagging configuration, both in Proxmox, and on your
physical network switch are important. If you are using DHCP and DNS to connect to the host, is DHCP handing out the
correct IP, and do you have records to resolve the Proxmox host?
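A few commands on the Proxmox host can help narrow things down. A quick sketch, using the default vmbr0 bridge name from the examples above:
ip -d link show vmbr0        # confirm the bridge exists and shows vlan_filtering 1 when VLAN aware
bridge vlan show             # list the VLAN IDs allowed on each bridge port
cat /etc/network/interfaces  # review the bridge and VLAN configuration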
Wrapping up
No doubt you have seen various posts and content thread posts from the search forums and community support forums
like the Proxmox support forum related to networking issues. These can be challenging, especially when coming from
another hypervisor. Hopefully, this post will help visitors understand Proxmox networking and the security enhancements
available like VLANs, SDN, and others. Proxmox networking isn’t so difficult to setup once you understand the equivalents
from other virtualization environments you may be familiar with.
Proxmox Update No Subscription Repository Configuration
August 23, 2022
Proxmox
If you are delving into running Proxmox VE for your home lab or other use cases and are coming from other hypervisors
you may have been playing around with, like me, you may struggle a bit with some of the basics when getting started
learning the platform. One of those tasks is updating Proxmox to the latest Proxmox VE version. Let’s take a look at how to
update repositories and perform a dist upgrade to the latest version without having a Proxmox subscription.
Proxmox VE is a complete open-source virtualization platform for enterprise virtualization. With PVE you can run virtual
machines and even containers with your Proxmox VE installation.
It also includes a free Proxmox Backup Server that provides an enterprise backup solution for backing up and recovering
your virtual machines, containers, and physical hosts, all in one solution.
Proxmox VE enterprise virtualization hypervisor
You can learn more about and download Proxmox VE from here:
There are a couple of ways you upgrade your Proxmox VE installation, using the Proxmox web interface, or using the apt
get update proxmox ve and apt get upgrade commands from the command line, either at the console or from an SSH
connection.
The good news is that even if you have a non-licensed installation without a paid PVE enterprise subscription, you
can still retrieve software upgrades to update Proxmox.
Like all other Linux distributions, upgrades and updates pull from a repository. Proxmox VE by default is geared towards
production use, and the update and upgrade repositories are pointed to the enterprise repository locations accordingly.
This is because, by default, Proxmox VE points to the enterprise repo to pull down package lists. So, when you download
and install Proxmox VE, it is set up for PVE enterprise, and the PVE no-subscription configuration is something you
introduce yourself. Let's work on the PVE no-subscription repository configuration.
Update package database error
The files changed a little with Proxmox 8 and higher. Note the following changes you need to make:
#/etc/apt/sources.list.d/pve-enterprise.list
#/etc/apt/sources.list.d/ceph.list
#For Ceph Quincy
From: deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
To: deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
Before Proxmox 8
The PVE no-subscription configuration is set up by editing the apt sources file found at:
/etc/apt/sources.list
Add the pve-no-subscription repository line to the /etc/apt/sources.list file. For Proxmox VE 7 (Debian bullseye), it looks like this:
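deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription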
Now, we just need to comment out a line in another file located here:
/etc/apt/sources.list.d/pve-enterprise.list
Editing the pve enterprise.list file
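After commenting it out, the file contains a single disabled line similar to this (Proxmox VE 7 / bullseye shown):
#deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise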
After editing and saving both of the above files, we need to run an apt-get update command on the Proxmox VE host.
After updating the repository with the non-enterprise repo, we can perform the upgrade using the
command:
apt dist-upgrade
As you can see below, I have an upgrade that is available for the Proxmox VE server ready to install after configuring the
upgrade to bypass the subscription requirement.
Running an apt dist upgrade command from the command line
Proxmox VE is an open-source server management platform for enterprise virtualization. It provides integration with the
KVM hypervisor and Linux Containers (LXC), software-defined storage and networking functionality, on a single platform.
You can use the web-based user interface to manage virtual machines, LXC containers, Proxmox clusters, or integrate
disaster recovery tools.
By default, Proxmox VE is pointed to the enterprise repositories which requires a subscription to perform updates.
However, this is a minor configuration change to bypass the enterprise repo and point to the non enterprise repo for pulling
down updates.
There are essentially two files that you need to edit, the /etc/apt/sources.list file and the /etc/apt/sources.list.d/pve-
enterprise.list file. After editing the files with the configuration listed above, you run an apt-get update and then the
command apt dist-upgrade.
Is Proxmox free?
Yes, Proxmox is free to download and install in your environment. Additionally, as shown, you can change from the
enterprise update repository to the non-enterprise version.
Wrapping Up
Proxmox VE is a great platform for the home lab or for enterprise use and provides many great capabilities to run virtual
machines and containerized workloads in your environment. By editing just a few minor configuration files, you can easily
bypass the requirement for the subscription when updating Proxmox VE installations with the latest upgrades. It allows
keeping Proxmox installations up to date with the latest security patches and other upgrades from Proxmox.
Proxmox VLAN Configuration: Management IP, Bridge, and Virtual Machines
December 11, 2023
Proxmox
Proxmox vlans
Proxmox is a free and open-source hypervisor with enterprise features for virtualization. Many may struggle with Proxmox
networking and understanding concepts such as Proxmox VLAN configuration. If you are running VLANs in your network,
you may want your Proxmox VE management IP on your management VLAN, or you may need to connect your virtual
machines to separate VLANs. Let’s look and see how we can do this.
Table of contents
What are VLANs?
Network and VLAN terms
Proxmox default network configuration
Make the default Proxmox VE Linux bridge VLAN-aware
Physical network switch tagging
Setting the Proxmox Management interface IP on a different VLAN
Change Proxmox VE host file reference to old IP
Configuring VLANs in Proxmox VE web interface
Web Interface Configuration
Advanced Configurations
Trunk Ports
VLAN Aware Bridges
Routing between VLANs
Troubleshooting
Frequently Asked Questions on Proxmox VLAN Configuration
network device: A network device is really anything (physical or virtual) that can connect to a computer network
Linux Bridge: A Linux bridge enables more than one network interface to act as a single network device.
Networking Service: Software that manages network connections and traffic flow
Management Interface: In Proxmox VE this is the network interface that allows you to access the web UI and command
line interface of your Proxmox host.
Physical Network Interface (NIC): The physical connection from a computer to a physical network switch port.
Network Interfaces File: In Linux systems, this is where you set up the network configuration for your network interfaces.
In the Proxmox network connections, you will see the individual physical adapters and then you will see the Proxmox Linux
bridge configured by default.
Proxmox ve 1
The network configuration is stored in the following file:
/etc/network/interfaces
You will see this by default. The VLAN aware setting will be unchecked, and the bridge port is assigned to the uplinked physical interface.
Now, to make our bridge VLAN-aware, place a check in the VLAN aware box and click OK. Apply the configuration, or reboot the host for the change to take effect:
reboot
You will see the configuration change add the VLAN stanzas, as you can see in my configuration below.
auto vmbr0
iface vmbr0 inet static
address 10.3.33.14/24
gateway 10.3.33.1
bridge-ports eno3
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
Trunking configuration allowing multiple vlans
By default, Proxmox will enable the Linux bridge with a “trunk port” configuration that accepts all VLANs from 2-4094. You
can remove all the VLANs aside from specific VLANs you want to tag, using the following configuration:
auto vmbr0
iface vmbr0 inet static
address 10.3.33.14/24
gateway 10.3.33.1
bridge-ports eno3
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 10,149,222
Below, I have removed the IP from the default bridge, but you can see bridge-vids restricted to specific VLANs.
Below is a screenshot of the VLAN configuration and VLAN setup on my Ubiquiti 10 GbE switch. You can see the VLAN
tagging and trunking configured on the switch. The T stands for "tagged". As you can see below, I have VLANs 10, 19, and
30 tagged on all ports.
Viewing tagged interfaces on a physical network switch
For the management interface, instead of placing the IP on the untagged bridge, I have created a VLAN tagged interface, tagged with VLAN 149.
auto vmbr0
iface vmbr0 inet manual
bridge-ports eno3
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
auto vmbr0.149
iface vmbr0.149 inet static
address 10.1.149.14/24
gateway 10.1.149.1
Save your configuration. You can reboot to make the configuration take effect, or you can run the command:
ifup -a
Once you have rebooted or run the ifup command, you should be able to run the ip address command to see the IP
address and interfaces:
ip a
Viewing the ip a command to verify the ip address
We can also verify the configuration in the Proxmox VE GUI, looking at the properties of the Proxmox host > Network.
We can also check external connectivity with a simple ping of the new management IP address we have placed on the new
VLAN.
Pinging the new management ip on the proxmox host
The Proxmox VE web interface simplifies VLAN configuration through its GUI.
Advanced Configurations
Now that you understand the basics of VLAN configuration in Proxmox VE, we can explore some advanced topics:
Trunk Ports
A trunk port is a network interface that carries traffic for multiple VLANs. It is a useful configuration for connecting VMs
to multiple VLANs. To configure a trunk port on Proxmox VE, you make the Linux bridge VLAN-aware and limit the
allowed VLANs with the bridge-vids setting, as shown earlier.
VLAN Aware Bridges
A VLAN aware bridge is a bridge that understands VLAN tags and can forward traffic to the correct VLAN. This is required
for communicating between VMs on different VLANs. To configure a VLAN-aware bridge on Proxmox VE, you check the
VLAN aware box on the bridge (or add bridge-vlan-aware yes to /etc/network/interfaces) and apply the configuration.
When you setup new VLANs, devices on one VLAN can’t talk to the devices on the other subnet by default. Generally,
according to best practice, a VLAN will house 1 subnet. So it means your devices on each VLAN will have different IP
addresses on different subnets. You will need to configure a router or firewall that can do routing (like pfSense) between
the devices on different VLANs/subnets so these can communicate.
Troubleshooting
What if you have issues with your Proxmox VLANs?
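One quick check is to capture traffic on the uplink and confirm that frames with the expected VLAN tag are actually arriving. A sketch, assuming tcpdump is installed and using the eno3 interface and VLAN 149 from the examples above:
tcpdump -i eno3 -e -nn -c 10 vlan 149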
Frequently Asked Questions on Proxmox VLAN Configuration
VLAN tagging helps to segment network traffic, which is key for managing network resources and maintaining traffic flow.
This segmentation allows for better control and isolation of traffic, which is important in environments with multiple virtual
machines.
What are the steps to configure a Linux bridge for VLANs in Proxmox?
Configuring a Linux bridge involves configuring the bridge interface in the network configuration file, setting the bridge to
VLAN-aware, and assigning bridge ports.
Absolutely. VLANs provide network isolation, a significant aspect of securing a virtualized environment. By segregating
network traffic, VLANs help minimize the risk of unauthorized access or data breaches.
The Proxmox VE management IP is used for remote management and access. VLAN configurations ensure that
management traffic is isolated and secure, which is critical for security.
Wrapping up
Creating and configuring VLANs in Proxmox is not too difficult. Once you understand the concepts and where to implement
the configuration, it is actually quite simple. Adding VLANs to your Proxmox VE host will allow you to connect your
virtualized workloads to the various networks that may be running in your network environment and enable traffic to flow
and connect as expected.
Proxmox Management Interface VLAN tagging configuration
September 19, 2022
Proxmox
If you have configured your Proxmox server in the home lab, you will most likely want to segregate your management traffic from the
other types of traffic in your lab environment as part of your network configuration. Making the management interface
VLAN aware ensures your Proxmox server can be connected to a trunk port and carry traffic for various VLANs. Let's see
how to set up the Proxmox management interface VLAN tagging configuration and the steps involved.
Why segment your Proxmox VE management traffic?
First, why do you want to segment Proxmox VE management VLAN traffic from the rest of the traffic? Having management
traffic on the same VLAN interface as virtual machines and other types of traffic is a security risk.
You never want to manage the hypervisor host on the same network on which other clients and servers exist. As
you can imagine, if an attacker has compromised the network where a client resides, you don't want them to have easy
Layer 2 access to the management interface of your hypervisor.
There are many types of VLAN configurations. You can configure "untagged traffic," meaning traffic that does not carry a
VLAN tag, to automatically be placed on a specific VLAN. Generally, for many, the default VLAN is used for untagged traffic.
VLAN tagging can happen at the switch port level or at the network interface level, where the interface tags
VLAN traffic as it traverses the network. We can tag VLANs from the Proxmox side of things so that traffic is correctly
tagged with the appropriate VLAN.
Proxmox VE supports this setup out of the box. You can specify the VLAN tag when you create a VM. The VLAN tag is
part of the guest network configuration. The networking layer supports different modes to implement VLANs, depending on
the bridge configuration:
VLAN awareness on the Linux bridge: In this case, each guest’s virtual network card is assigned to a VLAN tag, which is
transparently supported by the Linux bridge. Trunk mode is also possible, but that makes configuration in the guest
necessary.
“traditional” VLAN on the Linux bridge: In contrast to the VLAN awareness method, this method is not transparent and
creates a VLAN device with associated bridge for each VLAN. That is, creating a guest on VLAN 5 for example, would
create two interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.
Open vSwitch VLAN: This mode uses the OVS VLAN feature.
Guest configured VLAN: VLANs are assigned inside the guest. In this case, the setup is completely done inside the guest
and can not be influenced from the outside. The benefit is that you can use more than one VLAN on a single virtual NIC.
We need to navigate to the Proxmox host > System > Network and then edit the properties of the default Linux bridge
interface in Proxmox. After navigating to the default Linux bridge interface in the Proxmox GUI, we click the Edit button in
the user interface.
Checking the box next to VLAN aware
The first change we need to make is small. We need to tick the box next to VLAN aware. This allows us to configure
Proxmox and the Linux bridge to be aware of VLAN tagging for the Linux bridge interface.
Applying the configuration
When we edit the network configuration of the Proxmox node, we need to Apply configuration to the network changes.
This will apply the changes and restart networking services.
The Proxmox server displays a preview of the /etc/network/interfaces file, which shows the changes made to the default
bridge interface:
bridge-vlan-aware yes
bridge-vids 2-4094
Making changes to the /etc/network/interfaces file for the new Linux bridge interface
This is the first part of the Proxmox server configuration for VLAN-aware traffic on the management VLAN for the Proxmox
system. Now we need to make some low-level changes to the /etc/network/interfaces file on the Proxmox host.
We need to edit the file to set the VLAN for the management interface and assign the static IP address to the new bridge
interface tagged with a VLAN.
Below is an example of the default configuration after we have turned on the VLAN aware setting.
Default configuration
auto vmbr0
iface vmbr0 inet static
address 10.1.149.74/24
gateway 10.1.149.1
bridge-ports ens32
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
auto vmbr1
iface vmbr1 inet static
address 172.16.16.254/24
bridge-ports ens192
bridge-stp off
bridge-fd 0
However, we want to add VLAN tagging for the management interface on the bridge. To do this, we change the
configuration to the following. Note below that we take the IP address off the iface vmbr0 (or physical interface)
configuration, leaving it as iface vmbr0 inet manual, while keeping the VLAN configuration intact. We then create another
network interface using what is essentially "subinterface" syntax: a vmbr0.<vlan tag> stanza. This is where we place the IP
address configuration for the Linux bridge network device.
With this configuration, the Proxmox management IP is the static IP address and subnet mask configured on the new bridge
interface, since this is a virtual interface off the main Linux bridge shown with the iface vmbr0 inet manual stanza. You
also place the default gateway on the new Linux bridge interface. You can configure multiple IP addresses across the
different bridges configured on your Proxmox server.
VLAN config
auto vmbr0
iface vmbr0 inet manual
bridge-ports ens192
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094
auto vmbr0.333
iface vmbr0.333 inet static
address 10.3.33.16/24
gateway 10.3.33.1
The VLAN ID is part of the Layer 2 ethernet frame. If the physical interface of the switch port is not configured correctly,
VLAN traffic for the VLAN ID is discarded.
Virtual machines VLAN traffic
Once we have made the default Linux bridge VLAN aware, virtual machines can also have a VLAN tag associated with
their network configuration. It allows the virtual machine to tag VLAN traffic and be placed on that particular VLAN.
When you create VMs, you can choose to tag the network traffic with a VLAN ID. This allows sending the virtual machine
traffic through the physical device VLAN interface to the rest of the physical network.
The beauty of the VLAN aware bridge is you can have other VLANs configured on other virtual machines, and each can
communicate on the required VLAN interface.
Below is an example of the networking screen when creating a new VM. The VLAN Tag field allows typing in your VLAN ID.
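The same tag can be set on an existing VM from the command line with qm. A sketch, where the VM ID 100, the virtio model, and VLAN 333 are example values (note that this redefines the whole net0 device):
qm set 100 --net0 virtio,bridge=vmbr0,tag=333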
You can also ensure you have Internet access via inter-VLAN routing on your network switch, firewall, router, etc. VLANs
create a lot of flexibility from a physical cabling, ports, and virtual configuration, providing many opportunities to allow traffic
to flow from your VM guests or physical hosts.
Proxmox resources
Take a look at my Proxmox resources that I have written about below:
Proxmox Create ISO Storage Location – disk space error
If you are working with Proxmox in your home lab or otherwise, one of the first things you will want to do is upload ISO
installation media to your Proxmox host. You can mount a physical CD to your Proxmox host, of course. However, this is
cumbersome and not feasible for remote configurations and installing a wide range of operating systems across the board
in the Proxmox environment.
Uploading ISO installation media to your Proxmox host is the way forward for most. If you are like me, you may run into
issues with a default installation of Proxmox and the partition size configured for ISO images by default. Let’s talk about
Proxmox create ISO storage location and see how this is completed.
An ISO file is a disk image that most software vendors provide to install operating systems. This includes Linux operating
systems like Ubuntu and also Microsoft Windows operating system variants.
When you configure the virtual machine in Proxmox VE, you select the disc image and the ISO image is used as part of
the virtual machine installation process.
In the Server View, click the storage pool location > ISO images > Upload.
Uploading an ISO to the Proxmox VE server
It will launch the box to upload the ISO files. Browse to your ISO file and click the Select file button to point to the ISO
image you want to use, then finish the upload.
As you can see, the upload can be performed from the browser, like with other hypervisors, such as VMware.
The default Proxmox VE installation storage pool only included a 10 GB partition for uploading ISO files. When you
upload an ISO image to a Proxmox VE server, it will first attempt to upload the ISO image file to the /var/tmp directory.
As you can see below, the image file is first uploaded to /var/tmp before it is staged into the permanent location found
in the /var/lib/vz/template/iso folder.
Uploaded image files in the tmp directory in Proxmox VE
First of all, we need to go through a directory creation process to create a custom location for uploading your operating
system ISO files to your Proxmox VE server for creating your server VM installations or other operating system VM
installations. You can check the available disk space on the host with:
df -h
This is the same command you would use in Ubuntu or any other Linux distribution to view disk space for the storage.
Create a new Proxmox Directory for ISO image upload iso files
Next, we navigate to the Disks > Directory > Create Directory button in Proxmox. Here we can create the directory we
need and format the file storage for uploading ISO files.
Create a new directory in Proxmox for ISO storage
When you click the Create directory button, you will see the following Create Directory dialog box. Select the disk,
filesystem, name, and check the box to Add storage. Then click the Create button.
Below is an example of creating a new ISO image storage location on a Proxmox server host. When you create the new
directory, you can then ensure your ISO images are stored in the new location when uploading ISOs for creating your new
VM hosted in Proxmox.
Create the directory for Proxmox ISO image files
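If you prefer the command line, the same directory storage can be registered with pvesm. A sketch, where the storage ID and path are examples that should match the directory you created:
pvesm add dir iso-storage --path /mnt/pve/iso-storage --content iso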
What is Proxmox ISO storage? This is storage in Proxmox allowing you to upload ISO files to storage and use these to
install VM guests in Proxmox.
How do you upload ISO files to Proxmox server? You can do this using a web browser logged into your Proxmox host, or
you can use SCP and an SCP utility like WinSCP.
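For example, from a Linux or macOS machine, you could copy an ISO straight into the default ISO directory over SSH. A sketch, where the hostname and ISO filename are examples:
scp ubuntu-22.04-live-server-amd64.iso root@proxmox-host:/var/lib/vz/template/iso/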
Why might you get a disk upload error? If you use the default ISO storage location, you may receive an error on screen
when uploading a large ISO file for operating system installations such as Windows Server 2022.
Wrapping Up
When learning about Proxmox, uploading ISO files to your Proxmox VE server is one of the first steps you will take when
loading operating systems on your Proxmox host. If you want to learn about installing Proxmox as a virtual machine in
VMware, you can look at my previous article covering that topic earlier in this guide.
Be sure to comment if you have alternative ways of handling the uploading and creation of ISO image file storage in
Proxmox.
Proxmox iSCSI target to Synology NAS
January 19, 2022
Proxmox
Not long ago, I wrote a quick blog post detailing how to install Proxmox inside a VMware virtual machine. However, to
really play around with the hypervisor, it is great to have storage to work with. I could’ve added a local disk to the VM.
However, iSCSI sounded way more interesting, especially with the new addition of the Synology DS1621xs+ in the home
lab environment. Let’s take a look at adding Proxmox iSCSI target to Synology NAS LUN and see what this process looks
like.
Let’s first create the iSCSI target on the Synology NAS device. This process is carried out in the Synology SAN Manager.
Launch SAN Manager and click iSCSI > Create.
Create a new iSCSI target in the Synology SAN Manager
Configure a name for the iSCSI target and configure CHAP if you are using CHAP to secure the connections. For this test,
I am leaving CHAP unchecked.
Name the new iSCSI target and choose CHAP options
From the new iSCSI target wizard, it will prompt you to create or map to a LUN. I am creating a new LUN here.
Set up LUN mapping in Synology SAN Manager
Name the new LUN and configure the Capacity and the Space allocation method (thick or thin).
Set up LUN properties for the new LUN
3. Add a dedicated interface to your Proxmox server (if you don't already have one)
On the Proxmox virtual machine, I have added a secondary NIC to the VM for dedicated iSCSI traffic. Now, we need to
configure the NIC with an IP address. To do this, in the Proxmox GUI, click your host > Network > <your network
adapter> > Edit.
Editing the network adapter properties in Proxmox GUI
Enter the IP address you want to configure to communicate with your iSCSI target on the Synology NAS.
The new network adapter with the configured IP address now shows Active.
From your Proxmox server, ping your Synology iSCSI address to ensure you have connectivity.
Verify you have connectivity to your iSCSI portal target of the Synology NAS
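The target is then added under Datacenter > Storage > Add > iSCSI in the web interface. The same thing can be done from the CLI with pvesm. A sketch, where the storage ID, portal IP, and IQN are example values, not the ones from this setup:
pvesm add iscsi synology-iscsi --portal 10.1.149.4 --target iqn.2000-01.com.synology:DS1621.Target-1.example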
After adding the target, you will see it in your Storage list.
Now that we have the target added, we need to add an LVM to use the iSCSI storage. Click Storage > Add > LVM.
Add a new LVM in Proxmox
Add an ID, Base storage (choose from dropdown), Base volume (choose from dropdown), Volume Group (name this
something intuitive), and Content as Disk image, Container.
You will now see the new iSCSI LUN displayed in your list of storage.
New iSCSI LUN successfully added to Proxmox
Now, when you create a new Virtual Machine, you will see the iSCSI LUN listed as available to select.
Creating a new Proxmox virtual machine you can choose the Synology iSCSI LUN
Wrapping Up
Hopefully, this quick walkthrough of setting up a Proxmox iSCSI target to Synology NAS helps to remove any uncertainty of
how this is configured. From the Synology NAS side, the process is the same no matter which hypervisor you are using.
Generally, the only change in how you add the iSCSI storage comes from the vendor side that you are adding the storage
from. Using VMware vSphere and want to add an iSCSI target to your Synology NAS? Take a look at my post on how to do
that here:
iSCSI Synology VMware Configuration step-by-step
Proxmox add disk storage space – NVMe drive
April 10, 2023
Proxmox
Proxmox is a really great free and open-source virtualization platform that many are using in the home lab environment.
However, one common challenge Proxmox users face is expanding storage space. With NVMe drives now being extremely
cheap, they are a great choice for extra virtualization storage. Let’s walk through the process of adding disk storage space
to an NVMe drive in Proxmox, step by step.
Table of contents
Why add disk space to Proxmox?
Add the physical drive to your Proxmox host
Preparing the NVMe Drive
Creating the Primary Partition
Mounting the New Partition
Adding Storage Space to Proxmox
Performing Backups to the New Storage
Related posts
Wrapping up
NVMe disks are cheap and great for adding speedy virtualization storage to your Proxmox host.
1. Install Parted:
To begin, you must install the Parted utility on your Proxmox server. Select Shell in the web interface to launch the
command shell. This tool is used for manipulating block devices and creating partitions. To install Parted, run the following
command:
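apt install parted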
Next, use the lsblk command to identify the new NVMe drive you want to add to your Proxmox installation. For example,
the drive may appear as /dev/nvme0n1. You will also see the existing disks, such as /dev/sda and /dev/sda1.
1. Create a new partition table:
Once you have identified the new disk, you can create a new partition table. Run the following command to create a type
GPT partition table on the NVMe drive:
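parted /dev/nvme0n1 mklabel gpt
Then create a single partition spanning the disk. A sketch, assuming /dev/nvme0n1 is the device name that lsblk reported for your new drive:
parted -a optimal /dev/nvme0n1 mkpart primary ext4 0% 100%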
The mklabel command writes a new partition table, which removes any existing partitions on the drive, and mkpart then
creates a new primary ext4 partition that spans the entire NVMe drive. So be careful and ensure this is the disk you intend to use.
mkfs.ext4 /dev/nvme0n1p1
You can name the storage something intuitive using the command:
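One way to do this (a sketch; the label name is an example) is to give the new ext4 filesystem a label with e2label:
e2label /dev/nvme0n1p1 nvme-storage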
First, create a new directory to serve as the mount point for the new partition. For example:
mkdir /mnt/vmstorage
Next, edit the /etc/fstab file to ensure the new partition will auto mount upon reboot. Open the file with a text editor like
Nano:
nano /etc/fstab
Add the following line to the file, replacing /dev/nvme0n1p1 with the appropriate device identifier for your NVMe drive:
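A typical entry, assuming the ext4 filesystem and the /mnt/vmstorage mount point created above, looks like this:
/dev/nvme0n1p1 /mnt/vmstorage ext4 defaults 0 2
Save the file, then mount the new partition: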
mount /mnt/vmstorage
3. Click on the "Add" button and select "Directory" from the dropdown menu. There are several other options as well,
including LVM (volume group) and LVM-Thin (thin provisioning).
4. In the "Add Directory" window, enter a unique ID for the new storage, and set the "Directory" field to the mount point you
created earlier (e.g., /mnt/vmstorage). Choose the appropriate "Content" types, such as "Disk image,"
"Container template," or "Backup." Click "Add" to save the configuration.
5. The new storage space will now be available in the Proxmox web interface for virtual machines and containers.
Now, when you create a new virtual machine or container you will see the storage available to select in the storage drop
down.
Performing Backups to the New Storage
With the new storage space added to Proxmox, you can also use it as backup storage for your virtual machines and
containers. To create a backup, follow these steps:
1. Select the virtual machine or container you want to back up in the Proxmox web interface.
3. In the “Backup” window, choose the new storage space as the “Storage” target and configure the other backup options
as needed. Click “Backup” to initiate the process.
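The same backup can also be run from the shell with vzdump. A sketch, where the VM ID 100 and the storage ID nvme-storage are examples that should match your environment:
vzdump 100 --storage nvme-storage --mode snapshot --compress zstd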
Related posts
Nested Proxmox VMware installation in ESXi
Proxmox Create ISO Storage Location – disk space error
Proxmox cluster installation and configuration
Proxmox firewall rules configuration
Proxmox Update No Subscription Repository Configuration
Wrapping up
Proxmox is a great solution that is free, open-source, and incorporates many great features in the platform. Adding storage
is fairly straightforward, but does involve a few steps from the command shell to mount the storage, format the disk, and
add it to the system as available storage for virtual machines and containers.
Proxmox cluster installation and configuration
February 13, 2023
Proxmox
Proxmox Cluster is a group of physical servers that work together to provide a virtual environment for creating and
managing virtual machines and other resources. In this blog post, we will go over the steps to build a Proxmox Cluster and
the benefits it provides.
What is Proxmox Cluster?
Proxmox Cluster is a group of physical servers that work together to provide a virtual environment for creating and
managing virtual machines and other resources.
The Proxmox Cluster uses the Proxmox Virtual Environment (VE) to provide a virtual environment for creating and
managing virtual machines.
The Proxmox servers will communicate with each other to perform management tasks and ensure your virtual
environment’s reliability.
Having shared storage is a good idea as this will allow the most seamless and best configuration for production workloads.
It allows workloads to be brought back up quickly if one host fails.
Importance of IP Addresses in Proxmox Cluster
Each node in a Proxmox Cluster must have a unique IP address. The IP addresses are used for cluster communication
and to identify each node in the cluster. It is important to make sure that each node has a unique IP address and that the
addresses are reachable from other nodes in the network.
Choosing the appropriate storage option for your cluster is important based on your needs and the resources available.
The configuration file is stored in a database-driven file system and can be easily modified to meet the needs of your virtual
environment.
In the event of a failure of the main node, the slave node will take over and perform management tasks until the main node
is restored.
A single-node cluster in Proxmox provides many of the benefits of a multi-node cluster, such as creating and managing
virtual machines and using local storage for virtual machine storage.
Additionally, a single node cluster provides a simple and easy-to-use virtual environment well-suited for small or simple
virtual environments.
To set up a single-node cluster in Proxmox, you will need to install Proxmox on a single node and configure the network
settings. Once Proxmox is installed, you can create a new single node cluster using the Proxmox Web GUI or the
command line.
When creating a single node cluster, properly configuring the firewall ensures the virtual environment is secure.
Additionally, it is important to plan properly and backup the virtual machines and configurations to ensure the reliability of
the virtual environment.
The cluster manager is responsible for automatically failing over to the remaining nodes in the event of a failure, ensuring
that your virtual environment remains up and running.
It is important to thoroughly research and plan your Proxmox Cluster to ensure that it meets your needs and provides the
desired level of reliability.
Firewall Requirements
When building a Proxmox Cluster, it is important to consider the firewall requirements. The Proxmox cluster uses specific
network ports to communicate between nodes (corosync cluster traffic over UDP, the web interface on TCP 8006, and SSH
on TCP 22), and it is important to ensure these ports are open on the firewall.
Additionally, it is important to consider any security requirements and to properly configure the firewall to meet these
requirements.
A home lab environment typically consists of a small number of physical servers, often only one or two, and is used for
testing and learning purposes.
With a Proxmox Cluster in a home lab environment, you can experience the benefits of a virtual environment, such as high
availability and easy migration of virtual machines, without the need for a large number of physical servers.
When setting up a Proxmox Cluster in a home lab environment, it is important to consider the hardware requirements and
choose hardware compatible with the Proxmox software.
Additionally, it is important to consider the network requirements and properly configure the firewall to ensure the cluster
can communicate with other nodes.
It is also important to properly secure the Proxmox Cluster in a home lab environment. This includes securing the root
password and properly configuring the firewall to prevent unauthorized access.
Proxmox Clusters in home lab environments provide a great opportunity to learn about virtual environments and to gain
hands-on experience with Proxmox. With a Proxmox Cluster in a home lab environment, you can explore the features and
benefits of a virtual environment and develop the skills you need to effectively manage virtual environments in real-world
environments.
The first step in setting up a Proxmox Cluster is to install Proxmox on each node. To do this, you must download the
Proxmox ISO file and create a bootable USB drive. Once the USB drive is created, you can boot each node from the USB
drive and follow the prompts to install Proxmox.
Once the cluster is created on the first node, the join information provides what the other nodes need to join the cluster,
including the IP address of the main node and the cluster communication port.
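From the command line, the cluster is typically created and joined with the pvecm tool. A sketch, where the cluster name and IP address are example values:
pvecm create homelab-cluster    # run on the first node to create the cluster
pvecm add 10.1.149.11           # run on each additional node, pointing at an existing cluster node's IP
pvecm status                    # verify membership and quorum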
The corosync communication protocol is used to manage communication between nodes in a Proxmox Cluster. To
configure the corosync communication protocol, you will need to modify the configuration file for the cluster.
This file is stored in a database-driven file system and can be easily modified to meet the needs of your virtual
environment.
Once the Proxmox Cluster is set up, you can add virtual machines. To do this, you must use the Proxmox Web GUI to
create and configure virtual machines.
The virtual machines can be easily migrated between nodes in the cluster, providing flexibility and ease of management.
To ensure the reliability of your virtual environment, it is important to monitor and maintain your Proxmox Cluster. This
includes monitoring the status of the nodes in the cluster, performing regular maintenance tasks, and updating the cluster
software as needed.
To create a Proxmox Cluster using the Proxmox Web GUI, you will need to log in to the Proxmox Web GUI on one of the
nodes in the cluster.
The Proxmox Web GUI can be accessed by navigating to https://<node-ip-address>:8006 in a web browser.
Once the new cluster has been created, you can add additional nodes to the cluster. To do this, click on the “Cluster” tab in
the Proxmox Web GUI and then click on the “Add Node” button. This will open a dialog where you can enter the node’s IP
address you want to add to the cluster.
You will use this join information to join the cluster on the second and third nodes.
This will open a dialog where you can modify the settings for the corosync communication protocol, including the
communication port and the number of votes required to reach quorum.
This will open a dialog where you can create and configure virtual machines, including specifying the virtual machine
name, the operating system, and the storage location.
To ensure the reliability of your virtual environment, it is important to monitor the cluster and to perform regular
maintenance tasks. This can be done using the Proxmox Web GUI by clicking on the “Cluster” tab and then clicking on the
“Monitor” button.
This will provide information on the status of the nodes in the cluster and will allow you to perform tasks such as live migrations of virtual machines.
1. After a complete failure of the cluster: In the event of a complete failure of the cluster, all configuration information
and state information are lost, and a cluster cold start is necessary to rebuild the cluster from scratch.
2. When setting up a new Proxmox Cluster: When setting up a new Proxmox Cluster, a cluster cold start is necessary
to create a new cluster and configure the cluster from scratch.
3. When changing the cluster configuration: When changing the configuration of an existing Proxmox Cluster, such as
adding or removing nodes, a cluster cold start may be necessary to properly reconfigure the cluster.
A cluster cold start in Proxmox Clusters involves installing Proxmox on each node, configuring the network settings,
creating a new cluster, adding nodes to the cluster, and configuring the corosync communication protocol. This process
can be performed using the Proxmox Web GUI or by using the command line.
It is important to note that a cluster cold start can result in data loss, as all virtual machines and configurations will need to
be recreated. As such, it is important to plan properly and back up all virtual machines and configurations prior to
performing a cluster cold start.
Wrapping Up
Proxmox is a great platform for running home lab workloads and production environments. With Proxmox clusters, you can
set up a high-availability environment to protect your virtual machines from a single node failure in the data center.
If you follow the steps listed above, you can easily create a Proxmox cluster using either the web UI or the CLI.
Mastering Ceph Storage Configuration in Proxmox 8 Cluster
June 26, 2023
Proxmox
The need for highly scalable storage solutions that are fault-tolerant and offer a unified system is undeniably significant in
data storage. One such solution is Ceph Storage, a powerful and flexible storage system that facilitates data replication
and provides data redundancy. In conjunction with Proxmox, an open-source virtualization management platform, it can
help manage important business data with great efficiency. Ceph Storage is an excellent storage platform because it’s
designed to run on commodity hardware, providing an enterprise-level deployment experience that’s both cost-effective
and highly reliable. Let’s look at mastering Ceph Storage configuration in Proxmox 8 Cluster.
Table of contents
What is Ceph Storage?
What is a Proxmox Cluster and Why is Shared Storage Needed?
Shared storage systems
Why is Ceph Storage a Great Option in Proxmox for Shared Storage?
Understanding the Ceph Storage Cluster
Configuring the Proxmox and Ceph Integration
Installing and Configuring Ceph
Setting up Ceph OSD Daemons and Ceph Monitors
Creating Ceph Monitors
Creating a Ceph Pool for VM and Container storage
Utilizing Ceph storage for Virtual Machines and Containers
Managing Data with Ceph
Ceph Object Storage
Block Storage with Ceph
Ceph File System
Ceph Storage Cluster and Proxmox: A Scalable Storage Solution
Frequently Asked Questions (FAQs)
How does Ceph Storage achieve fault tolerance?
Can Ceph Storage handle diverse data types?
How does cache tiering enhance Ceph’s performance?
How is Ceph storage beneficial for cloud hosting?
What role do metadata servers play in the Ceph file system?
Is Ceph Storage a good fit for enterprise-level deployments?
Video covering Proxmox and Ceph configuration
Wrapping up
Other links you may like
A Ceph storage cluster consists of several different types of daemons: Ceph OSD Daemons (OSD stands for Object
Storage Daemon), Ceph Monitors, Ceph MDS (Metadata Server or metadata server cluster), and others. Each daemon
type plays a distinct role in the operation of the storage system.
Ceph OSD Daemons handle data storage and replication, storing the data across different devices in the cluster. The Ceph
Monitors, on the other hand, track the cluster state, maintaining a map of the entire system, including all the data and
daemons.
Ceph MDS, or metadata servers, are specific to the Ceph File System. They store metadata for the filesystem, which
allows the Ceph OSD Daemons to concentrate solely on data management.
A key characteristic of Ceph storage is its intelligent data placement method. An algorithm called CRUSH (Controlled
Replication Under Scalable Hashing) decides where to store and how to retrieve data, avoiding any single point of failure
and effectively providing fault-tolerant storage.
Secondly, shared storage facilitates load balancing. You can easily move VMs or containers from one node to another,
distributing the workload evenly across the cluster. This movement enhances performance, as no single node becomes a
bottleneck.
Lastly, shared storage makes data backup and recovery more manageable. With all data centrally stored, it’s easier to
implement backup strategies and recover data in case of a failure. In this context, Ceph, with its robust data replication and
fault tolerance capabilities, becomes an excellent choice for shared storage in a Proxmox cluster.
One of the key reasons that Ceph is a great option for shared storage in Proxmox is its scalability. As your data grows,
Ceph can effortlessly scale out to accommodate the increased data volume. You can add more storage nodes to your
cluster at any time, and Ceph will automatically start using them.
Fault tolerance is another reason why Ceph is a great choice. With its inherent data replication and redundancy, you can
lose several nodes in your cluster, and your data will still remain accessible and intact. In addition to this, Ceph is designed
to recover automatically from failures, meaning that it will strive to replicate data to other nodes if one fails.
Ceph’s integration with Proxmox for shared storage enables virtual machines and containers in the Proxmox environment
to leverage the robust Ceph storage system. This integration makes Ceph an even more attractive solution, as it brings its
strengths into a virtualized environment, further enhancing Proxmox’s capabilities.
Finally, Ceph’s ability to provide object, block, and file storage simultaneously allows it to handle a wide variety of
workloads. This versatility means that whatever your shared storage needs, Ceph in a Proxmox environment is likely to be
a solution that can handle it effectively and efficiently.
Ceph clients interface with these components to read and write data, providing a robust, fault-tolerant solution for
enterprise-level deployments. The data stored in the cluster is automatically replicated to prevent loss, thanks to controlled
replication mechanisms.
Before you begin, ensure that your Proxmox cluster is up and running, and the necessary Ceph packages are installed. It’s
essential to note that the configuration process varies depending on the specifics of your existing infrastructure.
Installing and Configuring Ceph
Start by installing the Ceph packages in your Proxmox environment. These packages include essential Ceph components
like Ceph OSD daemons, Ceph Monitors (Ceph Mon), and Ceph Managers (Ceph Mgr).
Click on one of your Proxmox nodes, and navigate to Ceph. When you click Ceph, it will prompt you to install Ceph.
This begins the setup wizard. First, you will want to choose your Repository. This is especially important if you don’t have
a subscription. You will want to choose the No Subscription option. For production environments, you will want to use the
Enterprise repository.
Choosing the Ceph repository and beginning the installation
You will be asked if you want to continue the installation of Ceph. Type Y to continue.
Verify the installation of Ceph storage modules
Ceph installed successfully
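As an alternative to the wizard, the Ceph packages can also be installed from the shell on each node with pveceph; for example (the exact flags can vary between releases):
pveceph install --repository no-subscription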
Next, you will need to choose the Public Network and the Cluster Network. Here, I don’t have dedicated networks
configured since this is a nested installation. So I just chose the same subnet for each.
Configuring the public and cluster networks
If you click the Advanced checkbox, you will be able to set up the Number of replicas and Minimum replicas.
Advanced configuration including the number of replicas
At this point, Ceph has been successfully installed on the Proxmox node.
Ceph configured successfully and additional setup steps needed
Repeat these steps on the remaining cluster nodes in your Proxmox cluster configuration.
You’ll need to assign several Ceph OSDs to handle data storage and maintain the redundancy of your data.
Adding an OSD in Proxmox Ceph storage
The OSD begins configuring and adding
The OSD is successfully added to the Proxmox host
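The same step can be performed from the shell with pveceph; a rough sketch, where /dev/sdb stands in for whatever unused disk the node has available:
pveceph osd create /dev/sdb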
Also, set up more than one Ceph Monitor to ensure high availability and fault tolerance.
OSDs added to all three Proxmox nodes
At this point, if we visit the Ceph storage dashboard, we will see the status of the Ceph storage cluster.
Healthy Ceph storage status for the cluster
A Ceph Monitor, often abbreviated as Ceph Mon, is an essential component in a Ceph storage cluster. Its primary function
is to maintain and manage the cluster map, a crucial data structure that keeps track of the entire cluster’s state, including
the location of data, the cluster topology, and the status of other daemons in the system.
Ceph Monitors contribute significantly to the cluster’s fault tolerance and reliability. They work in a quorum, meaning there
are multiple monitors, and a majority must agree on the cluster’s state. This setup prevents any single point of failure, as
even if one monitor goes down, the cluster can continue functioning with the remaining monitors.
By keeping track of the data locations and daemon statuses, Ceph Monitors facilitate efficient data access and help ensure
the seamless operation of the cluster. They are also involved in maintaining data consistency across the cluster and
managing client authentication and authorization.
Here we are adding the 2nd Proxmox node as a monitor. I added the 3rd one as well.
Adding Ceph Monitors to additional Proxmox hosts
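For reference, additional monitors (and managers) can also be created from the shell on each node; a minimal sketch:
pveceph mon create
pveceph mgr create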
Now, when we create a Ceph Pool, it is automatically added to the Proxmox cluster nodes.
Pool added to all three Proxmox nodes
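The equivalent CLI step is a single pveceph command; a rough sketch, where the pool name is an example and the add_storages flag also registers the pool as Proxmox storage:
pveceph pool create ceph-vm --add_storages 1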
To test the new pool, I created an LXC container on the Ceph storage. The LXC container is created successfully with no storage issues, which is good.
The new LXC container is created successfully on Ceph storage
We can see we have the container up and running without issue. Also, I was able to migrate the LXC container to another
node without issue.
The LXC container operating on the Ceph Pool
Ceph performance dashboard in Proxmox
Object storage in Ceph is done through RADOS (Reliable Autonomic Distributed Object Store). Objects stored are
automatically replicated across different storage devices to ensure data availability and fault tolerance. The CRUSH
algorithm, a scalable hashing technique, controls how the objects are distributed and accessed, thus avoiding any single
point of failure.
Ceph Block Devices, or RADOS Block Devices (RBD), is a part of the Ceph storage system that allows Ceph to interact
with block storage. These block devices can be virtualized, providing a valuable storage solution for virtual machines in the
Proxmox environment. Block storage with Ceph offers features like thin provisioning and cache tiering, further enhancing
data storage efficiency.
Ceph File System
The Ceph File System (CephFS) is another significant feature of Ceph. It’s a POSIX-compliant file system that uses a
Ceph Storage Cluster to store data, allowing for the usual file operations while adding scalability, reliability, and
performance.
The Ceph MDS (metadata servers) play a crucial role in the operation of CephFS. They manage file metadata, such as file
names and directories, allowing the Ceph OSDs to focus on data storage. This separation improves the overall
performance of the Ceph storage system.
This combination enables managing important business data effectively while maintaining redundancy and fault tolerance.
Whether you’re dealing with large file data or smaller objects, using Ceph in a Proxmox environment ensures that your
data is safely stored and easily retrievable.
Cache tiering is a performance optimization technique in Ceph. It uses smaller, faster storage (like SSDs) as a cache for a larger, slower storage tier. Frequently accessed data is moved to the cache tier for quicker retrieval. This setup significantly improves read/write performance, making Ceph an excellent option for high-performance applications.
Ceph is a highly scalable, resilient, and performance-oriented storage system, making it an excellent choice for cloud
hosting. With its fault tolerance, data replication, and block, object, and file storage support, Ceph can effectively handle
the vast and diverse data needs of cloud-based services.
Metadata servers, or Ceph MDS, manage the metadata for the Ceph filesystem. They handle file metadata such as file
names, permissions, and directory structures, allowing the Ceph OSDs to concentrate on data management. This
separation boosts performance, making the file system operations more efficient.
Wrapping up
Ceph storage offers a robust and highly scalable storage solution for Proxmox clusters, making it an excellent option for
anyone seeking an efficient way to manage extensive amounts of data and have a highly available storage location for
workloads in the home lab or in production. By following this guide, you can implement a Ceph storage cluster in your
Proxmox environment and leverage the numerous benefits of this powerful and flexible storage system.
Remember, the versatility of Ceph allows for many configurations tailored to meet specific needs. So, explore the various
features of Ceph storage and find a solution that perfectly fits your data storage and management needs.
Since working with Ceph in Proxmox VE lately, one of the cool features that I wanted to try out was Proxmox CephFS,
which allows you to work with your Ceph installation directly from your clients. It allows mounting file storage to your clients
on top of your Ceph storage pool with some other really cool benefits. Let’s look at CephFS configuration in Proxmox and
see how you can install and configure it.
Table of contents
What is CephFS (CephFS file system)?
CephFS configuration in Proxmox: An Overview of the lab
Installation steps
Installing Ceph client tools in Linux
Ceph fuse
Things you will need for your CephFS configuration in Proxmox
1. The admin keyring
2. The name of the Ceph file system
3. The monitor addresses of your Proxmox CephFS servers
4. A ceph.conf file
Connect a Linux client to CephFS running on Proxmox
Run the mount command to mount the Ceph file system
Troubleshooting and support
FAQs on CephFS configuration in Proxmox
CephFS can handle vast amounts of file metadata and data and be installed on commodity virtualization hardware. It is an
excellent solution for many use cases, especially when integrated with a Ceph storage cluster, as we can do in Proxmox.
As a note, in this example I am running Proxmox VE version 8.1.3 and Ceph Quincy, which are the latest updates to the
platform from the official site with various security enhancements and features. For the lab, I am running a simple 4 node
member cluster (started with 3 but was doing other testing and added a node) in nested virtual machines on an SSD disk
with 3/2 Crush rule. You can configure different rules based on your needs and infrastructure.
I used the replicated rule and a single NIC (multiple NICs and networks are recommended) for each machine running pveceph. In this small configuration, replication consumes a significant amount of space, with replicas taking up 75% of the capacity to hold the replicated data, plus additional writes for every change.
Installation steps
First, click the CephFS menu under Ceph for your Proxmox host. Next, you click the Create button in the Proxmox web
app.
1) In my lab, I made each Proxmox host a Metadata server. 2) Click the Create CephFS button at the top.
Name
Placement groups: default 128
Add as Storage checked
Click Create.
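The CLI equivalent is a single pveceph command; a rough sketch (option names may differ slightly between releases):
pveceph fs create --name cephfs --add-storage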
Create cephfs after creating metadata servers
In the Task viewer you will see the status of the task which should complete successfully.
Viewing the create cephfs task
If you choose to mount as storage, you will see the CephFS storage listed under your Proxmox host(s). Also, the great
thing about the CephFS storage is you can use it to store things like ISOs, etc on top of your Ceph storage pools. Note in
the navigation, we see the types of resources and content we can store, including ISO disks, etc.
Viewing the cephfs storage in proxmox
Ceph fuse
Also, you can install the ceph-fuse package. The ceph-fuse package is an alternate way of mounting CephFS. The difference is that it mounts the file system in userspace. The performance of ceph-fuse is not as good as the more traditional kernel mount of a CephFS file system.
However, it does allow you to connect to a Ceph distributed file system from a user’s perspective, without the need to
integrate it deeply into the system’s core.
You can point it at the cluster either through a command line option (-m, which takes the monitor addresses) or by using a configuration file (ceph.conf). This tool mounts the Ceph file system at a designated location on your system. To install it, run:
sudo apt install ceph-fuse
Installing ceph fuse components
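Once installed, mounting with ceph-fuse looks roughly like the following, assuming the client configuration files described later in this section are already in place under /etc/ceph (note that ceph-fuse expects a full keyring file with a [client.admin] section and a key = line, rather than the bare key used by the kernel mount's secretfile option):
ceph-fuse -m 10.1.149.61:6789,10.1.149.62:6789 /mnt/cephfs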
To see the admin credentials that you need to mount the CephFS file system, you need to get your key from the ceph.client.admin.keyring file. To get this, run the command:
cat /etc/pve/priv/ceph.client.admin.keyring
You will see the value in the key section of the file. Note the user is admin and not root.
Viewing the admin key in proxmox for cephfs
You will see the name of the file system. The default name is cephfs.
You will need to have the Ceph monitor server addresses. There should be multiple servers configured as monitors for
reliability and so you don’t have a single point of failure.
You can find these host addresses under the Ceph > Monitor menu in the Proxmox GUI in the browser. Make sure your router or routers have the routes configured to allow your client devices to reach these IP addresses and ports.
Viewing the proxmox ceph monitor addresses
4. A ceph.conf file
You will also need a ceph.conf file. Like the admin keyring, we can copy this file from the Proxmox server, but we will trim some of the information out of it. The file is located here on your Proxmox server:
/etc/pve/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.1.149.61/24
fsid = 75a2793d-00b7-4da5-81ce-48347089734d
mon_allow_pool_delete = true
mon_host = 10.1.149.61 10.1.149.63 10.1.149.62
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.1.149.61/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring
[mds.pmox01]
host = pmox01
mds_standby_for_name = pve
[mds.pmox02]
host = pmox02
mds_standby_for_name = pve
[mds.pmox03]
host = pmox03
mds_standby_for_name = pve
[mds.pmox04]
host = pmox04
mds_standby_for_name = pve
[mon.pmox01]
public_addr = 10.1.149.61
[mon.pmox02]
public_addr = 10.1.149.62
[mon.pmox03]
public_addr = 10.1.149.63
On the Linux client, create a directory to hold the Ceph client configuration files:
mkdir /etc/ceph
Inside this directory, we will create two files:
admin.keyring
ceph.conf
Running the tree command on the directory housing the cephfs configuration files
In the admin.keyring file, just put the key value in the file, nothing else. It will be a value as we had shown above that
looks similar to this:
AQAgPphlUFChMBAA2OsC3bdQ54rFA+1yqqjGKQ==
Then, you will need the following in your ceph.conf file. As you can see below, I have updated the keyring location to point
to our admin.keyring file.
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.1.149.0/24
fsid = 75a2793d-00b7-4da5-81ce-48347089734d
mon_allow_pool_delete = true
mon_host = 10.1.149.61 10.1.149.63 10.1.149.62
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.1.149.0/24
[client]
keyring = /etc/ceph/admin.keyring
Next, create a directory on the client to use as the mount point:
mkdir /mnt/cephfs
Now that we have a directory, we can run the following command to mount the CephFS file system, pointing it at the IP addresses of the monitor nodes.
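A rough example of the mount command, using the monitor addresses from this lab and the files we placed in /etc/ceph (the secretfile contains just the bare key, as described above):
mount -t ceph 10.1.149.61:6789,10.1.149.62:6789,10.1.149.63:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.keyring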
The command will complete without any return if it is successful. We can run the following to see our mounted Ceph file
system:
df -h
Like any technology, there may be times when you need to troubleshoot something with CephFS. CephFS does not
require a subscription license as it is free and open-source and can be pulled from the no-subscription repository.
Customers can, of course, opt for enterprise support for their Proxmox cluster through the customer portal from the Proxmox team. If you go the open-source route instead, the Proxmox support forum is a great source of help, with tens of thousands of threads thanks to active members in the community. In addition, you can search the forums for a wide variety of topics, instructions, question-and-answer posts, and more.
There are also a number of other forums, websites, and wikis where you can find people with experience to help troubleshoot warning messages and errors and to share log data with.
CephFS is integrated with Proxmox and enhances object storage capabilities. It works alongside Ceph’s RADOS Gateway
(RGW) and allows storing and retrieving objects in separate RADOS pools. It enables both file and object storage.
Can CephFS Handle Erasure Coding for Data Protection?
CephFS supports erasure coding within its storage clusters. Erasure coding provides an efficient way to store data by
breaking it up into chunks as opposed to traditional replication methods. It helps in large-scale deployments where data
protection is of primary concern.
The CRUSH algorithm specifies how data is stored across the cluster, enabling efficient distribution and availability. It
allows scaling storage without compromising data access speed.
In Proxmox, How Does CephFS Ensure High Availability for Stored Data?
CephFS ensures high availability in Proxmox through its resilient cluster design. It replicates file data and metadata across different nodes. In the event of node failures, the system automatically redistributes data to maintain access and integrity.
When deploying CephFS in production environments, you need to make sure you have redundancy built-in with your
cluster configuration, metadata servers, and Ceph monitors. Proper configuration helps maintain performance and stability
in high-demand scenarios.
CephFS can be monitored with external monitoring systems. This can help provide insights into cluster health and
performance. These systems can track metrics like storage utilization, I/O performance, and node status.
CephFS fully supports snapshots and writable clones within Proxmox. This feature allows you to create point-in-time
copies of files and directories, for data recovery and testing purposes.
It manages file system metadata, ensuring fast file and directory information access. MDS scales horizontally to handle
increasing workloads, making it a key component in large-scale CephFS deployments.
Proxmox also makes Proxmox Backup Server for protecting your Proxmox data, as well as Proxmox Mail Gateway for mail flow services.
CephFS filesystem inherits all the HA and scalability benefits of the Ceph storage pool. You can have multiple CephFS
monitors, etc, in the case of a failure. These features allow the cluster to handle failed cluster nodes and leverage Ceph’s
distributed object store for data redundancy.
Understanding the Ceph storage cluster is crucial for optimal CephFS configuration. Ceph’s distributed object store runs
the file system CephFS services and provides a unified system for both file storage and block storage (vms), simplifying
the configuration and providing HA and resiliency.
Metadata servers (MDS) in CephFS are responsible for storing and managing file metadata. This is important in the overall
file system’s performance. These servers allow efficient access and writing of file data blocks, for the Ceph file system’s
scalability and speed.
If you are learning the Proxmox hypervisor or want high-availability cluster resources for learning and self-hosting services
with some resiliency, building a cluster is not too difficult. Also, you can easily create Proxmox HA virtual machine
clustering once you create cluster nodes. Let’s look at Proxmox HA virtual machine deployment and how to ensure your
VM is protected against failure and increase uptime, much like VMware HA.
Table of contents
Proxmox cluster: the starting point
Shared storage
Setting Up Your Proxmox Cluster
Key Steps in Creating a Proxmox Cluster
Configuring Virtual machine HA
High Availability Setup Requirements
Configuring HA groups (optional)
Fencing device configuration
Rebooting Proxmox Servers running HA
Frequently Asked Questions About Proxmox HA Configuration
Wrapping Up
Below, I have three nodes in a Proxmox cluster running Proxmox 8.1.3 in the Proxmox UI.
A Proxmox cluster includes multiple Proxmox servers or nodes that operate together as a logical unit to run your workloads. Understanding how to set up and manage the PVE cluster service effectively is important to ensure your VM data is protected and your VMs and containers have hardware redundancy.
Remember that this doesn’t replace all the other best practices with hardware configurations, such as redundant network
hardware and power supplies in your Proxmox hosts and UPS battery backup as the basics.
Shared storage
When you are thinking about a Proxmox cluster and virtual machine high availability, you need to consider integration with
shared storage as part of your design. Shared storage is a requirement so that all Proxmox cluster hosts have access to
the data for your VMs. If a Proxmox host goes down, the other Proxmox hosts can pick up running the VM with the data they already have access to.
You can run a Proxmox cluster where each node has local storage, but this will not allow the VM to be highly available.
For my test cluster, I configured Proxmox Ceph storage. However, many other types of shared storage can work such as
an iSCSI or other connection to a ZFS pool, etc. Below, we are navigating to Ceph and choosing to Install Ceph.
This launches the Info screen. Here I am choosing to install Ceph Reef and using the No-Subscription repo.
Starting the ceph setup
When creating the Ceph pool, you configure the following settings:
Name
Size
Min Size
Crush Rule
# of PGs
PG autoscale mode
A healthy Ceph pool after installing Ceph on all three nodes, creating OSDs, Managers, Monitors, etc.
Let's look at screenshots of creating a Proxmox cluster and joining nodes to the cluster. Navigate to Datacenter > Cluster >
Create Cluster.
This will launch the Create Cluster dialog box. Name your cluster. It will default to your primary network link. You
can Add links as a failover. Click Create.
You can then click the Cluster join information to display the information needed to join the cluster for the other nodes.
You can click the copy information button to easily copy the join information to the clipboard.
Viewing the join information
On the target node, we can click the Join Cluster button under the Datacenter > Cluster menu.
Now we can use the join information from our first node in the cluster to join additional nodes to the cluster. You will also
need the root password of the cluster node to join the other Proxmox nodes.
Entering the join information
Below, I have created a cluster with 4 Proxmox hosts running Ceph shared storage.
When provisioning Proxmox high availability, there are a number of infrastructure requirements.
1. Shared Storage Configuration: For VMs to migrate seamlessly between nodes, shared storage is a necessity as
we have mentioned above so data does not have to move during a failover.
2. The HA Manager: Proxmox’s HA manager plays a critical role in monitoring and managing the state of VMs across
the cluster. It works like an automated sysadmin. After you configure the resources it should oversee, such as VMs
and containers, the ha-manager monitors their performance and manages the failover of services to another node if
errors occur. Also, the ha-manager can process regular user commands, including starting, stopping, relocating, and
migrating services.
3. Defining HA Groups (optional) : HA groups determine how VMs are distributed across the cluster.
Let’s look at a basic example of configuring a single VM for high availability. Below, in the web interface we have navigated
to the Datacenter > HA > Resources > Add button. Click the Add button.
This will configure a service for the VM to make the VM highly available. The service will start and enter the started state.
Now, we have the VM configured for HA.
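The same resource can also be defined from the command line with ha-manager; a minimal sketch, where the VM ID 100 is a hypothetical example:
ha-manager add vm:100 --state started --max_restart 1 --max_relocate 1
ha-manager status
The status command confirms the resource is registered and shows which node is currently running it.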
Unfenced nodes can access shared resources, posing a risk. For instance, a VM on an unfenced node might still write to
shared storage even if it’s unreachable from the public network, causing race conditions and potential data loss if the VM is
started elsewhere.
Proxmox VE employs various fencing methods, including traditional ones like power cutoffs and network isolation, as well
as self-fencing using watchdog timers. These timers, integral in critical systems, reset regularly to prevent system
malfunctions. If a malfunction occurs, the timer triggers a server reboot. Proxmox VE utilizes built-in hardware watchdogs
on modern servers or falls back to the Linux Kernel softdog when necessary.
Fencing configuration in proxmox ve
Now, I simulated a failure of the Proxmox host by disconnecting the network connection. The pings to the VM start timing
out.
After just a couple of minutes, the VM restarts and starts pinging on a different host.
The proxmox ha virtual machine configuration has brought the vm back up
/etc/init.d/rgmanager stop
Yes, it’s possible to configure HA with two nodes, but it’s not ideal due to the potential risk of split-brain scenarios. For
optimal redundancy and reliability, a minimum of three nodes is recommended.
How does Proxmox handle VM migration in HA setups?
Proxmox automatically migrates VMs from a failed node to a functioning one within the cluster. This process is managed
by the HA manager, which monitors node and VM states to initiate automatic failover.
Key considerations include having a redundant network setup, ensuring reliable IP address allocation, and configuring a
separate network for cluster communication to prevent data traffic interference.
Local storage can be used, but it doesn't support live migration of VMs in case of node failure. Shared storage solutions, such as Ceph or NFS from a NAS, are preferred for true HA capabilities.
Proxmox's HA manager is designed with redundancy. If the node currently acting as the HA master fails, another node in the cluster takes over its duties, ensuring continuous monitoring and management of the HA setup.
Use live migration to move VMs to another node in order to update the original node software. This ensures that your VMs
remain operational during updates, minimizing downtime.
A quorum is used to ensure that decisions (like VM failover) are made reliably in the cluster. It prevents split-brain
scenarios by requiring a majority of nodes to agree on the cluster state.
Wrapping Up
Proxmox virtualization has some great features, including high-availability configuration for virtual machines. In this article,
we have considered the configuration of a high-availability Proxmox VE cluster and then configuring high availability for
VMs. In the comments, let me know if you are running a Proxmox VE cluster, what type of storage you are using, and any
other details you would like to share.
Proxmox firewall setup and configuration
March 2, 2023
Proxmox
Proxmox VE is a great solution for home lab environments and production workloads, including virtual machines and
containers. A great feature of Proxmox VE is the firewall, which enables administrators to manage network traffic to and
from virtual machines and containers. This article will explore the Proxmox firewall and its configuration options.
Many management options exist, including the Proxmox web interface (web GUI) or command-line interface (CLI). These
can be used to configure firewall rules and implement cluster-wide firewall configuration in your Proxmox cluster.
Zones configuration
You can divide the firewall into zones. This combines network interfaces and IP addresses. By default, notice the four
zones available in Proxmox VE.
You can also assign IP addresses to zones and create firewall rules that allow or block traffic based on the zone.
pve-firewall start
This will start the firewall service and load the firewall configuration files.
systemctl enable pve-firewall
This will enable the firewall service at boot so the firewall configuration files are loaded automatically.
iptables -L
This will display the current iptables rules managed by the Proxmox firewall service.
This will display the name of the virtual network device used by the VM.
IP Aliases
IP aliases let you associate a friendly name with an IP address or network and then reference that name in firewall rules. You can configure IP aliases in the Proxmox firewall configuration files in the /etc/pve/firewall directory.
IP Sets
IP sets define a group of IP addresses and networks that can be referenced in firewall rules. You can configure IP sets in the same firewall configuration files in the /etc/pve/firewall directory.
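As an illustration of the file format, the datacenter-level file /etc/pve/firewall/cluster.fw can contain alias and IP set sections like the following (the names and addresses are examples):
[ALIASES]
mgmt_net 10.1.149.0/24
[IPSET trusted_hosts]
10.1.149.10
10.1.149.11
These names can then be referenced in rules in place of raw addresses.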
Default Firewall Rules
A set of default firewall rules out of the box allows incoming and outgoing traffic for certain services. These include traffic
types such as SSH and HTTP. You can view the default firewall rules using the following command:
iptables -L
Security groups and IP aliases defined in the configuration files are referenced by firewall rules, which the firewall service ultimately renders into underlying iptables rules of the general form:
iptables -A <zone> -p <protocol> --dport <port> -s <source address> -d <destination address> -j <action>
Cluster Nodes
If you are using a Proxmox cluster, you can configure the firewall rules to apply to all nodes in the cluster. This is done by
configuring the underlying iptables rules automatically and using the same firewall configuration files on all nodes.
Define Rules
To define a firewall rule in the Proxmox firewall, you need to edit the appropriate configuration file in the /etc/pve/firewall directory. The firewall service translates each rule into an underlying iptables rule of the general form:
iptables -A <zone> -p <protocol> --dport <port> -s <source address> -d <destination address> -j <action>
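For reference, the per-VM configuration files themselves use Proxmox's own rule syntax rather than raw iptables. A minimal sketch of what /etc/pve/firewall/100.fw might look like (the VM ID, ports, and source network are examples):
[OPTIONS]
enable: 1
[RULES]
IN ACCEPT -p tcp -dport 22 -source 10.1.149.0/24
IN ACCEPT -p tcp -dport 8006
The firewall service reads this file and generates the corresponding iptables rules automatically.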
Outgoing Traffic
You can also configure the Proxmox firewall to filter outgoing traffic based on the destination IP address, protocol, and port.
CLI Commands
Several CLI commands can be used to manage the Proxmox firewall service, such as pve-firewall start, pve-firewall stop, pve-firewall restart, and pve-firewall status.
Remote IPs
You can manage access from remote IP addresses to bolster security. You can configure firewall rules for remote IPs in the firewall configuration files located in the /etc/pve/firewall directory.
Configuration Files
The Proxmox firewall is configured using several configuration files in the /etc/pve/firewall directory. These files define
firewall macros, security groups, IP aliases, and firewall rules.
HTTP Traffic
You can also filter HTTP traffic in your Proxmox environment. You can configure firewall rules for HTTP connections in the firewall configuration files located in the /etc/pve/firewall directory.
Create Rules
When creating a firewall rule, you need to edit the appropriate configuration file in the /etc/pve/firewall directory. As above, each rule is rendered into an underlying iptables rule of the general form:
iptables -A <zone> -p <protocol> --dport <port> -s <source address> -d <destination address> -j <action>
Wrapping up
The Proxmox firewall is a great tool admins can use to manage and control traffic to the Proxmox data center, Proxmox
hosts, and virtual machines and containers running in the environment. The firewall is based on the Linux iptables firewall
and is managed using several configuration files located in the /etc/pve/firewall directory.
The Proxmox cluster firewall rules are distributed in nature and synchronized between all cluster nodes. It is a great
capability that can effectively help secure workloads and Proxmox environments.
Proxmox Container vs VM features and configuration
September 29, 2022
Proxmox
Proxmox is an extremely versatile hypervisor that provides the ability to run a Proxmox Container vs VM as part of the
native functionality built into the platform. This is a great way to have the best of both worlds. It allows for solving multiple application challenges and serving multiple purposes with a single hypervisor. You can use these for production or test environments.
Instead of running multiple virtual machines (VM workloads) to host services, you can run the LXC containers on the host
system for more efficient environments. Let’s explore the topic of Proxmox containers vs VM instances and see how
running virtual machines in Proxmox differs from Proxmox containers.
This can help with the performance of spinning up applications and set up access to resources much more quickly. There are many reasons why one is preferred over the other, and depending on the use case, one may be the better choice.
Docker containers are arguably the most popular container technology used in the enterprise today. They focus on running
applications and all their dependencies in a seamless, self-contained way and allow provisioning a single-purpose
application environment for running applications.
LXC containers are very much like virtual machines, but significantly lighter weight since they share the kernel with the LXC host. They do not require the disk space or other resources of full VMs.
LXC containers aim to align with a specific distribution of Linux. However, Docker containers aim to be distro-less and
focus on the applications and dependencies. Virtual machines have their own kernel instance as opposed to the shared
kernel instance with containers.
Many may not realize that Docker originally built on top of Linux LXC containers before moving to its own runtime. Both LXC and Docker containers share the kernel with the container host.
Container vs VM
A virtual machine can run any operating system you want inside the VM, with its own kernel instance, and provides the best isolation for server resources. Containers share the kernel instance with the physical server's Linux instance, so the container operating system is shared with the host. Both have the hardware abstracted using virtualization technologies. The user does not know whether they are accessing virtual machines or containers when accessing resources.
Overhead
The overhead of running multiple virtual machines is much higher than the overhead of running multiple containers. If users need access to a desktop or desktop resources, virtual machines are needed for this purpose. Containers are faster to provision, and the effort involved is generally lower.
Persistence
Virtual machines are generally considered persistent and have to maintain lifecycle management, etc. Whereas containers
offer the ability to have ephemeral resources. The time to boot a container is minimal.
You will see the choice in the menu for Create VM or Create CT on the host system. Again the main difference is you are
creating a full virtual machine or an LXC container.
Backups
In terms of backups, you can back up both containers and VMs in Proxmox VE. This is a great option since many solutions allow backing up virtual machines but do not support containers.
Let's look at the configuration steps to create Proxmox containers and see what configuration is involved. Incidentally, the screens for creating a virtual machine are basically the same, so we will look at the container screens since these are probably the least familiar. Again, with containers, we are using a virtualization option that shares the same kernel instance with the Proxmox host.
When you choose the New CT option, you will begin the Create: LXC Container wizard. Below you will see the first
screen has you define:
Node
CTID
Hostname
Privileges
Nesting
Resource Pool
Password
This screen helps establish the basics of connectivity, authentication, and a few other data configurations for the container
instance.
On the next screen, you choose the Proxmox containers template that will be used for spinning up the LXC container. As
you can see below, I have pulled down an Ubuntu 22.04 container image to spin up a new system.
Choosing storage
Next, we select the disk storage needed for the LXC container. Below, I have selected the storage for the container file
storage using the Proxmox tool.
Configuring the CPU settings
Next, we select the CPU resources needed for the container. We can set the number of cores for the new container.
Configuring memory
We need to assign the memory value for the new container in Proxmox.
Network configuration
Now, we create network resources for the new LXC container running in Proxmox. Proxmox containers can have all of the normal virtual machine network configuration we are used to, such as a VLAN tag and static or DHCP IP address configuration, just as you would configure for any other system running on Proxmox VE.
DNS configuration
Going along with the network configuration on the next screen we have the DNS configuration.
Confirming the creation of the new LXC container
Finally, we get to the point of finishing out the Proxmox VE configuration. Here we can review the settings for the new container before clicking Finish to create it.
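For reference, the same container could also be created from the shell with pct; a rough sketch, where the ID, template, storage, and bridge names are examples that would need to match your environment:
pct create 200 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname test-ct --cores 2 --memory 2048 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 200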
Accessing the console of the container for command line access
Below, you can easily access the container’s command line from the Proxmox VE web interface.
Converting virtual machines and containers to templates
In Proxmox VE, you can convert both virtual machines and containers to templates. Templates are a way to easily save a
copy with the configuration included for a virtual machine or a container so these can be quickly spun up from the template.
You can convert Windows, Linux, and other operating systems to templates and easily spin these up for quick deployment from a common mount point.
What are Proxmox containers vs VM? Containers vs VM in Proxmox VE provides very robust and diverse capabilities that allow solving many different challenges from a technical and business perspective.
What is the difference between Docker vs. LXC containers? Docker is focused on applications, while LXC containers are focused on distributions and more VM-specific functionality.
Wrapping Up
Proxmox has a wide range of features. When looking at Proxmox container vs VM functionality, it covers it all. Using LXC
containers you can quickly spin up environments. Virtual Machines allow spinning up isolated environments with their own
kernel instance for the most isolation. However, containers are still a secure way to run applications and spin up
environments for users to access applications and resources.
Proxmox Containers with Fedora CoreOS Install
February 14, 2024
Proxmox
I recently looked at the installation of Fedora CoreOS on VMware. In the home lab, many are running Proxmox, and likely more will be switching from VMware over the course of 2024. Let's take a look at the topic of running Proxmox Containers with Fedora CoreOS setup.
Table of contents
Proxmox and Fedora CoreOS
Fedora CoreOS
Proxmox
Fedora CoreOS install on Proxmox
1. Clone the community repo
2. Enable snippets on a Proxmox VE storage repository
3. Run the included shell script
4. Configure the cloud-init settings for the created template
5. Clone the template VM to a new Fedora CoreOS virtual machine
Wrapping up Proxmox containers Fedora CoreOS install
Fedora CoreOS
While you can run LXC container (Linux containers) configurations and container templates natively inside Proxmox, many developers and DevOps guys need access to Docker containers. In the home lab, most solutions you want to self-host are readily available as Docker containers, without the need to run full virtual machine instances with complete operating systems. So, running Docker is a great way to have access to these solutions.
Fedora CoreOS has containerized infrastructure as its focus. If you look at the documentation, it offers an automatically updating, minimal, container-centric operating system that natively runs Docker and Podman and can also run Kubernetes, with good support across the board. It is also immutable, which brings security benefits for customers running it for their container clusters.
So if you are looking for a clean, secure, and immutable OS to run your containers on top of Proxmox, CoreOS is a great
solution!
Proxmox
Running it on top of Proxmox has other benefits, such as the ability to use the Proxmox Backup Server solution to back up the host's virtual machines. Proxmox also has powerful networking and monitoring. Businesses can even choose enterprise support, and home labbers can become members of the Proxmox support forum (to search threads, post requests, and share issues and troubleshooting with others), with access to other Proxmox solutions, like Proxmox Mail Gateway.
We need to clone the repository that contains the community script for deploying the Fedora CoreOS installation. You can
clone the following repository:
https://github.com/GECO-IT/fedora-coreos-proxmox.git
I did this directly from my Proxmox VE host. Just install the git tools if you haven’t already: apt install git -y
You will see the fedora-coreos-proxmox folder. If you cd inside the folder, you will see a vmsetup.sh wrapper script that
is the script we will run on the Proxmox VE server. The fedora-coreos-<version>.yaml serves as the ignition file for
CoreOS. The Ignition file configures all the required settings.
Cloning the repo down from the community to install fedora coreos in proxmox
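If you haven't cloned it yet, the commands on the Proxmox VE host look like this (the repository URL is the one referenced above):
git clone https://github.com/GECO-IT/fedora-coreos-proxmox.git
cd fedora-coreos-proxmox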
You will see the lines I have changed below in the top part of the script. I have changed
the TEMPLATE_VMSTORAGE and SNIPPET_STORAGE to the locations I wanted. Also, the vmsetup.sh script was quite
a bit behind on the Fedora CoreOS version it was looking to deploy. So, I updated that to the latest at the time of writing in
the file below.
Changing the template and snippet storage in the shell script
Make a connection in a browser to the URL of your Proxmox web UI. If you want to go with the default local location, or any other location, navigate to Datacenter > Storage > “your storage” in the menu navigation, then click Edit.
Add Snippets to the Content dropdown. Then click OK. You can also create a new folder and enable it with the snippets
content type if you want something specific set aside for this use case. Just create a new folder, enter a description if you
like, and make changes to the vmsetup.sh file.
Enabling snippets on the local storage repository
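With the storage prepared and the variables in vmsetup.sh updated, the script can be run from the cloned repository directory on the Proxmox host; a minimal sketch:
bash vmsetup.sh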
It will automatically download the Fedora CoreOS image in the QEMU QCOW format needed. The Fedora CoreOS
container templates in Proxmox streamline the deployment process. The Fedora CoreOS template allows for new
container creations and uses the configuration files for settings specific to Fedora CoreOS to ensure smooth operation
within the Proxmox environment.
Running the vmsetup.sh script
The process should complete with the message at the bottom: Convert VM 900 in proxmox vm template.
The vm is converted to a template
If you hop over to the Proxmox web interface, you will see the new virtual machine template.
Next, configure the cloud-init settings for the created template (see the CLI sketch after this list). These include:
User
Password (passwd)
DNS domain
DNS servers
SSH public key
Upgrade packages
IP Config (defaults to the default Linux bridge)
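A rough CLI equivalent for setting a couple of the cloud-init values on the template and then cloning it to a new VM (the new VM ID, name, and SSH key path are examples; VM 900 and the linuxadmin user come from this walkthrough):
qm set 900 --ciuser linuxadmin --sshkeys /root/.ssh/id_rsa.pub --ipconfig0 ip=dhcp
qm clone 900 101 --name fcos-01 --full
qm start 101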
The clone task completes successfully. As you can see, the process to spin up quick Dev workloads for app development,
websites, working with source, etc, is easy.
The cloning task is successful for cloning a new proxmox fedora coreos installation
Now, we boot the new virtual machine and it boots. We can see the OS loading the config from the Ignition file.
Booting the new fedora coreos installation in proxmox
After the machine fully boots and grabs an IP address, I log into the VM on the console using the linuxadmin user I had specified in the cloud-init settings.
Success! I can log in with the new linuxadmin user, showing the VM has used our cloud-init settings.
Logging into the fedora coreos virtual machine
The Fedora CoreOS installation already has Docker preinstalled, so we can create containers, including system containers
and application containers immediately after cloning over new VMs. Now all we need to do is start spinning up our
containers.
Fedora coreos comes out of the box ready to run docker containers
Fedora CoreOS applies SELinux and auto-updates to enhance security. It isolates containers effectively, using the host
kernel safely, ensuring a secure container environment within Proxmox.
Fedora CoreOS’s minimal design and auto-update capabilities make it ideal for Proxmox, ensuring a lightweight, secure
base for containers. Its compatibility with container orchestration tools like Kubernetes simplifies management.
Yes. Fedora CoreOS supports Docker, allowing for a smooth transition of Docker containers to your Proxmox setup,
maintaining flexibility across different container technologies.
Proxmox’s container templates provide ready-to-use Fedora CoreOS images, simplifying setup. They enable quick
deployment, ensuring containers are configured with the necessary settings from the start.
What are LXC containers in Proxmox?
LXC containers are Linux container instances that provide a very “full operating system-like” experience without the need to run a full virtual machine for running other operating systems.
Proxmox allows for easy storage and network adjustments via its web interface or CLI. For Fedora CoreOS containers,
settings can be tailored during setup or altered later to meet changing demands.
Keep Fedora CoreOS templates updated and watch for new releases. Automatic updates in Fedora CoreOS help keep
your system secure with minimal manual effort.
Yes, Fedora CoreOS is designed for containers, making it equally or more efficient than traditional Linux distributions in
Proxmox environments by optimizing resource use.
Leverage Proxmox’s backup tools or integrate third-party solutions to secure Fedora CoreOS containers, ensuring data
protection and quick recovery in case of data loss.
Specific application needs might highlight Fedora CoreOS limitations and require troubleshooting, such as software
compatibility or resource requirements. Evaluating these aspects early helps tailor the Proxmox environment to your
needs.
Proxmox is an open-source virtualization platform that allows users to create and manage virtual machines and containers.
One of the benefits of Proxmox is its ability to automate tasks using helper scripts. Helper scripts are small programs that automate routine tasks, such as backups and migrations, and make managing Proxmox much easier.
This blog post will discuss Proxmox scripts, how they work, and some examples of Proxmox helper scripts you can use to
automate tasks in your environment.
Table of contents
What are Scripts?
How do Scripts work?
Examples of Scripts
Backup script
Migration Script
Firewall Script
Benefits of Scripts
Scripts FAQs
Wrapping up
Helper scripts are typically run from the command line and can be run manually or scheduled to run automatically at
specific times. Helper scripts can also be integrated with other tools like monitoring software to provide additional
functionality.
Examples of Scripts
There are many great scripts from third-party sites that are easy to find. Sourcing scripts from blogs, videos, and other
resources is a great way to find scripts useful for VM or LXC management in your Proxmox environment. However, there
are many other types of useful Proxmox scripts, including the following examples, which do not require installing any additional components or other prerequisites:
Backup script
One of the most common tasks that users perform in a Proxmox environment is creating backups of virtual machines. The
Backup Script automates this task by creating backups of selected virtual machines and storing them in a specified
location.
The script prompts the user for the ID of the virtual machine to be backed up and the location where the backup should be stored. Once the necessary information is provided, the script uses Proxmox's vzdump tool to create a backup of the specified virtual machine and store it in the specified location.
Note the following code examples. Please note that these scripts are just examples and may need modification to work in your specific environment. Additionally, it is important to always test scripts in a non-production environment before running them in a production environment.
Backup Script:
#!/bin/bash
read -p "Enter the ID of the virtual machine to be backed up: " vm_id
read -p "Enter the directory where the backup should be stored: " backup_dir
# vzdump performs the backup; snapshot mode keeps the VM running (run this on the Proxmox host)
vzdump "$vm_id" --dumpdir "$backup_dir" --mode snapshot
Another common task in an environment is migrating virtual machines from one host to another. The Proxmox Migration
Script automates this task by migrating selected virtual machines from one Proxmox host to another.
The script prompts the user for the ID of the virtual machine to be migrated and the name of the target host. Once the necessary information is provided, the script uses the qm migrate command to move the specified virtual machine to the specified target host.
Migration Script:
#!/bin/bash
read -p "Enter the ID of the virtual machine to be migrated: " vm_id
read -p "Enter the name of the target node: " target_node
# qm migrate moves the VM to the target node; --online keeps it running during the move
qm migrate "$vm_id" "$target_node" --online
Firewall Script
Proxmox Firewall Script is a script that automates the configuration of the firewall rules efficiently in an environment. The
script prompts the user for the IP address and port number to be blocked or allowed. Once the information is provided, the
script uses the Proxmox API to configure the firewall rules accordingly.
Firewall Script:
#!/bin/bash
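The snippet above only contains the shebang line; a fuller sketch using pvesh (the CLI front end to the Proxmox API) to add an inbound allow rule for a VM might look like this, assuming the node name matches the host's hostname:
#!/bin/bash
read -p "Enter the VM ID: " vm_id
read -p "Enter the source IP address to allow: " src_ip
read -p "Enter the TCP port to allow: " port
# Add an inbound ACCEPT rule for the VM through the Proxmox API
pvesh create /nodes/$(hostname)/qemu/"$vm_id"/firewall/rules \
  --type in --action ACCEPT --proto tcp --dport "$port" --source "$src_ip" --enable 1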
Benefits of Scripts
Script automation is a powerful tool that automates routine tasks in a Proxmox environment. These scripts can be written in
various programming languages, and they interact with the Proxmox API to perform tasks such as creating backups,
migrating virtual machines, and managing network configurations.
Users can save time and streamline their workflows by using helper scripts. With the examples in this blog post, you can
create helper scripts to automate tasks in your Proxmox environment. You can create custom scripts that meet your
specific needs with some programming knowledge.
Additionally, many online resources provide pre-built scripts that you can use to automate tasks. Proxmox provides a
GitHub repository containing a collection of useful helper scripts you can use as a starting point for your own scripts.
Additionally, there are many online communities, such as the forum, where users share their scripts and offer support to
others.
Scripts FAQs
Why are helper scripts important? Scripts are a great way to introduce automation into your environment. Using
scripting and automated tasks helps to make operations much more streamlined, effective, and repeatable.
What technologies can you use for Scripts? You can use built-in Bash scripting for automation, Ansible configuration
management, or even PowerShell works well for automated environments.
Why use Proxmox in your environment? It is a great hypervisor with many features and capabilities for running home
labs or production workloads.
Wrapping up
Proxmox helper scripts are essential for managing and automating tasks in an environment. By leveraging the power of the
API, users can create custom scripts that automate routine tasks and save time.
Whether you are a seasoned user or a newcomer to virtualization, learning how to use helper scripts can help you
streamline your workflow and get the most out of your Proxmox environment.
Proxmox scripts PowerShell Ansible and Terraform
January 20, 2023
Proxmox
Proxmox is growing more and more popular, especially for home lab enthusiasts and those looking to spin up labs based
on totally free and open-source software. Proxmox has a great API that allows throwing automation tasks at the solution
and creating Proxmox helper scripts for automating your Proxmox environment.
Infrastructure as code
Due to the massive shift to cloud-based technologies, today's infrastructure services are driven by infrastructure as code. It allows admins to commit code to a repository, version that code, and manage the other resources it describes.
Proxmox VE uses a ticket or token-based authentication. All requests to the API need to include a ticket inside a Cookie
(header) or send an API token through the Authorization header.
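A rough example of calling the API with a token from the shell (the host name, token ID, and secret are placeholders):
curl -k -H "Authorization: PVEAPIToken=root@pam!automation=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  https://pve-host.example.com:8006/api2/json/nodes
The same token-based authentication is what PowerShell, Ansible, and Terraform tooling for Proxmox uses under the hood.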
Wrapping up
Hopefully, this quick guide to Proxmox scripts with PowerShell, Ansible, and Terraform shows there are many great ways to
automate Proxmox and create infrastructure as code in your Proxmox VE environment. With the RESTful API-driven
automation provided by Proxmox, you can build infrastructure as code quickly and easily.
Proxmox Backup Server: Ultimate Install, Backup, and Restore Guide
December 4, 2023
Proxmox
Backups are essential to running Proxmox VE in the home lab or production to avoid data loss. Proxmox Backup Server is
a free solution to back up and recover Proxmox VE VMs and containers.
Table of contents
What is Proxmox Backup Server?
Proxmox Backup Server features
Proxmox Backup Server installation step-by-step instructions
Logging into the Proxmox Backup client interface
Adding a datastore for storing backups
Add the Proxmox Backup Server instance to your Proxmox VE server
Creating a backup job
Restoring a Proxmox virtual machine from backup
Granular file restore
Frequently Asked Questions
Proxmox Backup Server as a Comprehensive Solution for Backup and Restore
Since many commercial enterprise backup products don’t protect Proxmox, it is great to see that Proxmox has a backup
server to protect your Proxmox VE virtual machine workloads. You can also run it on your own hardware and select that
hardware based on your own resource usage, so the specific CPU, memory, disk, and platform you choose may vary
according to your strategy and needs.
Efficient Data Backup and Recovery: It provides reliable and quick backup and restoration of virtual machines,
containers, and physical hosts. PBS also features incremental backups, where only the changes made since the last
backup are stored.
Incremental Backup Support: Only backs up data that has changed since the last backup, to reduce backup time
and storage requirements.
Data Deduplication: Reduces the storage space required for backups by only storing unique data blocks.
Data Backup Encryption: Uses encryption during transfer and at rest.
Web User Interface: Using a web browser, you can manage backups, restore data, and configure settings.
Compression Options: Supports data compression to further reduce the storage space needed for backups.
Snapshot Functionality: Allows for creating snapshots of data, enabling point-in-time recoveries.
ZFS Support: Integrates with ZFS (Zettabyte File System) for efficient storage management and high data integrity.
Flexible Storage Options: Supports various storage backends, including local directories, NFS targets, and
SMB/CIFS.
Replication: You can replicate your backup data to remote sites. This helps to create a 3-2-1 disaster recovery
model.
Role-Based Access Control: Allows granular control over user access and permissions.
Backup Scheduling: Automates the backup process through customizable scheduling.
Email Notification System: Sends automated email notifications regarding backup jobs and system status.
API for Automation: Provides a REST API for easy integration with other systems and automation of backup tasks.
Support for Multiple Clients: Compatible with various clients, including Proxmox VE, Linux, and others.
Backup Retention Policies: Customizable retention policies to maintain a balance between storage space and
backup availability.
Bandwidth Throttling: Manages network load by controlling the bandwidth used for backup operations.
Plugin System for Extensibility: Supports plugins for extending functionality and integrating with other systems.
Backup Verification: Includes features to verify the integrity of backups, ensuring recoverability.
Proxmox VE Integration: Seamlessly integrates with Proxmox Virtual Environment for centralized management of
virtualized infrastructure and backups.
First, you will need to download the PBS release ISO image from Proxmox here: Proxmox Backup Server.
Once you have the ISO file, “burn” the software to a USB flash drive or upload it to your Proxmox VE host if you are
hosting your backup server as a virtual machine.
Below is a screenshot of the Proxmox Backup Server virtual machine booting from the ISO installation.
Running the PBS installer
After the Proxmox Backup Server boots, you will see the default text splash screen directing you to open a browser to start
managing the server. Note the port 8007, which is different from the Proxmox VE port 8006 for accessing the GUI. From
the console itself, you also have access to the command line interface (CLI).
Proxmox backup server login
This will launch the Add Datastore dialog box. Name the new datastore and then enter the backing path. You don’t have
to create the backing path location as the process will do this for you. Click Add. This will add the backup storage location
to local storage on your Proxmox Backup Server.
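As an aside, you can also create a datastore from the PBS command line with proxmox-backup-manager; the datastore
name and backing path below are just example values:
# Create a datastore named backup01 backed by a local path (example values)
proxmox-backup-manager datastore create backup01 /mnt/datastore/backup01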
Naming the datastore and setting the path
Navigate to Datacenter > Storage > Add > Proxmox Backup Server.
Adding PBS storage to your proxmox ve instance
This will launch the Add: Proxmox Backup Server dialog box. Here we enter the following information:
ID
Server
Username
Password
Datastore
Fingerprint
Navigate back to your Proxmox Backup Server and click on the Dashboard > Show Fingerprint button.
Show fingerprint on PBS
After adding the fingerprint and clicking the Add button, we see the Proxmox Backup Server listed.
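The same storage can also be added from the Proxmox VE shell with pvesm. This is only a sketch; the storage ID, server
address, datastore name, credentials, and fingerprint below are placeholders for your own values:
# Add a Proxmox Backup Server storage entry to Proxmox VE (all values are examples)
pvesm add pbs pbs-storage --server 192.168.1.50 --datastore backup01 \
  --username root@pam --password 'YourPasswordHere' --fingerprint 'XX:XX:...:XX'  # value from Dashboard > Show Fingerprint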
You might think we would do this from the Proxmox Backup Server side. However, we create the Proxmox backup job from
the Proxmox VE host. Proxmox Backup Server offers advanced features like snapshot mode and customizable backup
retention policies.
On the General tab, we can set the main backup options for our Proxmox backup job, such as the schedule, selection
mode, compression, and backup mode.
On the Retention screen, you can configure everything related to retention and archiving, such as how many of the most
recent, hourly, daily, weekly, monthly, and yearly backups to keep.
You will also note that you can navigate to the Proxmox VE host virtual machine, select Backup, and back up your VM
from there. Make sure you choose your PBS storage location.
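For a one-off backup from the command line, vzdump can target the PBS storage directly; the VMID and storage ID below
are example values:
# Back up VM 100 to the PBS storage entry in snapshot mode (example values)
vzdump 100 --storage pbs-storage --mode snapshot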
Confirm your backup storage
After you run the backup, you can see the vzdump backup details for the VM by selecting your PBS storage location.
Look at the backups on your remote PBS datastore
You can select to start the VM and also perform a Live restore.
Choosing to overwrite with a live restore and power on the vm
The task will progress and should finish successfully. The speed will depend on the network bandwidth you have between
your Proxmox VE host and your Proxmox Backup Server.
The restore task completes successfully
You can also click the File Restore button to restore individual files from the backups.
Running a file restore for a proxmox virtual machine
Proxmox Backup Server improves data security with features like backup encryption and secure data transfer protocols.
This means your backups are protected from unauthorized access and potential threats.
Can I use PBS for backing up physical servers as well as virtual environments?
Yes, it can handle backups for both virtual machines and physical hosts. However, based on the Proxmox documentation,
tighter integration for physical hosts is still more of a roadmap item. You can use the Proxmox Backup Server client to
back up a physical host today. Check the thread here: PBS: How to back up physical servers?
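As a rough sketch, backing up a physical Linux host with the proxmox-backup-client looks something like this (the
repository string uses a placeholder user, server, and datastore name):
# Back up the root filesystem of a physical host as a pxar archive to a PBS datastore (placeholder repository)
proxmox-backup-client backup root.pxar:/ --repository backupuser@pbs@192.168.1.50:backup01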
Incremental backups save time and storage space. By only backing up data that has changed since the last backup, the
load on the network and storage is reduced.
Proxmox Backup Server allows you to automate backup jobs through its scheduling feature. This automation ensures
regular backups without manual intervention, making backup management more efficient.
In the web interface, you can configure backups and settings, restore data, and manage network traffic settings, user
access, and two-factor authentication (2FA).
How does Proxmox Backup Server handle network load during backup operations?
It includes features for managing network load, such as bandwidth throttling. This ensures that backup operations do not
overwhelm the network, maintaining optimal performance.
What are the best practices for configuring user permissions in Proxmox Backup Server?
Configuring user permissions involves assigning roles and access rights based on user responsibilities and requirements.
This ensures that users have appropriate access to backup functions while maintaining data security.
With features like the file restore button and snapshot mode, it enables quick and efficient data restoration. These
capabilities are crucial for minimizing downtime in case of data loss.
Proxmox SDN Configuration Step-by-Step
Proxmox
With the release of Proxmox 8.1, Proxmox introduced new networking features in the way of Proxmox SDN, or “software
defined networking,” fully integrated out of the box for use in the datacenter. Software-defined networking moves
networking into software, so you can spin up new networks, subnets, IP ranges, DHCP servers, and more without needing
physical network devices. Proxmox SDN allows you to create these virtualized network infrastructures. This post will look
at Proxmox SDN configuration step-by-step and how it is set up.
Table of contents
Introduction to Proxmox SDN
Comparison with VMware NSX
Use Cases of Proxmox SDN
Prerequisites
Setting Up Proxmox SDN
1. Create a Simple SDN Zone
2. Create a VNet
3. Create a Subnet and DHCP range
4. Apply the SDN configuration
Connect Virtual Machines and Containers to the SDN network
Key points to remember
Wrapping up Proxmox SDN configuration
Also, you can create your own isolated private network on each Proxmox VE server and span it across multiple Proxmox
VE clusters in many different locations.
Prerequisites
While Proxmox version 8.1 has the SDN components preloaded and the integration available, according to the
documentation, you will need to install the SDN package on every node in the cluster when running Proxmox 7.x:
apt update
apt install libpve-network-perl
After installation, you need to ensure that the following line is present at the end of the /etc/network/interfaces
configuration file on all nodes:
source /etc/network/interfaces.d/*
Proxmox requires the dnsmasq package for SDN functionality to enable features like DHCP management and network
addressing. To install the dnsmasq package:
apt update
apt install dnsmasq
# disable default instance
systemctl disable --now dnsmasq
To install Proxmox SDN as a simple network, we will work in the following order: create a Simple SDN zone, create a VNet, create a subnet and DHCP range, and then apply the SDN configuration.
There are a few types of Zones you can create. These include:
Simple: The simple configuration is an Isolated Bridge that provides a simple layer 3 routing bridge (NAT)
VLAN: Virtual LANs enable the traditional method of dividing up a LAN. The VLAN zone uses an existing local Linux
or OVS bridge to connect to the Proxmox VE host’s NIC
QinQ: Stacked VLAN (IEEE 802.1ad)
VXLAN: Layer 2 VXLAN network that is created using a UDP tunnel
EVPN (BGP EVPN): VXLAN that uses BGP to create Layer 3 routing. In this config, you create exit nodes to force
traffic through a primary exit node instead of using load balancing between nodes.
First, we need to create a new Zone. For this walkthrough, we will just be creating a Simple Zone. Login to your Proxmox
node in a browser as root for the proper permissions. At the datacenter level, navigate to SDN > Zones > Add.
Creating a new zone in proxmox sdn
The SDN Zone configuration also allows you to set the zone for automatic DHCP configuration, which will allow your VMs
to pull an IP address from the VNet and Subnet configuration we will set up below. You can also set the MTU value for the
size of the Ethernet frames (packets) and the DNS configuration, including DNS server, DNS zone, etc. In this example, I
am creating an SDN Zone called sdn01.
The MTU value is important to note: because VXLAN uses 50 bytes to encapsulate each packet, you need to set the MTU
50 bytes lower than the normal MTU value. If left on auto, it will default to 1450. In the case of VXLAN with IPsec security,
you will need to reduce the MTU by 60 bytes for IPv4 or 80 bytes for IPv6 for guest traffic, or you will see connectivity
issues that can be hard to track down.
Enabling automatic dhcp
2. Create a VNet
Next, we need to create a VNet in PVE. Navigate to the VNet menu under the SDN menu and click to Create a new VNet.
Beginning the process to create a new vnet
Create a name for the VNet and select the Zone we created above. You also have the option to make these VLAN aware
with a tag and also create an alias.
Configuring the new vnet in proxmox sdn
After creating the VNet, we can create a Subnet. Click the Create button on the Subnets screen.
Creating a new subnet in proxmox sdn
Enter your IP address CIDR information and Gateway. If you populate the Gateway here, your Proxmox server will assume
this IP address. Also, you can check the SNAT box. This will allow your VMs connected to the SDN network to easily
connect to external networks beyond the SDN network (aka the Internet and your physical network) by masquerading as
the IP and MAC of the host. Click Create.
Creating a new subnet
Click on the DHCP Ranges and enter your start and end address for the DHCP range. It will hand out addresses from this
range of IPv4 IPs.
Creating a dhcp range in proxmox sdn
After clicking OK, we will see the new VNet and Subnet displayed.
Looking at the vnets and subnets created
We are not setting anything in the Options screen or IPAM. However, let’s take a look at what those screens look like.
Under the Options screen, in the Controllers section, we can add network controllers for more advanced setups like
VXLAN, which configure network tunnels between peers (the Proxmox nodes). In the Controllers section, we can add
EVPN, EBGP, and ISIS controllers.
BGP controllers are not used directly by a zone; they let you configure FRR to manage BGP peers, and a BGP-EVPN
configuration can define a different ASN per node. When you click the controller dropdown, you will see a list of options.
Looking at controllers and options available in proxmox sdn
Looking at ipam
It is very important to understand that the configuration we have built so far has not been applied yet; it is only staged,
so to speak. You need to click the SDN parent menu and click the Apply button.
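For reference, a similar simple zone, VNet, and subnet can be staged and applied from the CLI with pvesh against the
SDN API. This is only a sketch using the example names from this walkthrough, and the exact parameter names may vary
by Proxmox version:
# Stage a simple zone, a VNet, and a subnet (example names and addresses; verify parameters against your PVE version)
pvesh create /cluster/sdn/zones --zone sdn01 --type simple
pvesh create /cluster/sdn/vnets --vnet vnet01 --zone sdn01
pvesh create /cluster/sdn/vnets/vnet01/subnets --subnet 10.10.10.0/24 --type subnet --gateway 10.10.10.1 --snat 1
# Apply the staged SDN configuration (equivalent to clicking Apply in the GUI)
pvesh set /cluster/sdn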
Apply the proxmox sdn configuration
Now we see the new SDN network status after the configuration is applied and the Proxmox networking services are
restarted.
Viewing the new configuration applied in proxmox
Below, you see the summary screen of creating a new virtual machine and we see I have connected it to the new SDN
network.
Summary of new vm creation details
After installing Ubuntu, the VM correctly grabs a DHCP address from the range configured. Also, we can ping the gateway
that was established in the configuration. Keep in mind how cool this really is. We have a network with total separation
from the other physical network technologies for VM traffic and it is totally defined in software.
New virtual machine pulls a dhcp address from proxmox sdn
Network interfaces are the gateways between your virtual machines and the broader network (Internet). Make sure to give
attention to detail to configure these correctly for proper connectivity and optimal performance.
VLANs enable you to segment your network into isolated sections. With VLANs, you can create secure, organized
network zones.
VXLAN zones extend VLAN capabilities and create overlay networks across even different physical network locations.
With VXLAN, you can build a complex, scalable network architecture.
Advanced Proxmox SDN features range from automatic DHCP assignment to IP address management (IPAM).
Understand how you can use these features to enhance your network management.
Creating virtual zones within Proxmox SDN allows network traffic segregation. This enhances the security and
performance of your network. Traffic isolation is crucial for security.
Proxmox SDN is easy to configure and you can create a simple new network as shown in the walkthrough to start playing
around with the new feature in your home lab. Let me know in the comments or VHT forum if you have played around with
Proxmox SDN as of yet and what use cases you are finding in the home lab.
pfSense Proxmox Install Process and Configuration
August 26, 2022
Proxmox
Many great open source solutions are available these days for many use cases, including security, networking, routing, etc.
Two of those are pfSense and Proxmox server. Proxmox VE is an open-source solution that you can easily download for
free and use to run a pfSense VM for routing, virtual network interfaces, firewall capabilities, etc. Let’s dive into the
pfSense Proxmox install process and configuration and see what steps are involved.
pfSense on Proxmox installation and configuration - Step-by-step
https://youtube.com/watch?v=mwDv790YoZ0
You can also run a Proxmox cluster for the highest availability requirements and for failover purposes.
Running pfSense on a Proxmox server is a great way to get powerful features for no cost, running on commodity bare
metal hardware. Proxmox provides many enterprise hypervisor features, including backups that can be enabled for newly
created virtual machines running in Proxmox server.
Proxmox hosts can run on a bare metal server or run as a virtual machine itself. If you would like to see how to run
Proxmox Server as a nested VMware virtual machine, check out my post here: Nested Proxmox VMware installation in
ESXi – Virtualization Howto
What is pfSense?
First of all, what is pfSense? The pfSense solution is a secure and widely used firewall distribution that is available as a
virtual machine appliance or running on hardware platforms from Netgate.
Either way, you get network interfaces, whether physical hardware ports or virtual machine network interfaces, that allow
you to route, firewall, and connect traffic to your network as you would with any other enterprise firewall solution.
Netgate hardware
After you click Upload, you will see the upload progress. Then, the screen below should display, indicating the pfSense
ISO image was uploaded successfully.
Note the following tabs and how they are configured with the new pfSense VM.
On the general tab, configure a name for the new pfSense VM.
Configure a name for the new pfSense VM
OS settings
On the OS tab, here is where we select the ISO image that we uploaded earlier.
Select the pfSense ISO image
System tab
Disks
On the disk screen, you select where you want to install pfSense, the disk size, bus device information, etc.
Select the storage location for the pfSense VM in Proxmox
CPU tab
On the CPU tab, you can configure the number of CPU sockets and cores.
Configure CPU settings
Memory tab
On the memory tab, you configure how much memory you want to allocate to the pfSense VM.
Configure memory settings for pfSense
Networking Tab
On the network tab, you configure the network interfaces you want to use for your pfSense VM running on your Proxmox
host. There are differences to think about depending on whether you are running pfSense on physical hardware with
physical interface ports or a virtual machine running pfSense.
Here, on the creation screen, we can just accept the defaults and then we will change a couple of settings once we have
the VM created. Note on the screen the settings you can configure, including bridge ports, VLAN tag, firewall, model, MAC
address, etc.
Configure the network settings for the pfSense VM
Confirm tab
On the confirm tab, we can confirm the settings used to create the VM for pfSense.
Confirm the configuration settings for the new pfSense VM
After you click Create VM, Proxmox creates the pfSense virtual machine so we can install pfSense as a guest OS in the
Proxmox VM.
The WAN interface will house the WAN IP address that provides connectivity from the outside inward for accessing
internal resources, along with Internet connectivity. These WAN and LAN interface connections allow traffic to be routed
as expected so you benefit from the pfSense firewall.
Add a new network adapter
On the new interface, select the bridge ports, VLAN tag, and other settings for the second network adapter. By default,
Proxmox will add VirtIO interfaces when you add a new adapter. You may need to experiment with this setting. I had to go
back and change my installation to Intel Pro 1000 adapters for it to work correctly in my nested lab.
I also added an additional network bridge where you can choose a new Linux bridge configuration.
Add a new network adapter after creating the pfSense VM
After adding an additional network device, we now have two network devices configured with the pfSense VM.
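If you prefer the CLI, the second network device can also be added with qm set; the VMID, model, bridge, and VLAN tag
below are placeholder values for illustration:
# Add a second NIC (net1) to the pfSense VM using the Intel E1000 model on bridge vmbr1 with VLAN tag 20 (example values)
qm set 100 --net1 e1000,bridge=vmbr1,tag=20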
Install pfSense VM
Now we can actually install pfSense and configure the virtual machine appliance. Right-click the pfSense VM shown on
your Proxmox host and select start.
After powering on the VM, we can begin the process of installing and running pfSense.
Boot screen of the pfSense VM running in Proxmox VE
This begins the text-based pfSense VM install process. Accept the EULA displayed.
Choose to configure the partitioning automatically unless you need a custom layout. Here I am choosing the ZFS
configuration.
Selecting how you would like to partition your disk in Proxmox VE
Choose the virtual device type. Here I am selecting Stripe (no redundancy).
Select the virtual disk type
You will be asked if you have any manual configuration you want to perform. If not, select No.
Installation is finished and choosing no custom modifications
After the pfSense VM boots for the first time, you should see your WAN and LAN interfaces come up and show IP
addresses for the WAN and LAN ports. As you can see, these are not on the same network or same subnet.
Most configurations will see the WAN IP address assigned by the ISP via DHCP. You will want a static IP address
configured on the LAN interface since this will be used as the gateway address for clients connected to the LAN port of
the pfSense VM.
The pfSense LAN address is configurable and you will want to configure the address to match your clients. The LAN port
also doubles as the management port for pfSense VM by default. You can’t manage pfSense from the WAN port by default,
only the LAN port. This can be changed later, but is something to note as you run the pfSense virtual machine on your
Proxmox box.
The pfSense firewall will also be the default gateway for the clients on the network. The pfSense WAN address is used for
incoming traffic that will be NAT’ed inward to internal IP addresses on the network. For management, specifically note the
LAN IP address.
Below, you will note I have private IPs on both the WAN and LAN port. This is because I have this configured in a lab
environment. In production, you will have a public IP address configured on the WAN port for true edge firewall capabilities.
The default login credentials are admin/pfsense.
Logging into pfSense VM for the first time
After logging in with the default admin password, the setup wizard will begin, walking you through the initial configuration
of pfSense, including the pfSense firewall capabilities.
Beginning the pfSense web UI setup wizard
Configure the WAN interface. Even though we have already configured this, the pfSense wizard gives you another
opportunity to configure the WAN port.
Configure WAN interface in pfSense
Same with the LAN port. You can reconfigure if needed here.
Configure the LAN interface
At this point, after the reload, the pfSense install process is complete.
Wizard completes after the reload of pfSense
Wrapping Up
The pfSense Proxmox installation procedure is straightforward and consists of creating a new Proxmox virtual machine
with the correct network adapter settings. Then you power on the VM and run through the initial text configuration setup to
install pfSense and establish basic networking connectivity. Afterward, using the pfSense web GUI, you finalize the
pfSense installation on Proxmox with the configuration wizard. Proxmox makes a great platform to install pfSense on, as it
provides many of the settings and configuration capabilities needed to customize your installation of pfSense on Proxmox.
Nested ESXi install in Proxmox: Step-by-Step
December 21, 2023
Proxmox
If you have a Proxmox VE server in your home lab or production environment and want to play around with VMware ESXi,
you can easily do that with Proxmox nested virtualization. Let’s look at the steps required for a nested ESXi server install in
Proxmox.
Table of contents
Nested Virtualization in Proxmox
Preparing your Proxmox VE host to enable nested virtualization for ESXi
Creating the ESXi VM in Proxmox
Step-by-Step Installation of Nested ESXi
Managing Virtual Machines in a Nested Setup
Using advanced features in nested VMs
Troubleshooting Common Issues in Nested Environments
Frequently Asked Questions About Nested ESXi in Proxmox
Now, you can use something like VMware Workstation to easily nest ESXi. However, if you already have a dedicated
Proxmox host, it is a better platform for a dedicated lab experience. There is also always the option of running it nested on
VMware ESXi if you have a physical VMware host.
Proxmox nested virtualization allows exposing the CPU’s hardware virtualization characteristics to a nested hypervisor.
This process to expose hardware assisted virtualization to the guest ESXi VM is required so the nested hypervisor can run
virtual machines.
An overview of the few steps needed to enable nested virtualization in Proxmox and run a nested hypervisor VM is
sketched below.
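This is a minimal sketch for an Intel-based Proxmox host; on AMD hosts the module is kvm_amd instead, and the VMID
100 used here is just an example:
# Check whether nested virtualization is enabled for Intel CPUs (should print Y or 1)
cat /sys/module/kvm_intel/parameters/nested
# If not, enable it and reload the KVM module (make sure no VMs are running on the host)
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel
# Expose hardware-assisted virtualization to the ESXi guest by using the "host" CPU type
qm set 100 --cpu host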
Upload your VMware ESXi 8.0 U2 or other ESXi ISO to your Proxmox server and select this in the wizard. On the type,
choose Other for the guest operating system.
Select your esxi iso image under the os tab
On the Disks screen, configure the disk size you want and also the Storage location for your VM files and hit Next.
Setting up the storage for the esxi vm
OK, so this is the step that surprised me a bit. Here I selected the Intel E1000, which is a standard Intel driver. But I will
show you what happens during the install.
Setting the network adapter to e1000
OK, so I told you something unexpected happened with the Intel E1000 driver. It didn’t detect the network adapter
in ESXi.
No network card detected in esxi
I powered the ESXi VM down and went back and selected VMware vmxnet3 adapter for the model.
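For reference, the same change can also be made from the Proxmox CLI; the VMID and bridge below are example values:
# Switch the ESXi VM's first NIC to the VMware vmxnet3 model (example VMID 100 on bridge vmbr0)
qm set 100 --net0 vmxnet3,bridge=vmbr0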
Now, the network adapter was recognized and the installation proceeded.
The installation of nested esxi continues
Now for the standard screens, but we will show them anyway. Accept the EULA.
Accept the eula 1
I am running on an older Xeon D processor, so we see the alert about an outdated processor that may not be supported in
future releases. You will see the same warning on bare metal.
Warning about older cpu support in esxi 8.0 update 2
Confirm the installation of esxi and repartitioning
Hopping back over to Proxmox, I remove the ESXi ISO before rebooting.
After the nested ESXi installation boots, we see it has correctly pulled an IP address from DHCP so the network adapter is
working as expected.
Vmware esxi vm in proxmox boots and it correctly pulls a dhcp address
Below, I logged into the VMware host client to manage the ESXi host running in Proxmox.
Logged into the esxi host client
If you are configuring a cluster of ESXi hosts with vCenter, you can utilize features like vMotion and DRS within a nested
VMware vSphere cluster.
First, though, you need to understand Proxmox VLANs. I just covered this recently as well. So, check out my post on
Proxmox VLANs to first understand how to configure VLANs in Proxmox.
Just remember, on the nested VMware ESXi side, you can’t tag VLANs on your port groups as this will lead to “double
tagging”. They will instead assume the tag from the Proxmox side.
What I like to do is set up the Proxmox Linux Bridge as a trunk bridge, which is the default configuration when you make it
VLAN aware. Then, you can change the tag on your network adapter configured for your VMware ESXi VM to tag the
traffic from the ESXi VM.
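As a reference point, a VLAN-aware Linux bridge in /etc/network/interfaces on the Proxmox host typically looks something
like the following; the physical interface name eno1 is an example:
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094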
Frequently Asked Questions About Nested ESXi in Proxmox
How Does Nested ESXi Differ from Regular Virtualization in Proxmox?
Nested ESXi in Proxmox takes virtualization a step further by running a virtual machine (VM) within another VM. In nested
setups, ESXi acts as a guest hypervisor within the VM to create and manage additional VMs in this second layer of
virtualization.
Can VMware Tools Be Installed in Nested VMs?
Yes, VMware Tools can be installed and run within a VM running on nested ESXi in a Proxmox environment. This
installation enhances the functionality and performance of the nested VMs. It provides better hardware compatibility and
improved management capabilities.
What Are the Key Considerations for VM Hardware Settings in Nested Virtualization?
When configuring VM hardware in a nested virtualization setup, it’s important to allocate sufficient resources, such as CPU
and memory, to ensure smooth operation. Additionally, you should enable promiscuous mode in the virtual switch settings
to allow communication between nested VMs.
Is a Nested Environment Supported for Production Use?
Not really in most scenarios. You definitely won’t be supported by VMware in a nested environment, and likely not by
Proxmox either. It is best to keep nested environments in their proper place: learning, labbing, and testing out
configurations when you don't have the physical hardware to install on bare metal.
Pay attention to resource allocation, enable hardware-assisted virtualization, and configure network settings properly.
Monitor your Proxmox VE host and nested ESXi VMs to make sure there are no performance issues.
Can Windows Server Run as a Guest OS in a Nested ESXi VM?
Yes, Windows Server can be run as a guest operating system in a nested ESXi VM. This setup allows for testing and
development of Windows-based applications in a controlled, virtualized environment, leveraging the capabilities of both
Proxmox and ESXi.
Are There Specific Network Configurations Required for Nested ESXi in Proxmox?
Nested ESXi in Proxmox requires specific network configurations, including setting up virtual switches and enabling
promiscuous mode to allow proper network traffic flow between nested VMs. Proper configuration ensures seamless
connectivity and communication within the nested environment.
Using Intel VT-x enhances the performance of nested virtualization. This technology enables more efficient handling of
hardware virtualization features. Really, you don’t want to run nested virtualization without it.
Wrapping up
Hopefully, this blog post has been a help to anyone running Proxmox as the hypervisor for a home lab environment. It is
easy to get a virtual machine running VMware ESXi in a Proxmox nested environment. Keep in mind the need to use the
VMware vmxnet3 adapter and the note on Proxmox VLAN tagging. If you are running guest VMs inside your ESXi VM, you
will also need to enable promiscuous mode on your Proxmox bridge.