Proxmox Storage

Published on February 2, 2024 at 3:50 am by LEW

Introduction

Proxmox is like the proverbial onion: to get a handle on how storage works, we need to peel back a few layers. We should expect some layered complexity in the storage model; after all, we are dealing with requirements for both the host (main) application and all the guest applications (virtual machines and containers) running under the host. Additionally, the varying partition structures, drive arrays, and file systems also increase the complexity of the system.

And yes, Proxmox has a certain level of complexity when dealing with storage, but it is also fairly flexible. To fully utilize Proxmox, we need to put forth the effort to learn how storage is set up and managed.

To this end, we need to look at the various layers of Proxmox and how storage relates to each layer. Because in Proxmox, storage is layered as well.

Storage Concepts

Before getting too much further, we need to define a few acronyms and terms you are likely to come across.

Physical Volume (PV): An actual storage drive, or part of one, where data is stored.

Volume Group (VG): A collection of one or more Physical Volumes.

Logical Volume (LV): A data container stored in a Volume Group.

Storage Pool (SP): A special type of Volume Group that can store logical volumes without allocating their full size up front (thin provisioning).
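
These terms map directly onto the standard Linux LVM command line tools. As a minimal sketch (the disk name /dev/sdb, the my_vg/my_lv/my_pool names, and the sizes are made up for illustration), the layers would be created like this:

pvcreate /dev/sdb                           # mark the disk as a Physical Volume
vgcreate my_vg /dev/sdb                     # create a Volume Group on top of it
lvcreate -L 50G -n my_lv my_vg              # carve out a fixed-size Logical Volume
lvcreate -L 100G --thinpool my_pool my_vg   # or create a thin pool (storage pool)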

The Outer Layer

The very top layer in Proxmox is the Datacenter layer. Proxmox is built on a cluster model, which might not be obvious if you are running just a single instance. The Datacenter is the cluster. Within the cluster are the nodes, which are the actual Proxmox instances. For example, if you are running multiple instances of Proxmox on different computers, you can set up a cluster, and each instance will appear under the Datacenter.

In this post we will focus on a single-node instance of Proxmox, but it is important to understand what nodes are and how they relate to the Datacenter. On occasion we may refer to a running instance of Proxmox as a node. So consider Nodes the second layer.

If we go to the Datacenter and look at the Summary, we can see how many nodes we have and whether they are online. We can also see how many VMs and Containers we have and whether they are online. Further, you can look at CPU, memory, and storage for each node.

Since each node is a separate PC with its own resources, those resources cannot be combined. However, there is a caveat: storage can be used across nodes (as long as only one VM/Container is attempting to use it).

I have not yet worked with clusters. Since I am running two Proxmox instances, I do want to eventually set them up as a cluster. There are a few possible issues with a two-node cluster, but that is out of scope here, as this post reflects a single-node setup.

Since we are talking storage here, let's drop down to Datacenter > Storage in the tree menu. Here we can see the storage that is set up for use. If this is a fresh install of Proxmox, there should be two entries already defined: local and local-lvm. Let's examine these.

Select local, and click the Edit button. This will open the Edit Directory window (note the title of the window changes depending on the storage type). We want to be on the Directory tab (Backup Retention is out of scope for this post). Directory storage is plain old Linux file storage. We give it a name, we assign a mount point in our file tree, and we select what kind of content we want to store in it (remember this part, as we come back to it later at a lower level). Since we are talking about the Datacenter, we are addressing a limited number of content types that Proxmox can store.
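
Under the hood, these Datacenter storage entries live in the plain text file /etc/pve/storage.cfg. On a default install the local entry looks roughly like this (the exact content list may vary by version):

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup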

Entering the console for the node, we find that the partition we attached to, even though it is managed by LVM (Logical Volume Manager), is an ext4 partition. And we can navigate to the directory. Here you should see various folders matching the content types we set the directory up to store. You can also find out how much space is available in this partition. Note that this is the root partition set up at installation. If you query its size, you will find it substantially smaller than the storage drive size.
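
For example, assuming the default mount point of /var/lib/vz for the local storage, you can check the space and the content folders from the node's shell like this:

df -h /var/lib/vz     # show size and free space on the root partition
ls /var/lib/vz        # expect folders such as dump, images, and template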

Which brings us to the local-lvm storage entry. If one examines the drive structure from the console with lsblk, one will see that the rest of the root drive is taken up by another LVM volume. If you select the entry and click the Edit button, you will see that it is part of the pve volume group (pve being short for Proxmox Virtual Environment) and is designated for Containers and VMs. This was also created at install, and it is where, by default, any VMs or Containers you create will be stored.
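
On a default single-drive install, a trimmed lsblk listing looks something like the following (device names and sizes will differ on your hardware; thin pool metadata volumes are omitted here):

NAME                 SIZE TYPE MOUNTPOINT
sda                  232G disk
├─sda1              1007K part
├─sda2                 1G part /boot/efi
└─sda3               231G part
  ├─pve-swap           8G lvm  [SWAP]
  ├─pve-root          80G lvm  /
  └─pve-data         141G lvm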

You may have noticed the Add button. This allows you to add additional storage. If you select this button, you will note a lot of storage types like LVM, BTRFS, and ZFS in addition to the plain Directory. Note that to use any of these options you have to have available storage drives that are properly prepared (this will be discussed at the Node layer). Drives need to be prepared at the node, as that is the machine they are physically attached to.

The observant will have noted NFS and SMB in the list. These are remote drives, served from some form of NAS (Network Attached Storage) on your network. This is referred to as shared storage, since it is not located on a Node (machine) within the cluster.
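
Shared storage can be added from the GUI, or from the node's shell with the pvesm tool. As a rough sketch (the storage name nas-media, the server address, and the export path below are made up for illustration), an NFS share could be added like this:

pvesm add nfs nas-media --server 192.168.1.50 --export /export/media --content images,backup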

There are also a few other file systems that basically aggregate various NAS devices. I have not tested these yet, so I will not be covering them.

Node Storage

When you get down to the Node level, you will find that the Storage option has been replaced by a Disks option. Since a disk belongs to a Node (an actual physical computer), we can work directly with it.

Available disks/partitions should show up when you go to Datacenter > Node > Disks. Even though they show up, they may or may not be usable. Two options at the top of the window, Wipe Disk and Initialize Disk with GPT, can help prepare a disk for use.

Note that the various options under Datacenter > Node > Disks are whole-disk operations. That is to say, they must be performed on an entire initialized but blank disk.
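
If a disk refuses to show up as usable, the GUI buttons have rough command line equivalents. A minimal sketch, assuming the disk in question is the hypothetical /dev/sdb (double check the device name, as these commands destroy data):

wipefs --all /dev/sdb            # remove old file system and RAID signatures
sgdisk --zap-all /dev/sdb        # destroy any existing GPT/MBR structures
parted -s /dev/sdb mklabel gpt   # write a fresh, empty GPT label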

Also note that some disk structures, like ZFS or LVM, can span multiple drives, and/or work with or as RAID (Redundant Array of Inexpensive Disks) setups.

You should also note that there are a lot of storage types missing here (not counting the cluster-type file systems). For example, Btrfs might require some command line intervention to create and set up. Other types like Directory, ZFS, and LVM should work without command line intervention.

Once a storage drive is initialized or set up on the Node, it can be added to the Datacenter (cluster) storage.

Virtual Machines and Containers

Dropping down another level, we come to VMs and Containers. When creating a VM or Container, we carve out part of our storage drives to hold its virtual storage drives. Note that you can also create multiple virtual storage drives on different storage devices for the same VM.

At the VM layer, the Disks option has been replaced with Hardware. Under Hardware, all the resources allocated to the VM will be listed, including storage drives. From this location you can also define and add additional volumes.
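
The same thing can be done from the node's shell with the qm tool. As an illustrative sketch (the VM ID 100, the storage name SSD_1, and the 32 GB size are just placeholders), adding a second virtual disk looks like this:

qm set 100 --scsi1 SSD_1:32    # allocate a new 32 GB disk from storage SSD_1 as scsi1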

Example – Virtualized Media Server

In this example I have set up Proxmox on a computer with a 250 GB drive. Aside from swap space and the UEFI/boot partitions, the root partition, aka local, takes up about 80 GB in the default install. This leaves the remainder, about 160 GB, to be set up as local-lvm, again by default.

I then install a 1 TB SSD in the computer. Since it is attached to the computer, we start at the Datacenter > Node > Disks level. I go ahead and initialize the drive and set it up as LVM-Thin, giving it a name, SSD_1. Jumping back up to the Datacenter > Storage level, we make sure it is added as an LVM-Thin storage area.
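
For reference, the rough command line equivalent of those GUI steps might look like the following (the device name /dev/sdb is an assumption for this example; the GUI does all of this for you):

pvcreate /dev/sdb                              # put the SSD under LVM control
vgcreate SSD_1 /dev/sdb                        # new volume group for the SSD
lvcreate -l 95%FREE --thinpool data SSD_1      # thin pool, leaving room for pool metadata
pvesm add lvmthin SSD_1 --vgname SSD_1 --thinpool data --content images,rootdir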

Now when we create our VM, we get the default virtual drive set up in local-lvm (probably around 30 GB on the 250 GB boot drive). And we can set up a second virtual drive taking part or all of the space on the newly created SSD_1 (in this case I take the whole drive, giving the VM 1 TB).

Now when I start my VM, it sees two drives (30 GB and 1 TB). The actual VM setup will depend on the Operating System being loaded. I will generally put the OS on the smaller drive along with some applications (like Plex, for example). The larger drive becomes the data drive, and I also share it out (SMB, FTP, NFS, etc., depending on need).

Conclusion

This has been a high-level look at storage. There are a lot of options and possibilities we have not discussed yet. But we should have enough information to set up some basic VMs and Containers, along with at least a limited understanding of what we are doing.

As long as you keep the multi-level model in mind, and remember at what level a drive is physically attached, a lot of other instruction sets will start to make a lot more sense.
