this post was submitted on 17 Jul 2023
23 points (92.6% liked)

Selfhosted


First off, I know ultimately I'm the only person who can decide if it's worth it. But I was hoping for some input from your collective experience.

I have a server I built currently running Ubuntu 22.04. I'm using KVM/QEMU to host VMs and have recently started exploring the exciting world of Docker, with a VM dedicated to Portainer. I manage the VMs with a mix of virt-manager via xRDP, CLI tools, and (if I'm feeling extra lazy) Cockpit. Disks are spindles, currently in software RAID 10 (md), and I use LVM to assign volumes to the KVM VMs. Backups are via a script I wrote to snapshot the LVM volume and back it up to B2 via restic.
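A rough sketch of that snapshot-then-restic flow, for anyone curious (the volume group, LV, bucket, and snapshot size here are placeholders, not my actual setup):

```shell
#!/bin/sh
# Hypothetical sketch: back up an LVM volume to B2 via restic.
# Expects B2_ACCOUNT_ID, B2_ACCOUNT_KEY, RESTIC_PASSWORD in the environment.
set -eu

VG=vg0            # placeholder volume group
LV=vm-disk        # placeholder logical volume
SNAP="${LV}-backup"

# Take a small copy-on-write snapshot so the backup sees a consistent image
lvcreate --snapshot --size 5G --name "$SNAP" "/dev/${VG}/${LV}"

# Stream the snapshot image into a restic repo hosted on Backblaze B2
dd if="/dev/${VG}/${SNAP}" bs=4M status=none \
  | restic --repo "b2:my-bucket:vm-backups" backup --stdin --stdin-filename "${LV}.img"

# Drop the snapshot once the backup finishes
lvremove --force "/dev/${VG}/${SNAP}"
```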

It all works rather smoothly, except when it doesn't 😀.

I've been planning an HD upgrade and was considering using that as an excuse to start over. My thoughts are to either install Debian and continue with my status quo, or to give Proxmox a try. I've been reading a lot of positive comments about it here, and I've longed for one unified web interface to manage my VMs.

My main concerns are:

  1. Backups. I want to be able to back up to B2, but from what I've read I don't see a way to do that. I don't mean backing up to a local repository and then syncing that to B2; I'm talking direct to B2.
  2. Performance. People rave about ZFS, but I have no experience. Will I get at least equivalent performance out of ZFS and how much RAM will that cost me? Do I even need ZFS or can I just continue to store VMs the way I do today?

Having never used Proxmox to compare I'm really on the fence about this one. I'd appreciate any input. Thanks.

[–] [email protected] 11 points 1 year ago (1 children)

I'll add my voice to the chorus and recommend Proxmox. I've never tried xcp-ng; it looks nice and I'm interested, but Proxmox has worked well for me.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

I did a little research (on xcp-ng) since reading @[email protected]'s post. Seems like it has a lot going for it. My main concern, right now, is that it's built on top of CentOS.

[–] [email protected] 1 points 1 year ago (1 children)

You've gotten incorrect information on that front. Proxmox is actually built on top of Debian.

[–] [email protected] 5 points 1 year ago

No. I just forgot to put xcp-ng anywhere in my reply to you. πŸ˜€

[–] [email protected] 8 points 1 year ago (1 children)

While you're in your planning stage, I would advocate for Proxmox. I really like it. Another contender would be xcp-ng.

[–] [email protected] 2 points 1 year ago

xcp-ng

Not gonna lie, I haven't looked at Xen in years. xcp-ng looks interesting. I'll have to dig into that more.

[–] [email protected] 6 points 1 year ago (1 children)

Another vote for Proxmox.

Backups: Proxmox Backup Server (yes, it can run in a Proxmox VM) is pretty great. You can use something like Duplicati to backup the PBS datastore to B2.
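If Duplicati isn't your thing, rclone can do the same job of pushing the PBS datastore to B2. A hedged sketch (the remote name, bucket, and datastore path are all assumptions; the remote would be set up beforehand with `rclone config`):

```shell
# Hypothetical: mirror a local PBS datastore to B2 with rclone.
# Assumes an rclone remote named "b2" already points at your B2 account
# and the datastore lives at /mnt/datastore/pbs.
rclone sync /mnt/datastore/pbs b2:my-pbs-bucket/datastore \
  --transfers 8 --fast-list
```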

Performance: You can use ZFS in Proxmox, or not. ZFS gets you things like snapshots and raidz, but you will want to make sure you have a good amount of RAM available and that you leave about 20% of available drive space free. This is a good resource on ZFS in Proxmox.

Performance-wise, I have clusters with drives running ZFS and EXT4, and I haven't really noticed much of a difference. But I'm also running low-powered SFF servers, so I'm not doing anything that requires a lot of heavy duty work.

[–] [email protected] 1 points 1 year ago (1 children)

Does Proxmox still sit at the top of the stack if I'm not clustering?

[–] [email protected] 1 points 1 year ago

I would say it's at the "bottom" of the stack - Debian is the base layer, then Proxmox, then your VMs.

Clustering just lets the different nodes share resources (more options with ZFS) and allows management of all nodes in the cluster from the same GUI.

[–] [email protected] 4 points 1 year ago (1 children)

Proxmox won't make backups to B2 easier, but since it is basically a web interface and API for Debian and KVM/QEMU, you might be able to use your current backup strategy with very little modification.

As for ZFS, you can expect to use about 1 GB of RAM for each TB in a ZFS pool. I (only) run 2x 4 TB drives in a ZFS mirror and it results in about 4-5 GB of RAM overhead.
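If that RAM overhead worries you, the ZFS ARC size can be capped. A sketch (the 8 GiB figure is just an example value, not a recommendation for your hardware):

```shell
# Cap the ZFS ARC at 8 GiB (value is in bytes; pick what fits your box).
# Persist the setting across reboots:
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Apply it immediately on a running system:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```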

Another point you might want to consider is automation and the ability to use infrastructure as code. You can use the Proxmox Packer builder and Terraform provider to automate building machine images and cloning virtual machines. If you're into the learning experience, it's definitely a consideration. I went from backing up entire VM disks to backing up only application data, making it faster and cheaper. It also enabled a lot of automated testing. For a homelab it's a bit much; the learning experience is the biggest part. It's an entire rabbit hole.

If you want to see what the automation looks like, check out my example infrastructure repo and the matching tutorial. Also check out my Alpine machine image repo, which includes automated tests for image cloning, disk resizing, and a CI pipeline.
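At a high level, the build-then-clone loop boils down to two commands; a sketch (the template file name and Proxmox URL are hypothetical):

```shell
# Hypothetical workflow: bake a VM template with Packer,
# then let Terraform clone VMs from it.
packer build -var "proxmox_url=https://pve.example:8006/api2/json" \
  ubuntu-template.pkr.hcl

terraform init && terraform apply
```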

[–] [email protected] 1 points 1 year ago (2 children)

Proxmox won't make backups to B2 easier, but since it is basically a web interface and API for Debian and KVM/QEMU you might be able to use your current backup strategy with very little modification.

I found this which leads me to believe I may be able to pipe zfs send to restic to replicate my current disk backup strategy. Presumably I could fire up a VM and build a zfs storage pool in it to test that theory out.
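A quick sketch of what I'm imagining (pool, dataset, and bucket names are invented):

```shell
# Hypothetical: stream a ZFS snapshot straight into a restic repo on B2.
# Expects B2_ACCOUNT_ID, B2_ACCOUNT_KEY, RESTIC_PASSWORD in the environment.
SNAP="tank/vm-100@backup-$(date +%F)"

zfs snapshot "$SNAP"
zfs send "$SNAP" \
  | restic --repo "b2:my-bucket:zfs" backup --stdin --stdin-filename vm-100.zfs
```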

As for ZFS, you can expect to use about a GB of RAM for each TB in a ZFS pool. I (only) run 2x 4TB drives in ZFS mirror and it results in about 4-5 GB of RAM overhead.

So if I were to put 4x 4 TB in a RAID 10 equivalent pool, I'd be looking at ~8 GB, not 16. Whew.

For a homelab it’s a bit much, the learning experience is the biggest part. It’s an entire rabbit hole.

The rabbit hole is where all the fun is. Templating was something I never really got around to in my current setup. I do have an ansible playbook and set of roles that will take a brand new Ubuntu VM and configure it just how I like it.

Thanks for all the info. I'll be sure to check out your repo.

[–] [email protected] 2 points 1 year ago

My ZFS cache for 6x 4 TB drives in RAIDZ2 is about 10 GB of RAM.

[–] [email protected] 1 points 1 year ago

I found this which leads me to believe I may be able to pipe zfs send to restic to replicate my current disk backup strategy. Presumably I could fire up a VM and build a zfs storage pool in it to test that theory out.

Replying to myself, but I think this is a square-peg, round-hole situation.

If I'm starting over with proxmox I likely need to rethink my entire backup strategy.

[–] [email protected] 4 points 1 year ago (2 children)

I run Proxmox in a cluster and TrueNAS in a VM on one of the nodes. It's been really convenient. My nodes run a mix of LXC containers for different things, plus Docker or regular VMs for other software.

[–] [email protected] 1 points 1 year ago

That was one of the reasons I was thinking of getting bigger disks. I want to retire the qnap I have and spin up a TrueNAS VM.

[–] [email protected] 1 points 1 year ago (1 children)

How are you passing the drives to the TrueNAS VM?

[–] [email protected] 1 points 1 year ago

I haven't done it myself, but I have looked into the process in the past. I believe you do it just like passing any drive through to any Proxmox VM.

It's fairly simple - you can either pass the entire drive itself through to the VM, or if you have a controller card the drive is attached to, you can pass that entire PCIe device through to the VM and the drive will just "come with it".
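A sketch of both options (the VM id, disk id, and PCIe address below are made-up examples):

```shell
# Option 1: pass a whole physical disk through to VM 100.
# Use the stable /dev/disk/by-id path, not /dev/sdX.
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL

# Option 2: pass the entire controller through as a PCIe device,
# so every drive attached to it "comes with it".
# Find the address with: lspci | grep -i sata
qm set 100 -hostpci0 0000:01:00.0
```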

[–] [email protected] 3 points 1 year ago

Please give Proxmox a try! It was such a huge quality of life improvement when I migrated to it. I can’t speak to your backup needs or to the performance of ZFS, since I don’t use either of those. I just think that Proxmox took a lot of the pain out of my homelab management experience without taking away my capabilities to customize it. Highly recommend!

[–] [email protected] 2 points 1 year ago (1 children)

I started with Proxmox and I'll continue to use it because it's very nice to use. For backups I use an rclone mount that is shared via NFS (everything inside a container), and I set that NFS share as a backup storage in Proxmox. I think it is a bit convoluted, but it works fine enough for now.

[–] [email protected] 2 points 1 year ago

Convoluted just means you built it with care. ❀️

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (1 children)

You most likely don't need Proxmox and its pseudo-open-source bullshit. My suggestion is to simply go with Debian 12 + LXD/LXC; it runs VMs and containers very well.

[–] [email protected] 7 points 1 year ago (2 children)

pseudo-open-source bullshit

What do you mean by this?

[–] [email protected] 5 points 1 year ago (1 children)

As far as I'm aware, everything in Proxmox is open source.

I think some people get annoyed by the Red Hat style paid support model, though. There is a separate repo for paying customers, but the non-subscription repo is just fine, and the official forums are a great place to get support, including from Proxmox employees.

[–] [email protected] 4 points 1 year ago

Gotcha. So long as they're not breaking the GPL or holding back security updates from non-paying users, I couldn't care less. Thanks.

[–] [email protected] 2 points 1 year ago

As I said, they have separate repositories, annoying messages asking you for a license all the time, etc. At some point you'll find out that their solution doesn't offer anything of particular value that you can't get with other, less company-dependent solutions like I described before. You may explore the LXD native GUI... or heck, even Cockpit or Webmin might be decent options.

[–] [email protected] 0 points 1 year ago (2 children)

I just swapped from Ubuntu to Debian but I don’t use VMs - only containers. I back my files up directly to B2 using autorestic, also running in a container that is scheduled by… another container (chadburn).

No need for any VMs in my house. I honestly can’t see the point of them when containers exist.

[–] [email protected] 1 points 1 year ago

Just an FYI to OP: If you're looking to run docker containers, you should know that Proxmox specifically does NOT support running docker in an LXC, as there is a very good chance that stuff will break when you upgrade. You should really only run docker containers in VMs with Proxmox.

Proxmox Staff:

Just for completeness sake - We don't recommend running docker inside of a container (precisely because it causes issues upon upgrades of Kernel, LXC, Storage packages) - I would install docker inside of a Qemu VM as this has fewer interaction with the host system and is known to run far more stable.

[–] [email protected] 1 points 1 year ago

Eh, to each their own. In fairness, some iteration of my current setup has existed for many years, and I've only just gotten my feet wet with containers in the last month.
