this post was submitted on 22 Dec 2024
41 points (100.0% liked)

Selfhosted

Currently I'm running some services through Docker on a Proxmox VM. Before I had Proxmox, I thought containers were a very clean way of organizing my system. Now I'm wondering if I could just install the services I always use directly on the VM. What are the pros and cons of that?

[–] [email protected] 31 points 2 days ago* (last edited 2 days ago) (2 children)

Containers are just processes with flags. Those flags isolate the process's filesystem, memory [1], etc.

The advantage of containers is that the software dependencies can be unique per container and not conflict with others'. There are no significant disadvantages.

Without containers, if software A and software B share a dependency but need different versions of it, you'll have issues.

[1] These all depend on how the containers are configured. These are not hard isolation but better than just running on the bare OS.
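The dependency-conflict point can be sketched with a hypothetical compose file (the service names and images here are just examples): each service ships its own runtime and libraries inside its image, so the versions never fight over the host's packages.

```yaml
# Hypothetical docker-compose.yml: each container bundles its own
# dependency tree, so conflicting runtime versions coexist happily.
services:
  app-a:
    image: python:3.9-slim    # app A pinned to an older runtime
  app-b:
    image: python:3.12-slim   # app B on a newer one, no conflict
```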

[–] [email protected] 1 points 2 days ago* (last edited 2 days ago) (1 children)

Thanks for this - the one disadvantage I'm noticing is that to update the services I'm running, I have to rebuild the container. I can't just update from the UI when an update is available. I can do it, it's just somewhat of a nuisance.

How often are there issues with dependencies? Is that a problem with a lot of software these days?

[–] [email protected] 4 points 2 days ago* (last edited 2 days ago) (2 children)

But rebuilding your container is pretty trivial from the command line, all said and done. I have something like this aliased in my .bashrc to smooth it along:

docker compose pull; docker compose down; docker compose up -d
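As a .bashrc fragment, that might look like the following (the alias name `dcu` is made up here):

```shell
# Hypothetical ~/.bashrc alias: pull newer images, stop the stack,
# then recreate it detached. Run from the directory with the compose file.
alias dcu='docker compose pull; docker compose down; docker compose up -d'
```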

I regularly check on my systems, go through my docker dirs, and run my alias to update everything fairly simply. Add in periodic scheduled image cleanups and it has been humming along for a couple of years for the most part (aside from the odd software issue and hardware failure).

> How often are there issues with dependencies? Is that a problem with a lot of software these days?

I started using docker 3-4 years ago specifically because I kept having the dependencies of one app break another, but I also tend to run a lot of services per VM. Honestly, the overhead of container management is infinitely preferable to the overhead that comes with managing OS-level stuff. But I'm also not a Linux expert, so take that for what you will.

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago) (1 children)

Is there a specific reason you're taking the services down before bringing them back up? Just docker compose pull && docker compose up -d recreates all services that had a new image pulled, but leaves the others running.

[–] [email protected] 3 points 1 day ago

Probably just a holdover from when I was first learning. I had issues with a couple of services not actually updating without it, so I do it to be absolutely sure. Also, I only ever run one app per compose file, so that forces a "reboot" of the whole stack when I update.

[–] [email protected] 1 points 1 day ago (2 children)

I know rebuilding containers is trivial, but updating a service in the UI is more trivial than that. I'm just trying to make my life as trivial as possible 😁. It seems like containers may be worth the little bit of extra effort.

[–] [email protected] 4 points 1 day ago

I mean, for anything where you're willing to trust the container provider not to push breaking changes, you can just run Watchtower and have it automatically update. That's how most of my stuff runs.
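A minimal Watchtower setup can itself run as a compose service; this sketch assumes the official `containrrr/watchtower` image and its real `--cleanup` and `--interval` flags, but tune it to taste:

```yaml
# Minimal Watchtower sketch: it watches the Docker socket and
# recreates containers whose image tag has a newer version upstream.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --cleanup --interval 86400   # prune old images, check daily
```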

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

If you're not using some sort of automatic updates, you're not seriously trying to make your life as trivial as possible. 😂 Just use fixed major-version tags where possible to avoid surprise breakage.
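Concretely, pinning a major tag means you still get patch releases but not surprise major jumps (Nextcloud is just an example image here):

```yaml
# Pinning the major version: follows 29.x patch releases automatically,
# but won't silently jump to 30 and break the install.
services:
  nextcloud:
    image: nextcloud:29
    # image: nextcloud:latest   # would jump majors without warning
```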

[–] [email protected] 0 points 2 days ago

I beg to disagree about the disadvantages. An important one is that you cannot easily update shared libraries globally. This matters for things like libssl: with OS packages, one security update fixes every service at once, while with containers each image has to be rebuilt or re-pulled. Another disadvantage is the added complexity, both in operation and in the sheer amount of code running. It can also be problematic that many people run containers without doing any auditing. In general, containers are pretty opaque compared to OS-packaged software, which is usually compiled individually for the OS.

This being said, systemd offers a lot of isolation features that allow similar isolation to containers, but without having to deal with Docker.
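For example, a unit file can opt into several of systemd's real sandboxing directives; this is just a sketch (the service binary path is hypothetical), not a full hardening guide:

```ini
# Hypothetical systemd unit fragment showing sandboxing directives
# that approximate container-style isolation for a native service.
[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes          # run as a transient, unprivileged user
ProtectSystem=strict     # mount /usr, /boot, /etc read-only
ProtectHome=yes          # hide user home directories
PrivateTmp=yes           # give the service its own /tmp
PrivateDevices=yes       # hide physical devices
NoNewPrivileges=yes      # block privilege escalation via setuid etc.
```

`systemd-analyze security <unit>` is handy for seeing how much of this a given service actually uses.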