this post was submitted on 30 Jul 2024
159 points (92.5% liked)

Selfhosted

40767 readers
1629 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago

I saw this post today on Reddit and was curious to see if views are similar here as they are there.

  1. What are the best benefits of self-hosting?
  2. What do you wish you would have known as a beginner starting out?
  3. What resources do you know of to help a non-computer-scientist/engineer get started in self-hosting?
[–] [email protected] 62 points 5 months ago (14 children)

The big thing for #2 would be to separate out what you actually need vs what people keep recommending.

General guidance is useful, but there's a lot of 'You need ZFS!' and 'You should use K8s!' and 'Use X software!'

My life got immensely easier when I figured out I did not need any features ZFS brought to the table, and I did not need any of the features K8s brought to the table, and that less is absolutely more. I ended up doing MergerFS with a proper offsite backup method because, well, it's shockingly low-complexity.

And I ended up doing Docker with a bunch of compose files and bind mounts, because it's shockingly low-complexity. And it's just running on Debian, instead of some OS that has a couple of layers of additional software to make things "easier" because, again, it's low-complexity.

I can re-deploy the entire stack on new hardware in about 10 minutes (I've tested this a few times just to make sure my backup scripts work), and there's basically zero vendor tie-in or dependencies that you'd have to get working first, since it's just a pile of tarballs and packages from the distro's package manager on, well, ANY distro.
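For a sense of scale, the whole "pile of tarballs" redeploy can look something like this. The layout and paths are hypothetical; the point is that it's just files plus distro packages:

```
# Hypothetical layout: each service lives in /srv/apps/<name>/ with its compose
# file and bind-mounted data directory next to it.
tar -czf /mnt/backup/apps-$(date +%F).tar.gz -C /srv apps    # back it all up as a plain tarball

# Redeploy on a fresh Debian box:
sudo apt install docker.io docker-compose                    # or Docker's repo + the compose v2 plugin
tar -xzf /mnt/backup/apps-2024-07-30.tar.gz -C /srv
for d in /srv/apps/*/; do (cd "$d" && docker compose up -d); done   # or docker-compose, per what you installed
```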

[–] [email protected] 4 points 5 months ago (1 children)

I have made that migration myself, going from a Raspberry Pi 4 to an N100-based NAS. It was 10 minutes for the software stack, as you said. That's not counting the media migration, which was done in the background over a few hours on WiFi (I had everything on an external hard drive at the time).

That last part is the only thing I would change about my self-hosting setup. Yes, the NAS has a nice form factor, is power efficient, and has so far served my needs very well (no lag like the RPi 4). However, I've since seen that they don't really sell motherboards or parts to repair them; they want you to replace it with another one. Reason 2 is vendor lock-in: depending on the options you select when creating the storage groups/pools (whatever they're called), you could be stuck needing something from the same vendor to read your data if the device stops working but the disks are still salvageable. Reason 3 is that they've had security incidents, so there are a lot of "features" I would never recommend using, to avoid exposing your data to ransomware over the internet. I don't trust their competitors either; I know how commercial software gets made, with the smallest amount of care for security best practices.

[–] [email protected] 3 points 5 months ago (1 children)

Yeah, I just use plain boring desktop hardware. (Oh no! I'm experiencing data corruption due to the lack of ECC!) It's cheap, it's available, it's trivial to upgrade and expand, and there are very few gotchas in there: you get pretty much exactly what it looks like you get.

Also nice is that you can have a Ship of Theseus NAS by upgrading what needs upgrading as you go along, without being tied into entire platform swaps unless it makes sense - my last big rebuild was 3 years ago, but this is basically a 10-year-old NAS at this point.

[–] [email protected] 53 points 5 months ago (2 children)
  • you do not need kubernetes
  • you do not need anything to be "high availability"; that just adds a ton of complexity for no benefit. Nobody will die or go broke if your homelab is down for a few days.
  • tailscale is awesome
  • docker-compose is awesome
  • irreplaceable data gets one offsite backup, one local backup, and ideally one normally offline backup (in case you get ransomwared)
  • yubikeys are cool and surprisingly easy to use
  • don’t offer your services to other people until you are sure you can support it, your backups are squared away, and you are happy with how things are set up.
[–] [email protected] 20 points 5 months ago* (last edited 5 months ago) (18 children)

To piggyback on your "you don't need k8s or high availability":

If you want to optimize your setup in a way that’s actually beneficial on the small, self hosted scale, then what you should aim for is reproducibility. Docker compose, Ansible, NixOS, whatever your pleasure. The ability to quickly take your entire environment from one box and move it to another, either because you’re switching cloud providers or got a nicer hardware box from a garage sale.

When Linode was acquired by Akamai and subsequently renamed, I moved all my cloud containers to Vultr by rsyncing the folder structure to the new VM over SSH, then running the compose file on the new server. The entire migration short of changing DNS records took like 5 minutes of hands-on time.
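A minimal sketch of that kind of move, with a hypothetical host and paths, assuming the compose files and bind-mounted data all live under one directory:

```
rsync -azP /srv/apps/ newbox:/srv/apps/      # copy configs + data to the new VM over SSH
ssh newbox 'for d in /srv/apps/*/; do (cd "$d" && docker compose up -d); done'
# ...then update the DNS records to point at the new server
```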

[–] [email protected] 3 points 5 months ago (3 children)

"You don't need Kubernetes" is a broad statement. It allows for better management of storage and literally gives you a reverse proxy configured with YAML, if you know what you're doing.

[–] [email protected] 9 points 5 months ago (3 children)

Yes, but you don't need Kubernetes from the start.

[–] [email protected] 46 points 5 months ago (1 children)

I wish I had known not to trust closed-source self-hosted applications, such as Plex. It would have saved a lot of time and money.

[–] [email protected] 8 points 5 months ago* (last edited 5 months ago) (1 children)
[–] [email protected] 36 points 5 months ago* (last edited 5 months ago) (2 children)

Plex is a great example here. I've been a Hetzner customer for many, many years, and I bought a lifetime license to Plex. Only to receive, a few months later, a notification from Plex that I was no longer allowed to self-host Plex for myself (and only myself) at Hetzner, and that they would block all access to my self-hosted Plex instance. I tried to ask for leniency or a refund, but that was wasted effort as well.

In short, I was caught in the crossfire when a for-profit company tried to please Hollywood by attempting to reduce piracy, so they could get new VC funding.

...

I am now a happy Jellyfin user and warmly recommend that all Plex users try it; the Jellyfin community is awesome!

(Use your favourite search engine to look up "Hetzner Plex ban" for more details)

[–] [email protected] 9 points 5 months ago (1 children)

@zutto @warlaan Searching about, this was Plex banning the use of Plex on Hetzner's IP block, right? Not a decision made by Hetzner?

[–] [email protected] 11 points 5 months ago* (last edited 5 months ago)

Yes, correct.

I apologize if someone misunderstood my reply; Plex was the bad actor here.

[–] [email protected] 4 points 5 months ago (1 children)

Are you still on Hetzner? How's their customer support in general?

[–] [email protected] 5 points 5 months ago

Still with Hetzner, yeah. I haven't had to deal with Hetzner customer support in recent years at all, but they have been great in the past.

[–] [email protected] 36 points 5 months ago

It is much easier to buy one "hefty" physical machine and run Proxmox with virtual machines for servers than it is to run multiple Raspberry Pis. After living that life for years, I'm a Proxmox shill now. Backups are important (read the other comments), and Proxmox makes backup/restore easy. Because eventually you will fuck a server up beyond repair, you will lose data, and you will feel terrible about it. Learn from my mistakes.

[–] [email protected] 31 points 5 months ago (1 children)

My reason for self hosting is being in control of my shit, and not the cloud provider.

I run Jellyfin, Soulseek, FreshRSS, Audiobookshelf and Nextcloud. All of that runs on a Pi 4 with an SSD attached and is accessible via WireGuard. That SSD is also accessible as an NFS share.

As I already knew Linux very well before I started my own cloud, I didn't really have to learn much.

The biggest resource I could recommend is that GitHub repository where a huge number of awesome self-hosted solutions are linked.

[–] [email protected] 28 points 5 months ago (1 children)
[–] [email protected] 4 points 5 months ago

Yes that one, thanks.

[–] [email protected] 17 points 5 months ago

I'll parrot the top reply from Reddit on that one: to me, self-hosting starts as a learning journey. There's no right or wrong way; if anything, I intentionally do wacky, weird things to test the limits of my knowledge. The mistakes and troubles are where you learn. You don't really understand the significance of good backups until you've had to restore from them.

Even in production, it differs wildly. I have customers for whom I set up bare-metal Ubuntu in some datacenter for cheap; they've been running on that setup for 10 years. Small mom-and-pop shop, they will never need a whole cluster of machines. Then at my day job we're looking at things like Kubernetes and very heavyweight stacks because we handle a lot of traffic.

Some people self-host a Pi-hole on a Raspberry Pi and that's all they need. Some people have entire NAS setups with smart TVs accessing their Plex/Jellyfin servers for the whole extended family. I host my own email, which is a pain in the ass to get working reliably and to keep your IP reputation clean.

I guess the only thing you should know is: you need some time to commit to maintaining your stuff if you don't want it to break or get breached (if it's exposed to the Internet), and a willingness to learn, because self-hosting isn't a turnkey experience. It can be a turnkey installation, but when your SD card or drive fails, you're still on your own to troubleshoot and fix it. You don't set up a Nextcloud server to replace Google Drive with the expectation that you can shove the server in a closet forever. Owning your infrastructure and data comes with a small but very important upkeep time investment.

[–] [email protected] 14 points 5 months ago* (last edited 5 months ago) (2 children)

Benefits:

  • Cheap storage that I can use both locally and as a private cloud. Very convenient for ~~piracy~~ storing all my legally obtained files.

  • Network wide adblocking. Massive for mobile games/apps.

  • Private VPN. Really useful on public networks and for bypassing network restrictions.

  • Gives me an excuse to buy really cool, old server and networking hardware.

As for things I wish I knew... Don't use Windows for servers. Just don't.

SMB sucks, try NFS.
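A minimal NFS sketch, assuming Debian-ish boxes; the paths and subnet are hypothetical:

```
# On the server: install, export a directory, and reload the export table
sudo apt install nfs-kernel-server
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On the client:
sudo apt install nfs-common
sudo mount -t nfs 192.168.1.10:/srv/media /mnt/media
```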

Use Docker; managing 5 or 10 different apps without containers is a nightmare.

Bold of you to assume I'm a computer scientist or engineer or that I have a degree lmao. I just hate ads, subscriptions and network restrictions, so I learned how to avoid those things. As for resources to get started... Look up TrueNAS Scale. It basically does all of the work for you.

[–] [email protected] 11 points 5 months ago* (last edited 5 months ago) (1 children)
    • Learning. If you ever find yourself tired of learning new things, your life is basically done.
    • Cost. You already have an internet connection at home. It's practically a necessity these days, and the connection is likely fast enough for most things. Renting even the most piddly of VPSes is wildly expensive. Just throw a spare machine at it and go wild.
    • Freedom. Your own data is constantly being collected, regurgitated, and sold back to you. More people need to care about this incessant invasion of our lives.
    • Backups. 3 copies, on different forms of storage, in multiple PHYSICALLY distinct locations. Just when that teeny little imp in the back of your mind says "hmm, I should probably back up soon" -- stop everything you're doing and run a backup.
    • Test your recovery! Backups are only good if you can recover from them. Many have lost data because they never actually tested restoring from their backups (a minimal sketch right after this list).
    • Google. Legitimately the best skill you can ever attain is simply being able to search effectively and pick up jargon quickly. Once you have the lingo down, searches become clearer, quicker, and more precise.
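A minimal restore-test sketch, assuming restic as the backup tool and made-up paths; swap in whatever you actually use, the point is the restore plus the diff:

```
# Assuming restic and hypothetical paths
export RESTIC_REPOSITORY=/mnt/backup/restic RESTIC_PASSWORD_FILE=~/.restic-pass
restic init                                         # first time only
restic backup /srv/apps                             # take the backup
restic restore latest --target /tmp/restore-test    # now and then, restore somewhere harmless...
diff -r /srv/apps /tmp/restore-test/srv/apps        # ...and check it actually matches
```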
[–] [email protected] 8 points 5 months ago (2 children)
  1. I've learned a number of tools I'd never used before, and refreshed my skills from when I used to be a sysadmin back in college. I can also do things other people don't loudly recommend but that fit my style (Proxmox + Puppet for VMs), which is nice. If you have the right skills, it's arbitrarily flexible.

  2. What electricity costs in my area: $0.32/kWh at the wrong time of day. Pricier hardware could have saved me money in the long run (quick arithmetic after this list). Bigger drives could also mean fewer of them, and thus less power consumption.

  3. Google, selfhosting communities like this one, and tutorial-oriented YouTubers like NetworkChuck. Get ideas from people, learn enough to make it happen, then tweak it so you understand it. Repeat, and you'll eventually know a lot.
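To put a rough, hypothetical number on point 2 (made-up wattage, that same rate):

```
# Hypothetical: a box drawing an average of 100 W, running 24/7, at $0.32/kWh
# 0.1 kW * 24 h * 365 days = 876 kWh/year; 876 kWh * $0.32/kWh ≈ $280/year
echo '0.1 * 24 * 365 * 0.32' | bc
```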

[–] [email protected] 7 points 5 months ago (1 children)
  1. less is more: it's fine to sunset stuff you don't use enough to justify the CPU cycles, memory, and power it burns
  2. search warrants are a real thing, and you should not trust others to use your infrastructure responsibly, because you will be the one paying for it if they don't.
[–] [email protected] 4 points 5 months ago (1 children)

Is there a story attached to no. 2?

[–] [email protected] 3 points 5 months ago (1 children)

Well, it turns out that when you host a private service that allows others to share files, they might share files that they are not allowed to share. And in return your door gets kicked in in the morning, and suddenly no one wants to take credit for the actual upload anymore.

[–] [email protected] 7 points 5 months ago (1 children)
  1. data stays local for the most part. Every file you send to the cloud becomes property of the cloud. Yeah, you get access, but so do the hosting provider, their third-party resources, and whatever government compliance requests come along. Hard drives are cheap and fast enough.

  2. not quite answering this one, but I very much enjoy learning and evolving. Technology changes, though, and sometimes implementing new software like Caddy/Traefik on an existing setup is a PITA (a minimal sketch after this list)! I suppose if I went back in time, I would tell myself to do it the hard way and save a headache later. I wouldn't have listened to me, though.

  3. Portainer is so nice, but has quirks. It's no replacement for the command line, but wow, does it save time. The console is nerdy, but when time is on the line, find a good GUI.
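For the Caddy mention in point 2, a minimal sketch of the payoff, assuming a service already listening on localhost:8080 and a domain you control (both hypothetical):

```
# Minimal Caddy reverse-proxy config; Caddy fetches and renews TLS certs for a
# public domain automatically. Domain, port, and paths are hypothetical.
cat <<'EOF' | sudo tee /etc/caddy/Caddyfile
example.com {
    reverse_proxy localhost:8080
}
EOF
sudo systemctl reload caddy
```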

[–] [email protected] 7 points 4 months ago (1 children)

Podman quadlets have been a blessing. They basically let you manage containers as if they were simple services. You just plop a container unit file in /etc/containers/systemd/, daemon-reload and presto, you've got a service that other containers or services can depend on.
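A minimal quadlet sketch, with a hypothetical image and port just to show the shape of the unit file:

```
# Drop a .container unit under /etc/containers/systemd/ (image and port are hypothetical)
sudo tee /etc/containers/systemd/whoami.container <<'EOF'
[Unit]
Description=Example whoami container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl start whoami.service   # quadlet generates whoami.service from the unit
```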

[–] [email protected] 3 points 4 months ago (1 children)

Is containers here used in the same context as docker? I'm not familiar with podman.

[–] [email protected] 3 points 4 months ago

Just about, but it's more experimental.

[–] [email protected] 6 points 5 months ago (1 children)

For #2: use the DNS-01 challenge to generate wildcard SSL certs. Saves so much time and so many nerves.
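A minimal sketch, assuming certbot with the Cloudflare DNS plugin; any ACME client and DNS provider plugin works the same way, and the domain and credentials path are hypothetical:

```
# DNS-01 proves domain ownership via a TXT record, so no port 80/443 needs to be exposed
sudo apt install certbot python3-certbot-dns-cloudflare
sudo certbot certonly \
  --dns-cloudflare --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d 'example.com' -d '*.example.com'
```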

[–] [email protected] 6 points 5 months ago (1 children)

I would've wished to know:

  • don't rush things into production.
  • don't offer a service to a friend without really knowing it and having the experience to keep it up when needed.
  • don't make it your life. The services are there to help you, not to be your life.
  • use Docker. Podman is not yet ready for mainstream, in my experience. When services officially move to Podman, it's time to move. Just because Jellyfin offers official documentation for it doesn't mean it'll work with Podman (my experience).
  • just test all services with the base Docker install. If something isn't working, there may be a bug or two. Report it if it is a bug. Hunt a bug down if you can. Maybe it's just something that isn't documented (well enough) for a beginner.
  • start on your own machine before getting a server. A Pi is enough for lightweight stuff but probably not for a fast and smooth experience with e.g. Nextcloud.
  • backup.
  • search for help. If the answer isn't available in a forum, ask for help. Don't waste many, many hours if something isn't working. But research it first and read the documentation.
[–] [email protected] 10 points 5 months ago* (last edited 5 months ago)

Podman is not yet ready for mainstream, in my experience

My experience varies wildly from yours, so please don't take this bit as gospel.

I have yet to find a container that doesn't work perfectly well in Podman. The options may not be the same, and most issues I've found with running containers boil down to things that would be an equal problem in Docker. A sample:

  • "rootless" containers are hard to configure. It can almost always be fixed with "--privileged" or some combination of permission flags. This would be equally true for docker; the only meaningful difference is podman tries to push everything into rootless. You don't have to.
  • network filesystems cause headaches, especially smbfs + sqlite app. I've had to use NFS or ext4 inside a network-mounted image for some apps. This problem is identical for docker.
  • container networking--for specific cases--needs to managed carefully. These cases are identical for docker.

And that's it. I generally run things once from the podman command line, then use podlet to create a quadlet out of that configuration, something you can't do with docker. If you are having any trouble running containers under podman, try the --privileged shortcut, see whether it works, and then double back if you think you really need rootless.
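Roughly, that workflow looks like this; the image name and ports are hypothetical, and podlet prints the generated unit to stdout:

```
# Run it once from the CLI until it behaves...
podman run -d --name whoami -p 8080:80 docker.io/traefik/whoami:latest
# ...then let podlet translate the same invocation into a quadlet unit:
podlet podman run --name whoami -p 8080:80 docker.io/traefik/whoami:latest \
  | sudo tee /etc/containers/systemd/whoami.container
sudo systemctl daemon-reload && sudo systemctl start whoami.service
```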

[–] [email protected] 6 points 5 months ago (2 children)

For #2 and #3, it's probably exceedingly obvious, but I wish I had truly understood SSH, remote VS Code, and enough Git to put my configs on a Git server.

So much easier to manage things now that I’m not trying to edit docker compose files with nano and hoping and praying I find the issue when I mess something up.
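"Enough Git" really is only a handful of commands; a minimal sketch, assuming a directory of compose files and whatever Git host you like (the path and remote are hypothetical):

```
cd /srv/apps                      # hypothetical config directory
git init
git add .                         # keep secrets out via .gitignore or an env file
git commit -m "Initial snapshot of compose files"
git branch -M main
git remote add origin git@git.example.com:me/homelab.git   # hypothetical remote
git push -u origin main
```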

[–] [email protected] 6 points 5 months ago
  1. Our internet goes out periodically, so having everything local is really nice. I set up DNS on my router, so my TLS certs work fine without hitting the internet (a quick sketch after this list).
  2. I wish someone would've taught me how to rip Blu-rays. It wasn't a big deal, but everything online made flashing firmware onto a Blu-ray drive sound super sketchy.
  3. I'm honestly not sure. I'm in CS and am really into Linux, so I honestly don't know what would be helpful. I guess start small and get one thing working at a time. There's a ton of resources online for all kinds of skill levels, and as long as you do one thing at a time, you should eventually see success.
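The local-DNS part can be as small as a couple of dnsmasq entries, assuming dnsmasq on the router (e.g. OpenWrt or Pi-hole); the hostnames and IP are made up:

```
# Answer for homelab hostnames locally so clients resolve them even if the WAN is down.
# Config path varies; this one is typical for a Debian-style dnsmasq install.
cat <<'EOF' | sudo tee /etc/dnsmasq.d/homelab.conf
address=/jellyfin.home.example.com/192.168.1.10
address=/cloud.home.example.com/192.168.1.10
EOF
sudo systemctl restart dnsmasq
```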
[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

For me #2 would be "you have ADHD and won't be able to be medicated so just don't"

I've mentioned elsewhere my server upgrade project took longer than expected.

Just last night I threw it all into the trash because I just can't anymore
