this post was submitted on 06 Jul 2023

Selfhosted


So I run a small Kubernetes cluster (k3s) backed by MariaDB hosted on a Synology NAS with only HDDs, rather than etcd colocated on the control nodes. For resiliency it's been great: nodes are basically pure compute resources I can wipe out and recreate with ease without worrying about data loss. However, for over a year now I've lived with the constant chatter of active hard drives in my office.
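For anyone curious about the setup, pointing k3s at an external MariaDB instead of the embedded etcd is a single datastore setting. A minimal sketch of the server config (hostname, credentials, and database name here are placeholders):

```yaml
# /etc/rancher/k3s/config.yaml on each server node (placeholder values).
# k3s speaks the MySQL wire protocol, which MariaDB also serves.
datastore-endpoint: "mysql://k3s:password@tcp(nas.local:3306)/k3s"
```

The same value can instead be passed as `--datastore-endpoint` on the `k3s server` command line.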

The Kube DB workload is extremely read-heavy and very active: many thousands of selects per minute with only a handful of writes. Clickclickclickclickclickclick. Seems like a good case for caching, and luckily my NAS has 2 NVMe slots for an SSD cache. I bought a couple of data-center drives with PLP (Kingston DC1000B, probably overkill, but not crazy expensive), popped them in, set up a read/write cache for the database and Kube NFS volumes, and... silence, wonderful silence. It's almost constantly at 100% cache hits. Bonus points: things are faster as well.
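If you want to sanity-check how read-heavy your own datastore is before buying cache drives, the ratio is easy to eyeball from MariaDB's `Com_*` counters (as returned by `SHOW GLOBAL STATUS`). A sketch; the counter values below are made up for illustration:

```python
# Estimate the read:write ratio from MariaDB's global status counters.
# These numbers are illustrative, not from the actual cluster in the post.
counters = {
    "Com_select": 5_400_000,
    "Com_insert": 1_200,
    "Com_update": 2_300,
    "Com_delete": 150,
}

writes = counters["Com_insert"] + counters["Com_update"] + counters["Com_delete"]
reads_per_write = counters["Com_select"] / writes
# A heavily read-biased ratio like this is a good caching candidate.
print(f"~{reads_per_write:.0f} selects per write")
```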

I'm very happy. Never optimized an application for noise levels before 😁.

top 3 comments
[–] [email protected] 5 points 1 year ago

Haha I actually just did the same thing yesterday! I run RKE2 on a Seagate HDD partition and was tolerating the noise. That Seagate was louder than any previous HDD I've had, and yesterday I couldn't stand it anymore, so I moved the data onto a new SSD partition and remounted it at /var/lib/rancher.
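For anyone repeating this, the remount step boils down to an fstab entry pointing the SSD partition at RKE2's data directory (device name and filesystem here are placeholders; copy the old data over first with RKE2 stopped):

```
# /etc/fstab: mount the SSD partition as RKE2's data directory (placeholder device)
/dev/nvme0n1p1  /var/lib/rancher  ext4  defaults,noatime  0  2
```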

Such bliss! Should've done it right from the start.

[–] [email protected] 3 points 1 year ago (1 children)

Make sure your drive firmware is updated. A surprising number of SSDs have firmware bugs that cause premature wear-out.

Personally, if I could fit the cache in RAM, I'd just do that.
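(For the RAM route, a tmpfs mount is the usual sketch; the size and mount point here are placeholders, and the contents vanish on reboot:)

```
# /etc/fstab: a 2 GiB RAM-backed mount (placeholder size/path; not persistent!)
tmpfs  /mnt/ramcache  tmpfs  size=2g,noatime  0  0
```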

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I considered it, but RAM is very limited on the NAS and the cluster nodes; it's my primary bottleneck. It would also be more volatile. The two SSDs are RAID 1 redundant, just like the underlying HDDs, in addition to the built-in power-loss protection on the drives. RAM disks are great if you can spare them and have a UPS, though.
