Lemmy Server Performance
lemmy_server uses the Diesel ORM, which automatically generates SQL statements. There are serious performance problems in June and July 2023 preventing Lemmy from scaling. Topics include caching, PostgreSQL extensions for troubleshooting, client/server code, SQL data, server operator apps, and a server operator API (performance and storage monitoring), etc.
I'm assuming the federation endpoint (the API for incoming server-to-server transactions) doesn't have a rate limit? I haven't validated that assumption, though.
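If that turns out to be true, one stopgap is rate limiting at the reverse proxy instead of in Lemmy itself. A minimal sketch, assuming Lemmy sits behind nginx and that incoming ActivityPub deliveries hit an /inbox path (the path and the lemmy:8536 upstream are assumptions, not taken from the Lemmy docs):

```nginx
# Sketch only: throttle incoming federation deliveries per remote IP.
# Goes in the http {} block; /inbox and lemmy:8536 are assumptions.
limit_req_zone $binary_remote_addr zone=federation:10m rate=30r/s;

server {
    listen 80;

    location /inbox {
        # Allow short bursts, reject sustained floods.
        limit_req zone=federation burst=60 nodelay;
        proxy_pass http://lemmy:8536;
    }
}
```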
Something else: from what I've seen so far, Lemmy has no application-specific logging and just dumps everything into the system log. I really think operators need some sense of how many signups, logins, posts, comments, and communities they are getting per hour/day/week - meanwhile, external websites are already out there publishing counts of communities and users.
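Until that exists, operators can pull rough counts straight out of Postgres. A sketch, assuming Lemmy's post, comment, and person tables each carry a `published` timestamp column - verify against your own schema first:

```sql
-- Rough per-day activity counts (sketch; assumes a `published`
-- timestamp column on Lemmy's post, comment, and person tables).
SELECT date_trunc('day', published) AS day,
       count(*)                     AS new_posts
FROM post
GROUP BY day
ORDER BY day DESC
LIMIT 7;

-- Swap `post` for `comment` or `person` to count comments or signups.
```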
Maybe @[email protected] can shed some light on this. Not sure if they'll find the time, but it would be great to get some more input.
The docker-compose.yml file in the docker folder has a config for Postgres logging; we use it to diagnose performance issues in prod (a sketch of that kind of override is below). There's a DB tag on the Lemmy issue tracker, and I suggest using that to track performance issues.

The problem is we need to get some data out of the big sites - Beehaw, Lemmy.ml, Lemmy.world - so that we can see what it is like having far more comments, likes, federation activity, and interactive user load.
Is someone with a big instance willing to publish their logs?
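For reference, the kind of logging override being discussed looks roughly like this. This is a sketch of the general pattern, not a copy of the file from the Lemmy repo; the image tag and exact flags are assumptions:

```yaml
# Sketch of a docker-compose Postgres service with verbose statement
# logging for diagnosis (image tag and flags are assumptions).
services:
  postgres:
    image: postgres:15-alpine
    command:
      - postgres
      - -c
      - log_statement=all            # log every statement
      - -c
      - log_min_duration_statement=0 # log every statement's duration
```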
These log settings are not very good for production. The DB would spend more time logging than working.
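A more production-friendly compromise is to log only slow statements in full and sample the rest. A sketch using standard PostgreSQL settings (the sampling parameters require PostgreSQL 13 or newer, and the thresholds here are arbitrary examples):

```yaml
# Sketch: log statements slower than 500 ms in full, and sample 5% of
# statements slower than 100 ms (PostgreSQL 13+ parameters).
services:
  postgres:
    image: postgres:15-alpine
    command:
      - postgres
      - -c
      - log_min_duration_statement=500
      - -c
      - log_min_duration_sample=100
      - -c
      - log_statement_sample_rate=0.05
```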
But: we can simulate these kinds of load with local tooling. My first tests look quite promising, if I can only find the bug in my old pg-exporter that I wrote for my last employer ;) (yes, it is open source).
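For anyone who wants to reproduce this locally, pgbench (which ships with PostgreSQL) can replay a custom statement mix against a copy of the schema. A sketch; the database name, user, and script file are assumptions:

```sh
# Sketch: 20 concurrent clients, 4 threads, 60 seconds, running a custom
# script (e.g. comment inserts) against a local copy of the Lemmy DB.
# Database name, user, and comment_insert.sql are assumptions.
pgbench -h localhost -U lemmy -c 20 -j 4 -T 60 -f comment_insert.sql lemmy
```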