70 points · submitted 1 month ago by [email protected] to c/[email protected]

This is just a follow-up to my prior post on latencies increasing with increasing uptime (see here).

There was a recent update to lemmy.ml (to 0.19.4-rc.2) ... and everything is so much snappier. AFAICT, there isn't any obvious reason for this in the update itself(?) ... so it'd be a good bet that there's some memory leak or something that slows down some of the actions over time.
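In case it helps to be concrete about what I mean by "latencies": the check is basically just timing the same API call at intervals and watching the trend as uptime grows. A rough sketch of the idea only (the instance URL and endpoint below are placeholders; posting/replying needs auth, but a read-only call is enough to watch for the pattern):

```python
import time
import requests

# Placeholder instance and endpoint -- swap in whichever action you care about.
INSTANCE = "https://lemmy.ml"
ENDPOINT = f"{INSTANCE}/api/v3/post/list"

def sample_latency(n=5):
    """Time n requests and return the average round-trip in seconds."""
    total = 0.0
    for _ in range(n):
        start = time.monotonic()
        requests.get(ENDPOINT, params={"limit": 1}, timeout=30)
        total += time.monotonic() - start
    return total / n

if __name__ == "__main__":
    # Log one sample every 10 minutes; if latency climbs with uptime,
    # the trend shows up in the output.
    while True:
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {sample_latency():.3f}s")
        time.sleep(600)
```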

Also ... interesting update ... I didn't pick up that there'd be some web-UI additions and they seem nice!

[-] [email protected] 26 points 1 month ago

There were optimizations related to database triggers; those are probably responsible for the speedup.

https://github.com/LemmyNet/lemmy/pull/4696

[-] [email protected] 22 points 1 month ago

For the moment at least. Whatever problem we had before, it seemed to get worse over time, eventually requiring a restart. So we’ll have to wait and see.

[-] [email protected] 10 points 1 month ago

Well, I've been on this instance through a few updates now (since Jan 2023), and my impression is that it's a pretty regular pattern (i.e., certain API calls, like those for replying to a post/comment or even posting, show increasing latencies as uptime goes up).

[-] [email protected] 1 points 1 month ago

Sounds exactly like the problem I fixed (and mostly caused):

https://github.com/LemmyNet/lemmy/pull/4696

[-] [email protected] 1 points 1 month ago

Nice! Also nice to see some SQL wizardry going into Lemmy!

[-] [email protected] 5 points 1 month ago

My server seems to get slower until it requires a restart every few days; hoping this provides a fix for me too 🤞

[-] [email protected] 5 points 1 month ago

Try switching to PostgreSQL 16.2 or later.

[-] [email protected] 3 points 1 month ago

Nothing in particular, but there was a strange bug in previous versions that, in combination with Lemmy, caused a small memory leak.

[-] [email protected] 1 points 1 month ago

In my case it's Lemmy itself that needs to be restarted, not the database server. Is this the same bug you're referring to?

[-] [email protected] 1 points 1 month ago

Yes, restarting Lemmy somehow resets the memory use of the database as well.
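If you want to see it for yourself: that behaviour would fit memory accumulating in the Postgres backends that serve Lemmy's long-lived connections (restarting Lemmy closes them, which kills those backends). A rough way to watch it, assuming Postgres runs on the same Linux host and psycopg2 is available (the connection details and database name are placeholders):

```python
import psycopg2

# Placeholder connection details -- point this at your Lemmy database.
conn = psycopg2.connect("dbname=lemmy user=postgres host=/var/run/postgresql")

with conn, conn.cursor() as cur:
    # PIDs of the server backends serving Lemmy's long-lived connections.
    cur.execute(
        "SELECT pid, backend_start FROM pg_stat_activity WHERE datname = %s",
        ("lemmy",),
    )
    backends = cur.fetchall()

for pid, started in backends:
    # VmRSS from /proc is a rough proxy for per-backend memory on Linux.
    try:
        with open(f"/proc/{pid}/status") as f:
            rss = next(line.split()[1] for line in f if line.startswith("VmRSS"))
        print(f"backend {pid} (started {started}): {rss} kB")
    except FileNotFoundError:
        pass  # backend exited between the query and the /proc read
```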

[-] [email protected] 1 points 1 month ago

Hm, weird bug. Thanks for the heads up ❤️ I've been using the official Ansible setup, but it might be time to switch away from it.

[-] [email protected] 4 points 1 month ago

Reddthat has 0.19.4 too, and it does indeed feel snappier.

[-] [email protected] 2 points 1 month ago

Interesting. It could be for the same reason I suggested for lemmy.ml, though. Do you notice latencies getting longer over time?

[-] [email protected] 3 points 1 month ago

It's a smaller server, so I guess latency issues would appear at a slower pace than on lemmy.ml.

[-] [email protected] 2 points 1 month ago

Makes sense ... but still ... you're noticing a difference. Maybe a "boiling frog" situation?

[-] [email protected] 2 points 1 month ago

I would say it still feels snappier today than before the update (a couple of weeks ago?), so definitely an improvement.
