this post was submitted on 01 Jul 2023
3756 points (97.2% liked)

Lemmy.World Announcements

Looks like it works.

Edit: we still see some performance issues. Needs more troubleshooting.

Update: Registrations re-opened. We encountered a bug where people could not log in; see https://github.com/LemmyNet/lemmy/issues/3422#issuecomment-1616112264 . As a workaround we re-opened registrations.

Thanks

First of all, I would like to thank the Lemmy.world team and the 2 admins of other servers @[email protected] and @[email protected] for their help! We did some thorough troubleshooting to get this working!

The upgrade

The upgrade itself isn't too hard. Create a backup, and then change the image names in the docker-compose.yml and restart.
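As a sketch, the version bump is just editing the image tags (service names and the exact rc tag here are illustrative; `dessalines/lemmy` is the upstream image, but the real file may differ):

```yaml
# docker-compose.yml — illustrative fragment only
services:
  lemmy:
    image: dessalines/lemmy:0.18.1-rc.4      # bumped from the previous 0.17.x tag
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.1-rc.4   # UI image bumped to the matching tag
```

then, after taking the backup, something like `docker-compose pull && docker-compose up -d` to restart on the new images.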

But, like the first 2 tries, after a few minutes the site started getting slow until it stopped responding. Then the troubleshooting started.

The solutions

What I had noticed previously is that the lemmy container could reach around 1500% CPU usage; above that the site got slow. Which is weird, because the server has 64 threads, so 6400% should be the max. So we tried what @[email protected] had suggested before: we created extra lemmy containers (and extra lemmy-ui containers) to spread the load, and used nginx to load-balance between them.
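The load-balancing piece could look something like this (a sketch with hypothetical container names and lemmy's default ports; the actual config surely differs):

```nginx
# nginx.conf fragment — illustrative only
# Requests are distributed round-robin across the replicated containers.
upstream lemmy {
    server lemmy-1:8536;      # lemmy's default HTTP port
    server lemmy-2:8536;
    server lemmy-3:8536;
    server lemmy-4:8536;
}

upstream lemmy-ui {
    server lemmy-ui-1:1234;   # lemmy-ui's default port
    server lemmy-ui-2:1234;
}
```

The `server` blocks that proxy API and UI traffic would then `proxy_pass` to these upstreams instead of a single container.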

Et voilà. That seems to work.

Also, as suggested by him, we start the lemmy containers with the scheduler disabled, and run one extra lemmy container with the scheduler enabled that doesn't serve any other traffic.

There's still room for improvement, and probably new bugs, but we're very happy lemmy.world is now on 0.18.1-rc, which fixes a lot of bugs.

50 comments
[–] [email protected] 4 points 1 year ago

This looks like it’s exposed some weaknesses, and it sounds like the team has it under control.

I’m moving right now and can’t dedicate time to development, but once I’m up and running I should be able to start contributing.

Scalability is always a concern with these sites, so there’s plenty to do to improve that

[–] [email protected] 4 points 1 year ago (1 children)

Awesome! Loading issues are still the bane of Lemmy's existence though, at least in my experience. Everything just loads so slowly. Sorting is still broken as well: communities that I KNOW are active just show as blank for me no matter what I sort by.

[–] [email protected] 4 points 1 year ago (1 children)

Is there an issue with the API? (Because the API wrapper lemmy-js-client doesn't work on login.) I tried it yesterday but not today yet. I will test it when I can :)

[–] [email protected] 4 points 1 year ago

Even using Tor, the site load-times seem a lot snappier. Exciting times.

[–] [email protected] 4 points 1 year ago

Special thanks to the other guys for helping make this update possible. Site is snappier than ever and the UI looks fantastic.

[–] [email protected] 4 points 1 year ago

I like it.
The site feels a lot better to me, and seems significantly gentler in terms of browser resource consumption.

[–] [email protected] 4 points 1 year ago

Thank you!!

[–] [email protected] 3 points 1 year ago

Nice work :)

[–] [email protected] 3 points 1 year ago

Just want to say thank you. Your hard work is very much appreciated.

[–] [email protected] 3 points 1 year ago (1 children)

obviously not critical, but it looks like there's a small sidebar bug (or feature?) that puts the pic near the instance name if it is the first thing in its description?

[–] [email protected] 3 points 1 year ago

I think that's a feature. But not 100% sure 😅
But honestly, I like the look. If it is a bug, it should become a feature 🤣

[–] [email protected] 3 points 1 year ago

Working well here and can use Jerboa again. Although wefwef is really growing on me!

Edit: couldn't post from Jerboa, got network error. But wefwef worked.

[–] [email protected] 3 points 1 year ago

Excellent work!

[–] [email protected] 3 points 1 year ago

Seems to work well enough!

[–] [email protected] 3 points 1 year ago

Let us know where donations can go; I suspect a stacked docker-compose will reach limits very quickly.

[–] [email protected] 3 points 1 year ago

Logging in works now! Also got 2FA enabled without issues.

[–] [email protected] 3 points 1 year ago

Amazing work! It seems much more performant now, everything seems to be loading faster.


[–] [email protected] 3 points 1 year ago (4 children)

Tried to log in but nothing happened except a "?" was added to the link. Tried deleting data, cookies, etc. but the problem still persists. (Comment from another instance.)

[–] [email protected] 3 points 1 year ago

Thanks for all the time and work you put towards making this community better! It's really appreciated!


[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

It seems that I can't log out in my browser. The page simply reloads after clicking the button.

[–] [email protected] 3 points 1 year ago

🙌 Great work team!!!

[–] [email protected] 3 points 1 year ago (1 children)

we created extra lemmy containers to spread the load. (And extra lemmy-ui containers).

Is the Rust HTTP server running into thread limits? Database connection pooling? All kinds of internal questions about that solution.

[–] [email protected] 3 points 1 year ago

I don't know Rust. But there were 150 database connections set up by lemmy, and only about 15 of them were used; the rest sat idle.
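For reference, one standard way to see that breakdown on the Postgres side is the built-in `pg_stat_activity` view (the `usename = 'lemmy'` filter is an assumption about the DB user's name):

```sql
-- Count lemmy's connections by state; 'idle' rows are open but unused connections.
SELECT state, count(*)
FROM pg_stat_activity
WHERE usename = 'lemmy'   -- assumption: the lemmy DB user is named 'lemmy'
GROUP BY state;
```

A large 'idle' count relative to 'active' would match the 150-open / ~15-used observation above.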

[–] [email protected] 3 points 1 year ago

This is awesome. Was a fun read too. Super cool to see what was going on behind the scenes.
