
I already get rate-limited like crazy on Lemmy, and there are only around 60,000 users on my instance. Is each instance really just one server, or are there multiple containers running across several hosts? I'm concerned that federation will mean an inconsistent user experience. Some instances may be beefy, others will be under-resourced… so the average person might conclude that Lemmy overall is slow or error-prone.

Reddit has millions of users. How the hell is this going to scale? Does anyone have any information about Lemmy’s DB and architecture?

I found this post about Reddit's DB from 2012. Not sure whether Lemmy takes a similar approach to keep things fast and reliable as the user base and traffic grow.

https://kevin.burke.dev/kevin/reddits-database-has-two-tables/

[–] [email protected] 1 points 1 year ago

It definitely is part of the ActivityPub protocol, but I've only glanced at the spec so far. I should probably follow that link and implement a toy ActivityPub app to get more familiar with how it works.
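
Roughly what I'm picturing for a toy app is just an inbox endpoint that accepts POSTed activities. This is a minimal sketch in Python/Flask with made-up helper names, not Lemmy's actual code; a real server would also have to verify HTTP signatures and resolve actors:

```python
# Toy ActivityPub-style inbox (illustrative only, not Lemmy's implementation;
# real servers must also verify HTTP signatures and resolve remote actors).
from flask import Flask, request, jsonify

app = Flask(__name__)

def store_object(obj):
    # Hypothetical persistence helper: just print the object's id here.
    print("received object:", obj.get("id"))

@app.route("/inbox", methods=["POST"])
def inbox():
    # Remote instances POST activities (Create, Like, Follow, ...) here.
    activity = request.get_json(force=True)
    if activity.get("type") == "Create":
        store_object(activity.get("object", {}))
    return jsonify({"status": "accepted"}), 202

if __name__ == "__main__":
    app.run(port=8080)
```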

I think there's a benefit to the push model: instances can prioritize whom to push to first if there are scaling issues, instead of having to throttle GETs.
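
Something like the sketch below is what I have in mind (purely hypothetical scheduling logic, not how Lemmy actually federates): when the outbound queue backs up, deliveries could be ordered by how many subscribers each destination serves.

```python
# Hypothetical outbound-delivery prioritization: when under load, push to the
# instances serving the most subscribers first. Illustration only.
import heapq

def prioritized_deliveries(pending):
    """pending: list of (instance_url, subscriber_count) tuples."""
    # heapq is a min-heap, so negate the count to pop the largest first.
    heap = [(-subs, url) for url, subs in pending]
    heapq.heapify(heap)
    while heap:
        neg_subs, url = heapq.heappop(heap)
        yield url, -neg_subs

if __name__ == "__main__":
    queue = [("small.example", 40), ("big.example", 60000), ("mid.example", 900)]
    for url, subs in prioritized_deliveries(queue):
        print(f"push to {url} ({subs} subscribers)")
```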

The downside is that smaller instances get penalized in that scenario, which in turn could cause users to flock to megainstances until the network becomes centralized again.

As I said, GETs are cacheable, so if you slap Cloudflare in front you can handle millions of GETs relatively cheaply.
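
For example, if object fetches set a Cache-Control header, a CDN can absorb most of the read traffic. Quick sketch with a made-up /post/<id> endpoint, not Lemmy's real routes:

```python
# Sketch: serve objects with Cache-Control so a CDN (e.g. Cloudflare) can
# answer repeated GETs without hitting the origin. Route and data are made up.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/post/<post_id>", methods=["GET"])
def get_post(post_id):
    obj = {"id": f"https://example.social/post/{post_id}", "type": "Note"}
    resp = jsonify(obj)
    resp.headers["Cache-Control"] = "public, max-age=300"  # cacheable for 5 minutes
    return resp

if __name__ == "__main__":
    app.run(port=8081)
```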

Maybe it's batched, though? I really need to read the spec. Pushing to thousands of servers every 1/5/10 minutes would certainly give a fair amount of headroom to make it work, I guess.
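
If it is batched, I'd guess the sending side looks roughly like this (again a guess on my part, not something the spec mandates): buffer activities per destination inbox and flush the whole buffer on a timer.

```python
# Rough sketch of batched federation pushes: buffer activities per destination
# and flush on a fixed interval. Purely illustrative; the actual ActivityPub /
# Lemmy delivery strategy may differ.
from collections import defaultdict

FLUSH_INTERVAL = 300  # seconds, i.e. flush every 5 minutes

buffers = defaultdict(list)  # destination inbox URL -> pending activities

def enqueue(inbox_url, activity):
    buffers[inbox_url].append(activity)

def flush():
    for inbox_url, activities in buffers.items():
        if activities:
            # In a real server this would be one signed HTTP POST per inbox.
            print(f"POST {len(activities)} activities to {inbox_url}")
            activities.clear()

if __name__ == "__main__":
    enqueue("https://other.example/inbox", {"type": "Like"})
    enqueue("https://other.example/inbox", {"type": "Create"})
    flush()
    # A long-running server would instead call flush() every FLUSH_INTERVAL seconds.
```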