[–] [email protected] 2 points 1 year ago (2 children)

Where is this screenshot from? Link me plz :)

[–] [email protected] 2 points 1 year ago (3 children)

When I started writing this comment, the post was 47 minutes old. If I understand the linked page properly, lemmy.world has been functional (all green checkmarks) for the past 10 minutes, which is the furthest back the data goes. All the other instances are all green, except for lemmy.one, which is all red. I'm assuming that 47 minutes ago, lemmy.world had red boxes?
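If you want to sanity-check a status page like that yourself, here's a minimal probe sketch. I'm assuming the instances expose the standard Lemmy `/api/v3/site` endpoint; the host list and timeout are just examples:

```python
# Minimal liveness probe for Lemmy instances. Anything other than a
# 200 response within the timeout is treated as "red".
import requests

INSTANCES = ["lemmy.world", "lemmy.one", "lemmy.ml"]

def is_up(host: str, timeout: float = 5.0) -> bool:
    try:
        resp = requests.get(f"https://{host}/api/v3/site", timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

for host in INSTANCES:
    print(f"{host}: {'green' if is_up(host) else 'red'}")
```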

Maybe a different link would have explained the point better, but I don't really see how a 30-minute (??) server outage during an upgrade is a compelling reason to avoid a large instance. Are you suggesting it's better to use a server whose admins don't upgrade? If not, is there really any size of server that would meaningfully avoid this kind of occasional disruption? It seems to me that the dynamism of the environment will inevitably lead to various problems; that's part of the experience. TBH, threadiverse uptime on the whole is pretty impressive for such a ragtag group of admins and devs.

I have accounts on some smaller servers, but they have their drawbacks too. Using a bigger server is more convenient because the people and content are already there. It's easier. I didn't plan to use lemmy.world, but I ended up making an account there to use sometimes.

I think in a year or so the situation might be different. I see the ideological point and I would like it to be true; maybe the technology will catch up. I think it would be nice to be able to programmatically seed content (rough sketch below), but maybe that would be obnoxious to admins.
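To make the seeding idea concrete, here's a rough sketch against the Lemmy HTTP API as it stood around 0.18.x; the instance, account, password, and community ID are all placeholders, and I'm assuming the body-level `auth` field the API used at the time:

```python
# Hypothetical content seeder using 0.18.x-era Lemmy API endpoints.
# The instance, credentials, and community_id below are placeholders.
import requests

INSTANCE = "https://lemmy.example"  # placeholder instance

# Log in to obtain a JWT for authenticated requests.
login = requests.post(f"{INSTANCE}/api/v3/user/login", json={
    "username_or_email": "seed_bot",  # placeholder account
    "password": "correct horse battery staple",  # placeholder password
})
jwt = login.json()["jwt"]

# Create one post in a target community; loop over a list to "seed".
requests.post(f"{INSTANCE}/api/v3/post", json={
    "name": "Interesting article",  # post title
    "url": "https://example.com",   # optional link
    "community_id": 123,            # placeholder community ID
    "auth": jwt,                    # 0.18.x passed the JWT in the request body
})
```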

[–] [email protected] 2 points 1 year ago (1 children)

Does anyone have any idea what's going on with lemmy.one? It's been down for quite a while now.

[–] [email protected] 2 points 1 year ago

Probably some hiccup in upgrading to 0.18.3

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (8 children)

Lemmy's machine-generated ORM SQL and hand-made, flawed PostgreSQL TRIGGER logic are so bad and bloated. The developers on GitHub brag about "high performance". It's unbelievable.

In reality, small instances work because Lemmy has so many SQL performance problems that it is mostly only stable with few posts and comments in the database. The developers did everything they could to avoid using Lemmy itself to discuss [email protected] topics, and they hang out on Matrix chat to avoid using the constantly-crashing servers they created.

If you go to a server with no users creating comments and posts, one that only has a tiny amount of data, it does crash a lot less.
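To illustrate the kind of per-row trigger pattern I'm talking about (this is a made-up schema sketched through psycopg2, not Lemmy's actual code): every comment INSERT also pays for an UPDATE on a shared counter row, and under real traffic those hot counter rows turn into lock contention.

```python
# Illustrative only: a per-row aggregate-counter trigger, not Lemmy's
# actual schema. Each comment INSERT triggers a second write to a
# shared counter row, which serializes concurrent inserts on hot posts.
import psycopg2

conn = psycopg2.connect("dbname=demo")  # placeholder DSN
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS post_aggregates (
        post_id  bigint PRIMARY KEY,
        comments bigint NOT NULL DEFAULT 0
    );
    CREATE TABLE IF NOT EXISTS comment (
        id      bigserial PRIMARY KEY,
        post_id bigint NOT NULL
    );
    CREATE OR REPLACE FUNCTION bump_comment_count() RETURNS trigger AS $$
    BEGIN
        -- Fires once per inserted row: every comment write implies an
        -- extra UPDATE against the post's counter row.
        UPDATE post_aggregates
           SET comments = comments + 1
         WHERE post_id = NEW.post_id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
    DROP TRIGGER IF EXISTS comment_count ON comment;
    CREATE TRIGGER comment_count
        AFTER INSERT ON comment
        FOR EACH ROW EXECUTE FUNCTION bump_comment_count();
""")
conn.commit()
```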

[–] [email protected] 2 points 1 year ago (22 children)

Oh geez, why would the Lemmy developers want to have any kind of discussion with you over at the [email protected] community, which you moderate?

[–] [email protected] 2 points 1 year ago (6 children)

I'm guessing you're rocketderp, based on your instability? Looks like you're the only one being nonsensical in that thread. Your PR isn't a PR. You went after someone who was being helpful, and you still acted like a child with your advice about GitHub bugs and PRs while not using them properly. The other person who agrees with you is at least calm and rational. I'm guessing you've never worked on a group project before, or had anyone disagree with you. Your commits broke the pipeline. Someone even tried calming you down, but you wouldn't listen.

You aren't the victim here. You're the bully.

[–] [email protected] 1 points 1 year ago (1 children)

I moved to self-hosting my own instance :)
