This post was submitted on 20 Jun 2023
108 points (98.2% liked)

Lemmy


Everything about Lemmy: bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to [email protected].


Today, a bunch of new instances appeared at the top of the user count list. It appears that these instances are all being bombarded by bot sign-ups.

For now, it seems that the bots are especially targeting instances that have:

  • Open sign-ups
  • No captcha
  • No e-mail verification

I have put together a spreadsheet of some of the most suspicious cases here.

If this is affecting you, I would highly recommend considering one of the following options:

  1. Close sign-ups entirely
  2. Only allow sign-ups with applications
  3. Enable e-mail verification + captcha for sign-ups

Additionally, I would recommend pre-emptively banning as many bot accounts as possible, before they start posting spam!

Please comment below if you have any questions or anything useful to add.


Update: on lemm.ee, I have defederated the most suspicious spambot-infested instances.

To clarify: this means small instances with an unnaturally fast explosion in user counts over the past day and very little organic activity. I plan to federate again if any of these instances get cleaned up. I have heard that other instances are planning (or already doing) this as well.

It's not a decision I took lightly, but I think protecting users from spam is a very important task for admins. Full info here: https://lemm.ee/post/197715

If you're an admin of an instance that lemm.ee has defederated and wish to DM me, you can find me on Matrix: @sunaurus:matrix.org

top 50 comments
[–] [email protected] 17 points 1 year ago (1 children)

This should probably be pinned.

[–] [email protected] 9 points 1 year ago (1 children)

Thanks for the heads-up! StarTrek.website has enabled CAPTCHA and purged the bots from our database.

[–] [email protected] 3 points 1 year ago

Starfleet takes changeling infiltrations seriously :P

[–] [email protected] 9 points 1 year ago (1 children)

Here we go: https://overseer.dbzer0.com/

API doc: https://overseer.dbzer0.com/api/

curl -X 'GET' \
  'https://overseer.dbzer0.com/api/v1/instances' \
  -H 'accept: application/json'

It will spit out suspicious instances based on fediverse.observer. You can adjust the threshold to your own preference.

[–] [email protected] 4 points 1 year ago (1 children)

Nice! It would be cool if you could also include the current status of captchas, email verification, and application requirements for each instance.

[–] [email protected] 3 points 1 year ago (1 children)

Tell me how to fetch them and it will. ;)

[–] [email protected] 4 points 1 year ago (1 children)

I think the easiest option is to just iterate through the list of suspicious instances and check {instance_url}/api/v3/site for each of them. The relevant keys of the response JSON are site_view.local_site.captcha_enabled, site_view.local_site.registration_mode, and site_view.local_site.require_email_verification.

Since it's a bunch of separate requests, it probably makes sense to do them in parallel, and also to cache the results for at least a while.
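
For example, something along these lines would do it (a rough sketch only; the domains are placeholders, and the jq paths are just the keys mentioned above, so verify them against the Lemmy version you're querying):

# Rough sketch: print registration settings for a list of instances.
# The domains below are placeholders - pull the real list from the
# Overseer output or the spreadsheet instead.
for domain in instance-one.example instance-two.example; do
  curl -s -H 'accept: application/json' "https://${domain}/api/v3/site" |
    jq -r --arg d "$domain" \
      '$d + ": captcha=" + (.site_view.local_site.captcha_enabled | tostring)
          + ", registration=" + .site_view.local_site.registration_mode
          + ", email_verification=" + (.site_view.local_site.require_email_verification | tostring)'
done

For parallelism you could feed the domain list through xargs -P instead of a plain loop, and writing the output to a file gives you a cheap cache so the same instances aren't hit repeatedly.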

[–] [email protected] 3 points 1 year ago

It occurs to me that this kind of thing is better left to fediverse.observer, as it's already set up to poll instances and gather data. I would suggest asking them to ingest and expose this data as well.

[–] [email protected] 8 points 1 year ago (1 children)

CAPTCHA is the bare minimum. Who the hell turns it off?

[–] [email protected] 6 points 1 year ago (6 children)

There is an argument to be made that captchas can be automatically bypassed with some effort.

OTOH, the current wave of bots is quite clearly favoring instances with captcha disabled, so it's evidently acting as at least a small deterrent.

[–] [email protected] 7 points 1 year ago (1 children)

Sometimes, security just means not being the low-hanging fruit.

[–] [email protected] 4 points 1 year ago

Having no captcha is like leaving the door open and hoping no one breaks in, instead of at least closing it (a closed door dramatically reduces the chance of a break-in, even if it's not locked).

[–] [email protected] 1 points 1 year ago

Some advanced OCR can defeat the easier ones, but that's unusual.

[–] [email protected] 8 points 1 year ago (2 children)

99% of fedi instances should require sign-ups with applications and email. It does not make sense to let users in indiscriminately unless you have staff handling moderation around the clock.

[–] [email protected] 16 points 1 year ago (2 children)

We're trying to capture the Reddit refugees as well. It's a fine line to walk.

[–] [email protected] 8 points 1 year ago (1 children)

Email + captcha should be doable, right?

[–] [email protected] 7 points 1 year ago

Yes, that's the bare minimum until we get a better toolset.

[–] [email protected] 4 points 1 year ago (1 children)

Agreed. An application that must be human-reviewed is a very large gate; many people will see it and just close the site. Myself included.

[–] [email protected] 3 points 1 year ago

Email verification + captcha should be enough. The application part is cringe and a bad idea, unless you really want to be your own small high school clique and don't have any growth ambitions, which is perfectly fine, but again should not be expected of general instances looking to welcome Redditors.

[–] [email protected] 8 points 1 year ago (1 children)

This might be related, but I've noticed that someone is [likely automatically] following my posts and downvoting them. Kind of funny in a 'verse without karma.

[–] [email protected] 3 points 1 year ago (1 children)

Karma may mean nothing but the information space is a strategic domain.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

It was brought to my attention that my instance was hit with the spam-bot registrations. I've disabled registration and deleted the accounts from the DB. Is there anything else I can do to clear the user stats on the sidebar? EDIT: I have reversed the stats too.

[–] [email protected] 3 points 1 year ago

You can do this by updating site_aggregates.users in your database (WHERE site_id = 1).
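
For example, with a typical docker-compose deployment (the service, user, and database names here are assumptions, so adjust them to your setup):

# Sketch: reset the sidebar user count after deleting bot accounts.
# Replace 42 with your real number of local users; site_id = 1 is the
# local site. Service/user names assume a standard docker-compose setup.
docker compose exec postgres \
  psql -U lemmy -c "UPDATE site_aggregates SET users = 42 WHERE site_id = 1;"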

[–] [email protected] 4 points 1 year ago (1 children)

I'm noobish, but could they be defederated until they get their act together before they spam everybody?

[–] [email protected] 4 points 1 year ago

Yes, and I believe some instances are already doing this.

[–] [email protected] 4 points 1 year ago (2 children)

Sounds like a spez-sponsored attack on Lemmy.

[–] [email protected] 4 points 1 year ago

Or just the unavoidable spam-bot accounts showing up while it's still easy, with instance operators still unprepared.

[–] [email protected] 2 points 1 year ago (1 children)

I highly doubt spez did this. Reddit is currently doing fine. Even if it all goes away, he's sitting on over a decade of genuine human conversations he can sell to AI companies and make millions. He isn't worried.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

Any tips on how to get rid of all the spam accounts? I have been affected by this as well, and thankfully captcha stopped them, but about 100 bots signed up before I could stop them.

Normally I'd just look through all the accounts and pick out the 4 or so users that are real, but there is no apparent way to view every user account as an admin.

Edit: There is a relevant issue open on the lemmy-ui repo, for those interested: https://github.com/LemmyNet/lemmy-ui/issues/456

[–] [email protected] 2 points 1 year ago (2 children)

Did you figure out how to clean it up? You can see a list of users in your local_user table.
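
If it helps, a query along these lines lists every local account with its sign-up date and e-mail verification status, which makes the bot accounts easy to spot (a sketch only; the table and column names match recent Lemmy schemas, but verify them against your version, and the container/user names are assumptions for a standard docker-compose setup):

# Sketch: list local accounts, newest first, with e-mail verification status.
docker compose exec postgres psql -U lemmy -c \
  "SELECT p.name, p.published, lu.email, lu.email_verified
   FROM local_user lu
   JOIN person p ON p.id = lu.person_id
   ORDER BY p.published DESC;"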

[–] [email protected] 2 points 1 year ago (1 children)

Fun fact: they're removing captcha in the next release.

I won't be upgrading, and I anticipate I'll be defederating from any instance that upgrades to v0.18.

Source - https://github.com/LemmyNet/lemmy/issues/2922

[–] [email protected] 2 points 1 year ago

That is true, but because of the recent spam wave there is also an issue to re-add captcha. https://github.com/LemmyNet/lemmy/issues/3200

We'll just have to see how it all shakes out.

[–] [email protected] 4 points 1 year ago

One thing I like about Lemmy is having to put in an application and wait for approval. I know I was vetted, and that others here were too.

I figure that alone could keep out most of the trolls, and definitely the bots.

[–] [email protected] 3 points 1 year ago

I know from talking to admins when phpBB was really popular that fighting spammers and unsavory bots was the big workload in running a forum. I'd expect the same for Fediverse instances. I hope a system can be worked out to make it manageable.

As a user I don't have a big problem with mechanisms like applications for the sake of spam control. It's hugely more convenient when an account can be created instantaneously, but I understand the need.

I do wonder how the fediverse is going to deal with self-hosting bad actors. I would think some kind of vetting process for federation would need to exist. I suppose you could rely on each admin to deal with that locally, but that does not sound like an efficient or particularly effective solution.

[–] [email protected] 3 points 1 year ago (1 children)

I'm sure it's different per instance, but is there any discussion on what is being done with the collected emails?

I understand the need to fight bots and spam, but there are also those of us who don't want to associate emails with accounts so some privacy-related way of handling this would be appreciated.

[–] [email protected] 2 points 1 year ago (1 children)

There are plenty of services that provide single-use or disposable email addresses.

[–] [email protected] 1 points 1 year ago

True, I use one myself.

That's a cool instance you're running over there, by the way! I appreciate it.

[–] [email protected] 2 points 1 year ago (1 children)

Are you already defederating from suspicious instances? If not, what are you planning to do?

[–] [email protected] 2 points 1 year ago

Today, a bunch of new instances appeared in the top of the user count list. It appears that these instances are all being bombarded by bot sign-ups.

Yup, I noticed this as well.

Hopefully the mods of those instances will notice this and remove the accounts quickly! Either way, I think the mods of all instances, and of all communities, had better brace themselves for incoming spam and hate speech.

[–] [email protected] 1 points 1 year ago (1 children)

Maybe this is what's implied, or I'm just being silly: what is to stop a bad actor from spinning up a Lemmy instance, creating a bunch of bot accounts with no restrictions, and spamming other instances? Would the only course of action be for the non-spam instances to individually defederate the spam ones? Seems like that would be a bit of a cat-and-mouse situation. I'm not too familiar with the inner workings and the tools Lemmy has that would be useful in this situation.

[–] [email protected] 4 points 1 year ago

They can do this, and it is cat and mouse. But...

  1. It generally costs money to stand up an instance. It often requires a credit card, which reduces anonymity. This will dissuade many folks.
  2. A malicious instance can be defederated, so it might not be all that useful.
  3. People can contact the security team at the host providing infra/internet to the spammer. Reputable hosts will kill the account of a spammer, which again is harder to duplicate if the host requires payment and identity info.
  4. Malicious hosts that fail to address repeated abuse reports can be IP-blocked.
  5. Eventually, Lemmy features can be built to protect against this kind of thing by delaying federation, requiring admin approval, or shadow-banning new instances during a trial period.

Email has shown us that there's a playbook that kind of works here, but it's not easy or pleasant.

[–] [email protected] 1 points 1 year ago

I suspect that there's going to need to be some analysis software that can run on the kbin and Lemmy server logs looking for suspicious stuff.

Say, for instance, a ton of accounts come from one IP. That's not a guarantee that they're malicious -- it could be some institution that NATs connections or something. But it's probably worth at least looking at, and if someone signed up 50 accounts from a single IP, that's probably at least worth red-flagging to see whether they're actually acting like normal accounts. Especially if the email provider is identical (i.e. they're all from one domain).
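
For example, assuming an nginx reverse proxy in front of Lemmy with the default access log format (the log path, format, and the /api/v3/register endpoint are assumptions to verify against your own setup), something like this counts sign-up attempts per client IP:

# Sketch: count registration attempts per client IP from an nginx access log.
grep 'POST /api/v3/register' /var/log/nginx/access.log |
  awk '{print $1}' | sort | uniq -c | sort -rn | head -20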

Might also want to have some kind of clearinghouse for sharing information among instance admins about abuse cases.

One other point:

I would recommend pre-emptively banning as many bot accounts as possible,

A bot is not intrinsically a bad thing. For example, I was suggesting yesterday that it would be neat if there were a bot that posted equivalent nitter.net links in response to comments containing twitter.com links, for people who want to use those. There were a number of legitimately helpful bots that ran on Reddit -- I personally got a kick out of the haiku bot, which told a user when their comment was a haiku -- and legitimately helpful bots that run on IRC.

Though perhaps it would be a good idea to either adopt a convention (bot names must end in "Bot") or have some other way for bots to disclose that they are bots and provide contact information for a human, in case they malfunction and start causing problems.

But if someone is signing up hordes of them, then, yeah, that's probably not a good actor. Shouldn't need a ton of accounts for any legit reason.

[–] [email protected] 1 points 1 year ago

Every time I see that moustache I know to pay attention!
