this post was submitted on 29 Jun 2023
251 points (100.0% liked)

Reddit Migration

84 readers
2 users here now

### About Community

Tracking and helping #redditmigration to Kbin and the Fediverse. Say hello to the decentralized and open future. For the latest Reddit blackout info, see here: https://reddark.untone.uk/

founded 1 year ago

most of the time you'll be talking to a bot there without even realizing. they're gonna feed you products and ads woven into conversations, and the AI can be tuned so its output reflects corporate interests. advertisers are gonna be able to buy access and run campaigns. based on their input, the AI can generate thousands of comments and posts, all supporting the advertiser's agenda.

for example you can set it to hate a public figure and force negative commentary into conversations all over the site. you can set it to praise and recommend your latest product. like when a pharma company has a new pill out, they'll be able to target self-help subs and flood them with fake anecdotes and user testimonials claiming the new pill solves all your problems and you should check it out.

the only real humans you'll find there are the shills that run the place, and the poor suckers that fall for the scam.

it's gonna be a shithole.

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (1 children)

It's a bit like finding a single thread and unravelling it.
I used to get dozens of these things banned a day; there were a lot of us bot hunters reporting bots.

They sometimes sound "off": they stop mid-sentence, reply to people as if they think it's the OP, reply as if they are the OP, or post 💯 by itself. Or they have a username that fits a recent bot pattern (e.g. appending "rp" to existing usernames).

If you see one slip up once, then looking at its other comments will often lead you to new bots, simply because they are all attracted to the same positions (prominent, but a few comments deep).

Certain subs like AITA and r/memes are more prone to them so I would go there for easy leads.
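The username tell described above can be sketched as a simple check. This is just an illustration, assuming you already have a set of known legitimate usernames to compare against (the names here are made up):

```python
def matches_rp_pattern(username, known_usernames):
    """Flag usernames that look like an existing username with "rp"
    appended -- one of the bot naming patterns described above.
    Returns the impersonated username, or None if no match."""
    if username.endswith("rp") and username[:-2] in known_usernames:
        return username[:-2]
    return None

# Example usage with made-up usernames:
known = {"hobby_painter", "cat_lover99"}
print(matches_rp_pattern("hobby_painterrp", known))  # -> hobby_painter
print(matches_rp_pattern("cat_lover99", known))      # -> None
```

In practice this only catches one naming pattern out of many, and real bot hunting (as described in this comment) leans much more on behavioral tells than on names alone.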

Also, if you check its actual submissions, a karma-laden bot will often repost hobby content; then a second bot comes along, claims to have bought a T-shirt or mug with that content, and posts a malicious link; then a third bot poses as another redditor and replies "thanks, I just ordered one" to the second bot. Following those bots leads you to even more bots, and so on.

@XiELEd copying you in here.

[–] [email protected] 2 points 1 year ago (1 children)

It makes you wonder whether a ChatGPT-style bot could be automated to flag all these accounts. I'm sure Reddit could have tagged and deleted the lot of them if they wanted to.
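An automated flagger along those lines might first combine the cheap tells from this thread into a score, and only escalate high-scoring accounts to a human or an LLM for review. A minimal sketch, where every tell and threshold is an illustrative assumption rather than a real detector:

```python
def bot_score(comment):
    """Score a comment against simple tells mentioned in this thread.
    (Illustrative heuristics only, not a real classifier.)"""
    score = 0
    text = comment["text"].strip()
    if text == "💯":                        # posting the emoji by itself
        score += 2
    elif text and text[-1] not in ".!?\"'": # stops mid-sentence
        score += 1
    if comment["username"].endswith("rp"):  # suspicious username suffix
        score += 1
    return score

# Made-up comments; anything scoring 2+ gets flagged for review.
comments = [
    {"username": "hobby_painterrp", "text": "💯"},
    {"username": "real_user", "text": "Nice photo!"},
]
flagged = [c for c in comments if bot_score(c) >= 2]
print([c["username"] for c in flagged])  # -> ['hobby_painterrp']
```

Each tell alone is weak (plenty of humans post a bare emoji), which is why scoring several of them together before escalating makes more sense than acting on any single one.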

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

There must be an awful lot of them. Accounts get sold on third-party websites.