Fediverse
A community dedicated to fediverse news and discussion.
Fediverse is a portmanteau of "federation" and "universe".
Fake/bot accounts have always existed. How many times has a "YouTuber" run a "giveaway" in their comments section?
Yes, but you presumably had to go through a captcha to make each one, whereas here someone can spin up an instance and 'create' 1 million accounts immediately.
Did anyone ever claim that the Fediverse is somehow a solution to the bot/fake-vote problem, or even to brigading?
You mean to tell me that copying the exact same system that Reddit was using and couldn’t keep bots out of is still vuln to bots? Wild
Until we find a smarter way or at least a different way to rank/filter content, we’re going to be stuck in this same boat.
Who’s to say I don’t create a community of real people who are devoted to manipulating votes? What’s the difference?
The issue at hand is the post ranking system/karma itself. But we’re prolly gonna be focusing on infosec going forward given what just happened
I don't have experience with systems like this, but just as sort of a fusion of a lot of ideas I've read in this thread, could some sort of per-instance trust system work?
The more an instance interacts positively (posting, commenting, etc.) with main instance 'A', the more that instance's reputation score gets bumped up on instance A. Then use that score, together with the ratio of votes from that instance to the total number of votes, in some function that determines the weight of each vote cast (rough sketch below).
This probably isn't coherent, but I just woke up, and I also have no idea what I'm talking about.
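For what it's worth, here is a minimal sketch of that per-instance trust idea. The names (InstanceTrust, record_positive_interaction, vote_weight) and the saturating formula are made up for illustration; nothing like this exists in current federation software.

```python
from collections import defaultdict

class InstanceTrust:
    def __init__(self):
        # reputation this instance keeps for each remote instance it federates with
        self.reputation = defaultdict(float)

    def record_positive_interaction(self, instance, amount=1.0):
        # bump reputation when users from `instance` post/comment constructively here
        self.reputation[instance] += amount

    def vote_weight(self, instance, votes_from_instance, total_votes):
        # weight of one vote from `instance`, given how many of the post's
        # votes already come from that instance
        if total_votes == 0:
            return 0.0
        share = votes_from_instance / total_votes   # fraction of all votes
        rep = self.reputation[instance]
        saturation = rep / (1.0 + rep)              # 0..1, grows with reputation
        return saturation * (1.0 - share)           # dominant instances get discounted

trust = InstanceTrust()
trust.record_positive_interaction("lemmy.example", amount=5)
# a vote from an instance that already supplied 900 of 1000 votes counts for little
print(trust.vote_weight("lemmy.example", votes_from_instance=900, total_votes=1000))
# a vote from an instance with no reputation counts for nothing
print(trust.vote_weight("bots.example", votes_from_instance=10, total_votes=1000))
```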
Something like that already happened on Mastodon! Admins got together and marked instances as "bad". They made a list. And after a few months, everything went back to normal. This kind of self-organization is normal on the fediverse.
People may not like it, but a reputation system could solve this. Granted, it's not a silver bullet and can surely be abused itself, but it could help prevent something like this.
How could it work? Well, each server could retain a reputation score for each user it knows. Every up- or downvote is then modified by this value.
This will not solve the issue entirely, but it will make abuse harder (rough sketch below).
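As a rough illustration of that per-user version: each server keeps a score for every account it knows and multiplies that account's votes by it. The names (Server, reputation_of, apply_vote) and the 0.1 default for unknown accounts are assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    score: float = 0.0

@dataclass
class Server:
    # reputation this server keeps for every account it has seen federate in
    reputation: dict = field(default_factory=dict)

    def reputation_of(self, user):
        # unknown accounts start low, so a freshly spun-up bot barely counts
        return self.reputation.get(user, 0.1)

    def apply_vote(self, post, user, direction):
        # direction is +1 for an upvote, -1 for a downvote; scale it by reputation
        post.score += direction * self.reputation_of(user)

server = Server(reputation={"alice@lemmy.example": 1.0})
post = Post()
server.apply_vote(post, "alice@lemmy.example", +1)  # established user: full weight
server.apply_vote(post, "bot42@spam.example", +1)   # unknown account: a tenth of a vote
print(post.score)  # 1.1
```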
Ok, but what would the reputation score be based on that can't be manipulated or faked?
Well, you see, Kif, my strategy is so simple an idiot could have devised it: reputation is adjusted by "votes", so that users can up- or downvote one another.
Thus solving the problem, once and for all.
I wonder if it's possible (and not overly undesirable) to have your instance essentially put an import tax on other instances' votes. On the one hand, it's a dangerous direction for a free and equal internet; on the other, it's a way of allowing access to dubious communities/instances without giving them the power to overwhelm your users' feeds. In effect, the user gets the content of the fediverse, primarily curated by the community of their own instance.
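A toy sketch of what such an "import tax" could look like, with made-up instance names and arbitrary tariff values; it only illustrates the weighting, not how the tariffs would be chosen or kept up to date.

```python
# Per-origin "tariff" applied by the receiving instance to incoming votes.
DEFAULT_TARIFF = 0.5  # votes from unlisted remote instances count half

tariffs = {
    "home.example": 1.0,      # local votes count in full
    "friendly.example": 0.9,  # well-behaved peer
    "dubious.example": 0.05,  # still federated, but its votes barely move rankings
}

def taxed_vote(origin_instance, direction):
    # contribution of one vote (+1 or -1) from origin_instance after the tariff
    return direction * tariffs.get(origin_instance, DEFAULT_TARIFF)

# a flood of upvotes from the dubious instance is dampened rather than dominant
flood = sum(taxed_vote("dubious.example", +1) for _ in range(10_000))
local = taxed_vote("home.example", +1)
print(flood, local)  # roughly 500.0 and 1.0
```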
I would imagine this is the same with bans. I imagine there will eventually be a set of reputation-watchdog servers, which might be preferred over this whole "everyone follows the same modlog" approach. The concept of trusting everyone out of the gate seems a little naive.
I wonder if there's a machine learning technique that can be used to detect bot-laden instances.
I'm not a fan of up- and downvotes, partly but not only for the aforementioned reasons. Classic forums ran fine without them.
Classic forums still exist.
Voting does allow the cream to rise to the top, which is why Reddit was much better than a forum.
Honestly, I think part of the problem is that companies don't have an incentive to fight bots or spam: higher numbers of users and engagement make them look better to investors and advertisers.
I don't think it's that difficult a problem to solve. It should be quite possible to distinguish the behavior patterns of real users from those of bots.
We will see how the fediverse handles it.
Here’s an idea: adjust the weights of votes by how predictable they are.
If account A always upvotes account B, those upvotes don’t count as much—not just because A is potentially a bot, but because A’s upvotes don’t tell us anything new.
If account C upvotes a post by account B, but there was no a priori reason to expect it to based on C’s past history, that upvote is more significant.
This could take into account not just the direct interactions between two accounts, but how other accounts interact with each of them, whether they’re part of larger groups that tend to vote similarly, etc.
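One possible reading of that idea as code: weight each vote by its surprisal under a simple per-pair estimate of how likely the vote was. The Laplace-smoothed estimator and all names here are assumptions; a real system would also need decay, smoothing across accounts, and the group-level signals mentioned above.

```python
import math
from collections import defaultdict

# vote_history[(voter, author)] = [total votes cast, upvotes cast]
vote_history = defaultdict(lambda: [0, 0])

def record_vote(voter, author, is_upvote):
    h = vote_history[(voter, author)]
    h[0] += 1
    h[1] += 1 if is_upvote else 0

def vote_weight(voter, author, is_upvote):
    # weight a vote by how unpredictable it is given this voter's history with this author
    total, ups = vote_history[(voter, author)]
    p_up = (ups + 1) / (total + 2)          # Laplace-smoothed P(upvote)
    p = p_up if is_upvote else (1 - p_up)
    # surprisal: -log2(p); a near-certain vote carries almost no weight
    return -math.log2(p)

# Account A always upvotes account B: its next upvote is nearly worthless.
for _ in range(50):
    record_vote("A", "B", True)
print(round(vote_weight("A", "B", True), 3))  # about 0.028

# Account C has no history with B: its upvote carries the full baseline weight.
print(round(vote_weight("C", "B", True), 3))  # 1.0
```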
Thank you for this. I'd upvote you, but you've already taken care of that.