submitted 1 week ago by [email protected] to c/[email protected]

Every political thread is chock full of people being angry and unreasonable. I did some data mining, and most of the hate is coming from a very small percentage of the community, and the rest of the community is very consistent in downvoting them.

The problem is that even with human moderators enforcing a series of rules, most of those people are still in the comments making things miserable. So I made a bot to do it instead.

[email protected] is a bot that uses an algorithm similar to PageRank to analyze the Lemmy community and preemptively ban the roughly 1-2% of posters who consistently get a negative reaction. Take a look at an example of the early results. See how nice that is? It's just people talking, and when they disagree, they say things like "clearly that part is wrong" and "your additions are good information though."
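If it helps to picture the mechanism, here's a toy Python sketch of the final step: given a rank score per user from the trust analysis, cut off the bottom tail. This is just an illustration for this post, not the bot's actual code, and the `BAN_FRACTION` name and value are made up:

```python
BAN_FRACTION = 0.015  # roughly the 1-2% described above (illustrative)

def ban_list(ranks: dict[str, float]) -> set[str]:
    """Return the usernames in the bottom BAN_FRACTION by rank."""
    ordered = sorted(ranks, key=ranks.get)  # lowest-ranked users first
    cutoff = max(1, int(len(ordered) * BAN_FRACTION))
    return set(ordered[:cutoff])

print(ban_list({"alice": 0.9, "bob": 0.7, "carol": -0.4, "dave": 0.5}))
# → {'carol'}
```

The interesting part is how the rank scores get computed in the first place, which is what makes it different from just counting downvotes.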

It's too early to tell how well it will work on a larger scale, but I'm hopeful. So, welcome to my experiment. Let's talk politics without all the abusive people coming into the picture too. Please come in and test if this thing can work in the long run.

Pleasant Politics

[email protected]

all 37 comments
[-] [email protected] 38 points 1 week ago* (last edited 1 week ago)

I'm not interested in being polite to people who want to take away rights, promote discrimination, and try to overthrow elections.

The idea sounds great in theory, but seems like a bad idea with the massive rise in fascism.

[-] [email protected] 17 points 1 week ago

I completely agree with you on that. "Pleasant" might have been a misleading way for me to frame the community. As far as the bot is concerned, you're free to be as unfriendly to fascists as you want.

As a matter of fact, part of what I think is wrong with the current moderation model is the emphasis on "civility." I think you should be allowed to be unfriendly.

I'll give an example: I spent some time talking with existing moderators as I was tweaking and testing the bot, and we got into a discussion about two specific users. One of them the bot was banning; the other it wasn't. The moderator I was talking with said the bot was getting it backwards, because the first user was fine, while the second was getting into arguments and drawing a lot of user reports. I looked at what was going on and pointed out that the first user was posting disingenuous claims that drew hate and disagreement from almost the entire rest of the community and started big arguments that went nowhere. The second user was sometimes rude, but it was a small issue from the point of view of the rest of the community, and usually, I think, the people they were being rude to were in the wrong anyway.

The current moderation model leaves the first user alone, even if they want to post their disingenuous stuff ten times a day, and dings the second user because they are "uncivil." I think that's backwards. Of course if someone's being hostile to everyone, that's a problem, but I think a lot of bad behavior that makes politics communities bad doesn't fit the existing categories for moderation very well, and relying on volunteer moderators who are short on time to make snap judgements about individual users and comments is not a good approach to applying the rules even as they are.

So come in and be impolite to the fascists. Go nuts. You don't have to be pleasant in that sense. In fact, I think you'll probably have more freedom to do that here than in other communities.

[-] [email protected] 17 points 1 week ago* (last edited 1 week ago)

I think any experiment that could potentially filter out bad faith participants is at least worth a try. I participate in political discussions pretty infrequently, but when reading them I often see users jumping in with a ridiculous viewpoint that they are completely unwilling to discuss or hear any flaws about. That's not conversation, that's trying to shout others down, and I will be interested to see if that kind of behavior gets caught by your bot.

[-] [email protected] 8 points 1 week ago

I know exactly what you mean. If I had to pick one type of comment that the bot is designed to ban for, that would be it. It turns out to be pretty easy to do, too, because the community usually downvotes those comments very severely, even though the current moderation rules allow them, even from someone posting them 20 times a day.

Pick a name of someone you've seen do that, search the modlog on slrpnk.net, and I think you will find them banned by Santa. And, if they're not, DM me their username, because there might be some corner case in the parameter tuning that I have missed.

[-] [email protected] 8 points 1 week ago* (last edited 1 week ago)

Disturbing.

Algorithmically censored and sanitised political speech.

That's gonna be a no from me. Frankly, I'd like to see such a community banned for the harm it's going to cause. It's bad enough we have that nonsense on other social media.

[-] [email protected] 7 points 1 week ago

I know this will ring hollow, considering I am (predictably) on the autoban list, but:

I don't know how this isn't a political echo-chamber speedrun, any%. People downvote posts and comments for a lot of reasons, and a big one (maybe the biggest in a political community) is general disagreement/dislike, or even sheer abstract mistrust. This is basically just crowdsourced, vibes-based moderation.

Then again, I think communities are allowed to moderate/gatekeep their own spaces however they like. I see little difference between this practice and .ml or lemmygrad preemptively banning users based on comments made in other communities. In fact, I expect the same bot deployed on .ml or hexbear would end up banning the most impassioned centrist users from .world and kbin, and it would accelerate the silo-ing of the fediverse if applied at scale. Each community has a type of user it finds the most disagreeable, and the longer this automod is allowed to run, the more each space will end up being defined by that perceived opposition.

Little doubt I would find the consensus-view unpalatable in a space like that, so no skin off my nose.

[-] [email protected] 8 points 1 week ago

I looked at the bot's judgements about your user. The issue isn't your politics. Anti-center or anti-Western politics are the majority view on Lemmy, and your posts about your political views get ranked positively. The problem is that somehow you wind up in long heated arguments with "centrists" which wander away from the topic and get personal, where you double down on bad behavior because you say that's the tactic you want to employ to get your point across. That's the content that's getting ranked negatively, and often enough to overcome the weight of the positive content.

If Lemmy split into a silo that was the 98.6% of users that didn't do that, and a silo of 1.4% of users that wanted to do that, I would be okay with that outcome. I completely agree with your concern in the abstract, but that's not what's happening here.

[-] [email protected] 2 points 1 week ago

The problem is that somehow you wind up in long heated arguments with “centrists” which wander away from the topic and get personal

I'm not surprised I was identified by the bot, but it's worth pointing out that ending up in heated arguments happens because people disagree. Those things are related. If someone is getting into lots of lengthy disagreements that are largely positive but devolve into the unwanted behavior, doesn't that at least give legitimacy to the concern that dissenting opinions are being penalized simply because they attract a lot of impassioned disagreement? Even if both participants in that disagreement are penalized, that just means any disagreement that may already be present isn't given opportunity to play out. Your community would just be lots of people politely agreeing not to disagree.

I have no problem with wanting to build a community around a particular set of acceptable behaviors; I don't even take issue with trying to quantify that behavior and automate it. But we shouldn't pretend that doing so doesn't have unintended polarizing consequences.

A community that allows for disagreement but limits argumentation isn't neutral: it gives preference to status-quo and consensus positions by limiting the types of dissent allowed. If users aren't able to resolve conflicting perspectives through argumentation, then the consensus view ends up effectively uncontested. That isn't a problem if the intent of the community is to enforce decorum so that contentious argumentation happens elsewhere, but if a majority of communities adopt a similar moderation policy, then of course it is going to result in siloing.

I might also point out that an argument drawn out over dozens of comments and ending in the 'unwanted' behavior you're looking for isn't all that visible to most users; if you're someone trying to avoid 'jerks', then I would think the nested position/visibility of that activity should matter. I'm not sure how your bot weighs activity against that visibility, but I think even that doubt calls the effectiveness of this strategy into question.

Again, I'm not challenging the specific moderation choices the bot has made, just pointing out the problem of employing this type of moderation on a large scale. As it has been employed in this particular community, it's interesting.

[-] [email protected] 7 points 1 week ago

Do you mind if I give some examples? What you're saying is valid in the abstract, but I think pointing out concrete examples of what the bot is reacting to will shed some light on what I'm talking about.

[-] [email protected] -1 points 1 week ago

You're free to provide examples, but like I said it's not the specific moderation choices that are the problem, it's using public sentiment as a core part of that determination.

[-] [email protected] 3 points 1 week ago

Here are examples of things you got positive rank for, politics and argumentation:

Here are examples of things you got negative rank for, not directly political interpersonal squabbling:

Maybe this is harsh, but I think this is a good decision by the bot. The first list is fine. Most of your political views are far from unpopular on Lemmy. The thing is that you post a lot more of the squabbling content than the political content. You said you're being unpleasant on purpose, don't plan to stop, and that people should probably block you. I feel okay about excluding that from this community.

If in the future you change your mind about how you want to converse, you can send a comment or DM. We can talk about it, make sure you're not being targeted unfairly, but in the meantime this is completely fair.

[-] [email protected] 1 points 1 week ago

I already said I don't take issue with any one decision, I care about the macro social implications.

[-] [email protected] 1 points 1 week ago

I made this system because I, also, was concerned about the macro social implications.

Right now, the model in most communities is banning people with unpopular political opinions or who are uncivil. Anyone else can come in and do whatever they like, even if a big majority of the community has decided they're doing more harm than good. Furthermore, when certain things get too unpleasant to deal with on any level anymore, big instances will defederate from each other completely. The macro social implications of that on the community are exactly why I want to try a different model, because that one doesn't seem very good.

You seem to be convinced ahead of time that this system is going to censor opposing views, ignoring everything I've done to address the concern and indicate that it is a valid concern. Your concern is noted. If you see it censoring any opposing views, please let me know, because I don't want it to do that either.

[-] [email protected] 0 points 1 week ago

Right now, the model in most communities is banning people with unpopular political opinions or who are uncivil. Anyone else can come in and do whatever they like, even if a big majority of the community has decided they’re doing more harm than good.

You don't need a social-credit tracking system to auto-ban users if a big majority of the community recognizes a user as problematic: you could ban them manually, use a ban-voting system, use the bot to flag potentially problematic users to assist in manual-ban determinations, or hand out automated warnings... Especially if you're only looking at the 1-2% of problematic users, is that really so many that you can't review them independently?

Users behave differently in different communities... Preemptively banning someone for activity in another community is already problematic because it assumes they'd behave the same way here, but now it's for activity that is ill-defined and aggregated over many hundreds or thousands of comments. There's a reason each community has its rules clearly spelled out in the sidebar: each has different expectations, and users need those expectations spelled out if they're to have any chance of following them.

I'm sure your ranking system is genius and perfectly tuned to the type of user you find the most problematic - your data analysis genius is noted. The problem with automated ranking systems isn't that they're bad at what they claim to be doing, it's that they're undemocratic and dehumanizing and provide little recourse for error, and when applied at large scales those problems become amplified and systemic.

You seem to be convinced ahead of time that this system is going to censor opposing views, ignoring everything I’ve done to address the concern and indicate that it is a valid concern.

That isn't my concern with your implementation; it's that it limits the ability to defend opposing views when they occur. Consensus views don't need to be defended against aggressive opposition, because they're already presumed to be true; a dissenting view will nearly always be met with hostile opposition (especially on a charged political topic), and by penalizing defenses of those positions you allow consensus views to remain unopposed. I don't particularly care to defend my own record, but since you provided them, it's worth pointing out that all of the penalized examples you listed from my account were in response to hostile opposition and character accusations. The positively ranked comments were within the consensus view (like you said), so of course they rank positively. I'm also tickled that one of them was a comment critiquing exactly the kind of arbitrary moderation policy you're defending now.

If you see it censoring any opposing views, please let me know, because I don’t want it to do that either.

Even if I wasn't on the ban list and could see it I wouldn't have any interest in critiquing its ban choices because that isn't the problem I have with it.

[-] [email protected] 7 points 1 week ago

Interesting concept

[-] [email protected] 7 points 1 week ago

Really cool idea! I agree: on the whole, most people are incredibly nice and will go out of their way to explain their reasoning. But a small, loud group seems to crop up in political discussions. It's interesting because it's not always that they don't know they're being rude; rather, they know and are proud of it because of their beliefs.

[-] [email protected] 4 points 1 week ago

It was remarkable, when I started looking at it, how small the population of users is that seem to be causing almost all of the problems. It was also remarkable how little the existing moderation approach is doing to rein them in.

[-] [email protected] 6 points 1 week ago

At first I read it as "peasant"

Disappointed I read it wrong

[-] [email protected] 5 points 1 week ago

I've already declined two reports requesting that I take moderator action against content that's people directly going out into their community and helping get things done, because that is "not politics." People definitely seem to want their mods to be vigorously engaged in enforcing the boundaries on the stuff people are allowed to say.

As far as my take on it, we can have overlap between the peasant politics and the pleasant politics. The community was for the latter, but the former sounds great, too.

[-] [email protected] 4 points 1 week ago

I did some data mining

Can you share the model?

[-] [email protected] 11 points 1 week ago

The code for the bot is open source. It's not an AI model. It's based on a classical technique for analyzing networks of relative trust and turning them into a master list of community trust, combined with a lot of studying its output and tweaking parameters. The documentation is sparse, but if someone is skilled in these things they can probably take a few hours to study it and its conclusions and see what's going on.

If you're interested in looking at it for real, I can write some better documentation for the algorithm parts, which will probably be necessary to make sense of it beyond the surface level.
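In the meantime, if you just want a flavor of the general technique, here's a toy Python sketch of PageRank-style trust propagation over a vote graph. To be clear, this is my simplified illustration of the idea, not the bot's actual implementation:

```python
from collections import defaultdict

def trust_ranks(votes, users, damping=0.85, iters=50):
    """votes: iterable of (voter, target, value) with value +1 or -1.
    Returns a trust rank per user; users whom the community consistently
    downvotes end up with a negative rank."""
    out_degree = defaultdict(int)
    for voter, _, _ in votes:
        out_degree[voter] += 1
    rank = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        # every user keeps a small baseline share of rank...
        new = {u: (1 - damping) / len(users) for u in users}
        for voter, target, value in votes:
            # ...and each vote passes on a share of the voter's own rank;
            # downvotes pass a negative share, so rank can drop below zero
            new[target] += damping * value * rank[voter] / out_degree[voter]
        rank = new
    return rank
```

The key property is that a vote counts in proportion to the rank of the account casting it, so standing in the community, not raw vote totals, determines influence.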

[-] [email protected] 2 points 1 week ago

Thank you. I'm personally more interested in the statistics used in the parameter search, but given that it's Python, I'm checking it out to see what I can learn.

[-] [email protected] 4 points 1 week ago

Don't let the Python fool you. It is not simple Python. I'll try to add some comments later on to make it clearer what's going on.

For tuning parameters, it was complicated. Mostly, I did spot-checks on random users at different ranking levels, to check that the boundary for banning matched up pretty well with what I thought was the boundary of an acceptable level of jerkishness. That, combined with deeper dives into which comments had made what contributions to a user's overall ranking.

On top of that, I talked with existing moderators, looked over the banlists, and brought up users where they thought the bot was getting it wrong. There were a lot of corner cases and parameter fixes to address them. Sometimes it was increasing SMOOTHING_FACTOR to make users more equal in rank with each other, after we found some user who was banned because of one bad interaction with a high-rank person who downvoted them. Sometimes it was changing parameters that control how easy it is to overcome a few negatively-ranked postings by being generally positive in the rest of your postings.

There are always users for whom the right answer is a matter of debate or opinion, but as long as the bot isn't making decisions that are clearly wrong, I think it's doing pretty well.
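To illustrate just the smoothing part: the formula below is a stand-in I'm using for explanation, not the bot's actual update rule, but the effect is the same in spirit. A higher factor pulls every user's rank toward the community average, so one bad interaction with a single high-rank account matters less:

```python
SMOOTHING_FACTOR = 0.3  # illustrative value, not the bot's setting

def smooth(ranks: dict[str, float]) -> dict[str, float]:
    """Blend each user's rank toward the community mean; a higher
    SMOOTHING_FACTOR makes users more equal in rank with each other."""
    mean = sum(ranks.values()) / len(ranks)
    return {u: (1 - SMOOTHING_FACTOR) * r + SMOOTHING_FACTOR * mean
            for u, r in ranks.items()}
```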

You can look over some places where I talked with people about the bot's opinion of their user, in this post and this post. I don't want to publicly do those breakdowns for people who haven't agreed to have it done to them, but that might give you an idea of how the tuning went. What I did to tune the parameters was the same type of thing as I showed in those comments, just a whole lot more of it.

[-] [email protected] 4 points 1 week ago

I added an explanation of the details of how it works to the source file that implements the main rank algorithm. The math behind it is not simple, but it's also not rocket science, if you have some data science abilities and want to check it out.

[-] [email protected] 3 points 1 week ago

What are its rules? I don't see anything in the sidebar.

[-] [email protected] 2 points 1 week ago

It's in the sidebar:

Post political news, or your own opinions or discussion. Anything goes. No personal attacks, no bigotry, no spam. Those will get a manual temporary ban.

[-] [email protected] 3 points 1 week ago

Have you prepared for downvote manipulation by bots? Quora incentivizes it by treating downvoted answers differently, so now the site may have as many bots as people.

[-] [email protected] 3 points 1 week ago

It's difficult. A downvote from an account with no history does nothing. Your bot has to post a lot of content first to attract upvotes from genuine accounts. Then once you've accumulated some rank, you can start giving upvotes or downvotes in bulk to the accounts you want to manipulate. It's impossible to completely prevent that, but you have to do it a lot to have an impact.

I think this model is more resistant to trickery than it would seem, but it's not completely resistant. I do expect some amount of trickery that will then need counter-trickery. On the other hand, the problem of tricking the system also exists in the current moderation model. You don't have to outwit the system to get your content posted or ban your enemy if it's trivial to flood the comment section with your content from alt accounts and drown them out instead. I don't know for sure that something like that is happening, but it wouldn't surprise me if that was one reason why there are so many obnoxiously vocal people.
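Here's a toy sketch of why fresh accounts are powerless (again, my illustration rather than the bot's real code): a vote counts in proportion to the voter's own rank, and an account with no history has none:

```python
def weighted_score(votes, ranks):
    """Sum the votes on a piece of content, each vote weighted by the
    voter's rank; voters with no history default to zero weight."""
    return sum(value * ranks.get(voter, 0.0) for voter, value in votes)

ranks = {"regular": 0.2, "veteran": 0.8}
organic = [("regular", 1), ("veteran", 1)]
brigade = [(f"sock{i}", -1) for i in range(10)]  # ten fresh downvote bots

# the brigade changes nothing: both scores come out to 1.0
print(weighted_score(organic, ranks), weighted_score(organic + brigade, ranks))
```

To move a score, the sockpuppets would first have to earn rank from genuine accounts, which is exactly the slow, visible work described above.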

[-] [email protected] 1 points 1 week ago

Why is Nixon the thumbnail for it? Especially as it's Futurama's version, who is anything but pleasant.

[-] [email protected] 0 points 1 week ago

Look at him, he's so happy.

Maybe it should be Bernie smiling, instead? I didn't want to be openly partisan.

[-] [email protected] 1 points 6 days ago

Use clip art of some debate podiums or something?

[-] [email protected] 4 points 1 week ago

The point is if you want a pleasant community don't use any polarizing figures

[-] [email protected] -2 points 1 week ago

I'm interested and curious to see what happens with such a setup. I wonder if I will end up on the ban list as a result of mostly participating in the conservative community and being contrary.

[-] [email protected] 3 points 1 week ago

You're not banned or even close to it. The ban list is surprisingly lenient in terms of people's differing political views. You have to habitually make enemies of a lot of the people in the comments, one way or another, with a big fraction of what you post. Most people don't do that, wherever on the political spectrum they might fall.

Whether that's a good idea or not remains to be seen. I had some surprises today.

this post was submitted on 06 Jul 2024
68 points (89.5% liked)
