[–] [email protected] 18 points 5 months ago* (last edited 5 months ago) (1 children)

Most people already had a hard enough time telling the difference between man-made fact and fiction; now they have to tell AI fact from AI fiction on top of that.

[–] [email protected] 6 points 5 months ago (1 children)

Well, AI "fact", in this use, has always been made up of a combination of man's fact and fiction. Nobody's been smart enough to make an AI that can reliably separate the two, to my knowledge.

[–] [email protected] 1 points 5 months ago (1 children)

It's all about cleaning datasets. For forecasting models, you occasionally need to remove certain historical data to increase accuracy.

The same could work here, but obviously at a significantly larger scale, crossing every interest and discipline.
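
For the forecasting case, that kind of pruning looks something like this. A minimal sketch; the DataFrame, column names, and cutoff window are all made up for illustration:

```python
import pandas as pd

# Hypothetical daily series containing a known-distorted stretch
# (say, a sensor outage or a one-off market shock).
df = pd.DataFrame({
    "date": pd.date_range("2020-01-01", periods=1500, freq="D"),
    "value": range(1500),
})

# Drop the distorted window before fitting, so the model never trains on it.
bad_start, bad_end = pd.Timestamp("2020-03-01"), pd.Timestamp("2020-06-01")
keep = (df["date"] < bad_start) | (df["date"] > bad_end)
clean = df.loc[keep].reset_index(drop=True)

print(f"kept {len(clean)} of {len(df)} rows for training")
```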

I believe the solution is curated datasets, with the top members of the applicable field determining validity, or a Stack Overflow-style model.

We should basically have a "clean" copy of the internet that is always 3-6 months behind, since it only gets added to with vetted, quality data.
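
Mechanically, that "clean copy" could be as simple as a holding queue with an age gate plus expert sign-off. A toy sketch; the lag, approval count, and names here are all invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

LAG = timedelta(days=90)   # the 3-6 month delay before anything goes in
REQUIRED_APPROVALS = 3     # sign-offs needed from trusted experts

@dataclass
class Submission:
    content: str
    submitted_at: datetime
    approvals: set = field(default_factory=set)  # ids of approving experts

def promote(pending, clean_corpus, now=None):
    """Move submissions that are old enough AND vetted by enough
    trusted experts into the lagged 'clean' copy of the internet."""
    now = now or datetime.now(timezone.utc)
    still_pending = []
    for sub in pending:
        aged = now - sub.submitted_at >= LAG
        vetted = len(sub.approvals) >= REQUIRED_APPROVALS
        (clean_corpus if aged and vetted else still_pending).append(sub)
    return still_pending
```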

[–] [email protected] 1 points 5 months ago

> I believe the solution is curated datasets, with the top members of the applicable field determining validity, or a Stack Overflow-style model.

I think you're on the right track here, but done this way it will ultimately retain the same flaws.

Personally, I believe the models should be open, with all interested parties having varying degrees of influence over the accepted truth. That's going to be complicated in itself.

By limiting it to "trusted people", an attacker only has to corrupt enough of them, and eventually you end up with the same shitty problems, but with bots too.
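
To make the difference concrete, here's a toy weighted-vote sketch (every name, weight, and threshold below is invented): with a small trusted panel, two corrupted members flip the outcome; with influence spread across more parties, the same two corrupted voters get diluted.

```python
def accepted(votes, weights, threshold=0.5):
    """A claim is accepted if the trust-weighted share of 'true' votes
    exceeds the threshold. votes: {voter: bool}, weights: {voter: float}."""
    total = sum(weights[v] for v in votes)
    yes = sum(weights[v] for v, vote in votes.items() if vote)
    return yes / total > threshold

# Small trusted panel: corrupting 2 of 3 experts decides the outcome.
panel_votes = {"alice": False, "bob": False, "carol": True}
panel_weights = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
print(accepted(panel_votes, panel_weights))  # False: claim rejected

# Broader pool with graded influence: the same two corrupted voters
# are outweighed by many smaller stakeholders (20 * 0.2 = 4.0 vs 2.0).
pool_votes = {f"user{i}": True for i in range(20)}
pool_votes.update({"alice": False, "bob": False})
pool_weights = {f"user{i}": 0.2 for i in range(20)}
pool_weights.update({"alice": 1.0, "bob": 1.0})
print(accepted(pool_votes, pool_weights))  # True: corruption no longer decides
```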