
cross-posted from: https://lemmygrad.ml/post/5324777

Interesting...

How does a country like Vietnam or China tackle AI?

Frankly, AI might have its uses, and I've found it useful here and there, but perhaps the cons outweigh the pros...


Here's a good & readable summary paper to pin your critiques on


Abstract: Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.


Large language models, like advanced chatbots, can generate human-like text and conversations. However, these models often produce inaccurate information, which is sometimes referred to as "AI hallucinations." Researchers have found that these models don't necessarily care about the accuracy of their output, which is similar to the concept of "bullshit" described by philosopher Harry Frankfurt. This means that the models can be seen as bullshitters, intentionally or unintentionally producing false information without concern for the truth. By recognizing and labeling these inaccuracies as "bullshit," we can better understand and predict the behavior of these models. This is crucial, especially when it comes to AI companionship, as we need to be cautious and always verify information with informed humans to ensure accuracy and avoid relying solely on potentially misleading AI responses.

by Llama 3 70B

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/technology by /u/ShadowBannedAugustus on 2024-06-15 15:24:37+00:00.
