this post was submitted on 20 Sep 2023
528 points (93.3% liked)

Technology


The majority of U.S. adults don't believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

[–] [email protected] 57 points 1 year ago (6 children)

Most U.S. adults also don't understand what AI is in the slightest. What do the opinions of people who are not in the slightest educated on the matter affect lol.

[–] [email protected] 60 points 1 year ago (4 children)

"What do the opinions of people who are not in the slightest educated on the matter affect"

Judging by the elected leaders of the USA: quite a lot, in fact.

[–] [email protected] 47 points 1 year ago (9 children)

You don’t have to understand how an atomic bomb works to know it’s dangerous

[–] [email protected] 26 points 1 year ago (6 children)

Prime example. Atomic bombs are dangerous and they seem like a bad thing. But then you realize that, counter to our intuition, nuclear weapons have created peace and security in the world.

No country with nukes has been invaded. No world wars have happened since the invention of nukes. Countries with nukes don't fight each other directly.

Ukraine had nukes, gave them up, and was promptly invaded by Russia.

Things that seem dangerous aren't always dangerous. Things that seem safe aren't always safe. More often though, technology has good sides and bad sides. AI does and will continue to have pros and cons.

[–] [email protected] 42 points 1 year ago (1 children)

Atomic bombs are also dangerous because if someone ends up launching one by mistake, all hell is gonna break loose. This has almost happened multiple times:

https://en.wikipedia.org/wiki/List_of_nuclear_close_calls

We've just been lucky so far.

And then there are questionable state leaders who may even use them willingly. Like Putin, or Kim, maybe even Trump.

[–] [email protected] -2 points 1 year ago

…and the development and use of nuclear power has been one of the most important developments in civil infrastructure in the last century.

Nuclear isn’t categorically free from the potential to harm, but it can also do a whole hell of a lot for humanity if used the right way. We understand it enough to know how to use it carefully and safely in civil applications.

We’ll probably get to the same place with ML… eventually. Right now, everyone’s just throwing tons of random problems at it to see what sticks, which is not what one could call responsible use - particularly when outputs are used in a widespread sense in production environments.

[–] [email protected] 21 points 1 year ago* (last edited 1 year ago)

If you're from one of the countries with nukes, of course you'll see it as positive. For the victims of the nuke-wielding countries, not so much.

[–] [email protected] 11 points 1 year ago

That’s a good point, however just because the bad thing hasn’t happened yet doesn’t mean it won’t. Everything has pros and cons; it’s a matter of whether or not the pros outweigh the cons.

[–] [email protected] 4 points 1 year ago

I don't disagree with your overall point, but as they say, anything that can happen will happen. I don't know when: tomorrow, in 50 years, in 1000 years... but eventually nuclear weapons will be used in warfare again, and it will be a dark time.

[–] [email protected] 0 points 1 year ago

"No world wars have happened since the invention of nukes"

Except the current world war.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

You need to understand to correctly classify the danger though.

Otherwise you make stupid decisions such as quitting nuclear energy in favor of coal because of an incident like Fukushima, even though that incident caused just a single casualty due to radiation.

[–] [email protected] -3 points 1 year ago (1 children)

I'm over here asking chatGPT for help with a pandas dataframe and loving every minute of it. At what point am I going to feel the effects of nuclear warfare?
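For reference, the kind of dataframe question I mean might look like this (a minimal sketch; the data and column names are made up for illustration):

```python
import pandas as pd

# Toy dataframe; columns and values are invented for the example.
df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "sales": [100, 200, 50, 25],
})

# A typical "help me with pandas" ask: total sales per region.
totals = df.groupby("region")["sales"].sum()
print(totals)
```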

[–] [email protected] 19 points 1 year ago (1 children)

I’m confused how this is relevant. Just pointing out this is a bad take, not saying nukes are the same as AI. chatGPT isn’t the only AI out there btw. For example NYC just allowed the police to use AI to profile potential criminals… you think that’s a good thing?

[–] [email protected] -3 points 1 year ago (1 children)

Sounds like NYC police are the problem in that scenario.

[–] [email protected] 2 points 1 year ago (1 children)

Yeah sure “guns don’t kill people, people kill people” is an outrageous take.

[–] [email protected] -1 points 1 year ago

The take is "let's not forget to hold people accountable for the shitty things they do." AI is not a killing machine. Guns aren't particularly productive.

[–] [email protected] -3 points 1 year ago (1 children)

You chose an analogy with the most limited scope possible, but sure, I'll go with it. To understand exactly how dangerous an atomic bomb is without just looking up Hiroshima, you need to have at least some knowledge on the subject; you'd also have to understand all the nuances, etc. The thing about AI is that most people haven't a clue what it is, how it works, or what it can do. They just listen to the shit their Telegram-loving uncle spewed at the family gathering. A lot of people think AI is fucking sentient lmao.

[–] [email protected] 3 points 1 year ago (1 children)

I don’t think most people think AI is sentient. In my experience, the people who think that are the ones who believe they’re the most educated, saying stuff like “neural networks are basically the same as a human brain.”

[–] [email protected] -1 points 1 year ago

You don't think so, yet a software engineer from Google, Blake Lemoine, thought LaMDA was sentient. He took a lot of idiots down with him when he went public with those claims. Not to mention the movies that were made with the premise of sentient AI.

Your anecdotal experience and your feelings don't in the slightest change the reality that there are tons of people who think AI is sentient and will somehow start some fucking robo revolution.

[–] [email protected] 5 points 1 year ago (1 children)

Because they live in the same society as you, and they get to decide who goes to jail as much as you do

[–] [email protected] 1 points 1 year ago (2 children)

Nice argument you made there. We don't decide who goes to jail; a judge does that, someone who studied law.

[–] [email protected] 0 points 1 year ago (1 children)

Are you familiar with juries?

[–] [email protected] -1 points 1 year ago (1 children)

No, I'm from a country where the jury all studied law and isn't 64-year-old Margereta who wants some drama to tell at her knitting and book clubs.

[–] [email protected] 1 points 1 year ago (1 children)

There are a lot more countries than yours, believe it or not, and some of them don’t have the same justice system. Do people in your country have the right to vote? Same sentiment: do you think that’s a stupid system?

[–] [email protected] -1 points 1 year ago

Your first argument can be used against you, lmao. Your second argument is a strawman. Good job.

[–] [email protected] 4 points 1 year ago (1 children)

Well, being a snob about it doesn't help either. If all the average Joe knows about AI is what Google or OpenAI pushed to corporate media, that shouldn't be where the conversation ends.

[–] [email protected] -2 points 1 year ago

The average Joe can have their thoughts on it all they want, but their opinions on the matter aren't really valid or of any importance. AI is best left to the people who have deep knowledge of the subject, just as nuclear fusion is best left to scientists studying the field. I'm not going to tell average Joe the mechanic that I think the engine he just rebuilt might blow up, because I have no fucking clue about it. Sure, I have some very basic knowledge; that's pretty much where it ends, though.

[–] [email protected] 3 points 1 year ago

You can not know the nuanced details of something and still be (rightly) sketched out by it.

I know a decent amount about the technical implementation details, and that makes me trust its use in (what I perceive as) inappropriate contexts way less than the average layperson.

[–] [email protected] 0 points 1 year ago (5 children)

What a terrible thing to say. They're human beings, so I hope they matter to you.
