this post was submitted on 12 Jul 2023
25 points (85.7% liked)
Singularity | Artificial Intelligence (ai), Technology & Futurology
About:
This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. Basically, a futurology sublemmy centered around AI, but not limited to AI only.
Rules:
- Posts that break the rules, and whose posters don't bring them into compliance after being told which rules they break, will be deleted no matter how much engagement they got, and then reposted by me in a way that follows the rules. I will wait a maximum of 2 days for the poster to comply before doing this.
- No Low-quality/Wildly Speculative Posts.
- Keep posts on topic.
- Don't make posts with link/s to paywalled articles as their main focus.
- No posts linking to reddit posts.
- Memes are fine as long as they are high quality and/or can lead to serious on-topic discussions. If we end up having too many memes, we will create a meme-specific singularity sublemmy.
- Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
- Please be respectful to each other.
- No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
- (Rule implemented 30.06.2023) Don't make posts with link/s to tweets as their main focus. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a Twitter account just so they can see some news.
- No AI-generated images/videos unless their role is to showcase new advancements in generative technology that are no older than 1 month.
- If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".
Related sublemmies:
[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)
Note:
My posts on this sub are currently VERY reliant on getting info from r/singularity and other subreddits on reddit. I'm planning to eventually make a list of sites that write/aggregate the kind of news this community is about, so we can get news faster and not rely on reddit as much. If you know any good sites, please dm me.
you are viewing a single comment's thread
You have no idea what you're talking about. AI is a black box right now: we understand how it works in broad strokes, but we can't properly control it, and it still exhibits a lot of unintended behavior, like chatbots sometimes being aggressive or insulting you. Chatbots like GPT try to get around this with a million filters, but the point is that the underlying AI doesn't behave properly. Mix that with superintelligence and you can have an AI that does random things based on whatever it felt like doing. This is dangerous. We're not asking to stop AI development; we're asking to do it more responsibly and follow proper AI ethics, which a lot of companies seem to be starting to ignore in favor of pushing out products faster.
And then you say:
> Like if there was a way to control it?
Also:
> So you are saying that AI is being pushed without any testing?
I have a pretty solid opinion of Eliezer Yudkowsky. I've read material that he's written in the past, and he's not bullshitting there; it's well thought through.
I haven't watched the current video, but from what I've read from him in the past, Yudkowsky isn't an opponent of developing AI. He's pointing out that there are serious risks that need addressing.
It's not as if there are two camps regarding AI, one "everything is perfect" utopian and the other Luddite and "we should avoid AI".
EDIT: Okay, I went through the video. That's certainly a lot blunter than he normally is. He's advocating for a global ban specifically on developing superintelligent AI until we do have a consensus on dealing with it, with monitoring of AI development in the meantime; he's talking about countries being willing to go to war with countries that are developing one, so his answer would be "if Iran is working on a superintelligent AI, you bomb them preemptively".
EDIT2:
The major point that Yudkowsky has raised in his past work is that it is likely quite difficult to constrain what AI can do.
Just because we developed an AI does not mean that it is trivial for us to place constraints on it that will hold as it evolves, as we will not be able to understand the systems that we will be trying to constrain.
Last week, lemmy had a serious security exploit involving cross-site scripting. The authors of that software wrote (or at least committed) the code in question. Sure, in theory, if they had perfect understanding of all of the implications of every action that they took, they would not have introduced that security hole -- but they didn't. Just being the author doesn't mean that the software necessarily does what they intend, because even today, translating intent to functionality is not easy.
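To make that concrete, here's a minimal, purely illustrative sketch -- not the actual Lemmy code or its vulnerability, just an assumed example -- of how an author can introduce a cross-site scripting hole without meaning to. The intent is "show the user's name"; the functionality is "interpret whatever markup the user typed":

```typescript
// Purely illustrative sketch -- not the actual Lemmy code or vulnerability.
// Intent: greet the user by their display name.
// Actual behavior: whatever HTML is in displayName gets interpreted,
// so a hostile name becomes an injection point.
function renderGreeting(displayName: string): string {
  return `<div class="greeting">Hello, ${displayName}!</div>`;
}

// A crafted "display name" turns the greeting into a script payload.
const hostileName = `<img src=x onerror="alert('xss')">`;
document.body.innerHTML = renderGreeting(hostileName);

// A safer version escapes user input first -- but nothing forces the
// author to remember to do that, which is exactly the point above.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}
document.body.innerHTML = renderGreeting(escapeHtml(hostileName));
```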
A self-improving AI is going to be something that we will be very far removed from in terms of how it ultimately winds up operating; it will be much more complex than a human is.
Programmers do create software with bugs today: an infinite loop, or software that allocates all the memory on a computer. But systems today (mostly) operate in constrained environments, where they are easy to kill off. If you look at, say, DARPA's autonomous vehicle challenges, where that is not the case, the robots are required to have an emergency stop button that permits them to be killed remotely in case they start doing something dangerous.
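For what it's worth, the kind of bugs I mean are as mundane as these (assumed toy examples, not taken from any real system):

```typescript
// Assumed toy examples of author intent diverging from actual behavior.

// Intent: count down to zero. Actual behavior: an infinite loop,
// because the counter is never decremented.
function countdown(n: number): void {
  while (n > 0) {
    console.log(n);
    // missing: n -= 1;
  }
}

// Intent: cache results for speed. Actual behavior: unbounded memory
// growth, because entries are never evicted.
const cache: number[][] = [];
function remember(values: number[]): void {
  cache.push(values); // keeps every array forever; eventually exhausts memory
}
```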
But a superintelligent AI would likely not be something that is easy to contain or constrain. If it understands its emergency stop button and decides that the button conflicts with its own goals, it is not at all clear that we have the ability to keep it from defeating such a mechanism -- or to keep it from manipulating us into doing so. And the damage that a self-replicating/self-improving AI could do is potentially much greater than what a DARPA-style out-of-control autonomous armored vehicle could do. The vehicle might run over a few dozen people before it runs out of fuel, but its nature limits the degree to which it can go wrong.
We didn't have an easy time purging the Morris Internet Worm back in 1988, because our immediate response -- cutting the links that sites had to the Internet to block more instances of the worm from reaching their systems -- crippled our own infrastructure. That took mailing lists offline and took down Usenet and finger -- which sysadmins used to communicate network status and to find out how to contact other people via the phone system -- and that was a simple worm in an era much less dependent on the Internet. It wasn't self-improving or intelligent, and its author even tried -- without much success, as we'd already had a lot of infrastructure go down -- to tell people how to disable it some hours after it started taking the Internet out.
I am not terribly sanguine about our ability to effectively deal with a system that is all of that plus a whole lot more.
I'll also add that I'm not actually sure that Yudkowsky's suggestion in the video -- monitoring labs with massive GPU arrays -- would be sufficient if one starts talking about self-improving intelligence. I am quite skeptical that the kind of parallel compute capacity used today is truly necessary for the tasks we're doing -- rather, I suspect we need it because we are doing things inefficiently, because we do not yet understand how to do them efficiently. True, your brain works in parallel, but it is also vastly slower -- your brain's neurons fire at maybe 100 or 200 Hz, whereas our computer systems run with GHz clocks. I would bet that, if we had figured out the software side, a CPU in a PC today could act as a human does.
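For a rough sense of scale, a back-of-the-envelope comparison (both figures are rough assumptions, not measurements):

```typescript
// Back-of-the-envelope comparison of the speeds mentioned above.
const neuronRateHz = 200;   // upper end of the ~100-200 Hz estimate
const cpuClockHz = 3e9;     // a typical ~3 GHz desktop core

const ratio = cpuClockHz / neuronRateHz;
console.log(`A GHz-class core cycles ~${ratio.toExponential(1)}x faster than a neuron fires.`);
// Prints roughly 1.5e+7, i.e. tens of millions of times faster.
```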
Alan Turing predicted in 1950 that we'd have the hardware for human-level AI by about 2000.
That's ~1GB to ~1PB of storage capacity, which he considered to be the limiting factor.
He was about right in terms of where we'd be with hardware, though we still don't have the software side figured out yet.
They are already the most powerful military in the world. What changes?
Lmao do we have an equivalent of /r/confidentlyincorrect on Lemmy?
Yudkowsky has a background in this, purely aside from likely being smarter than any five of us put together. Do let us all know how you're qualified to call him a student, let alone a brainless one.