this post was submitted on 08 Jul 2023
677 points (98.6% liked)

Memes

45895 readers
1120 users here now

Rules:

  1. Be civil and nice.
  2. Try not to excessively repost, as a rule of thumb, wait at least 2 months to do it if you have to.

founded 5 years ago
MODERATORS
 
top 36 comments
[–] [email protected] 57 points 1 year ago (1 children)

News stories like that are nice and all, but this is what the current state of AI is:

The news story just talked about how many neurotoxins it suggested, not how many of them are actually neurotoxins.

It probably printed 40k random chemical formulae.

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago) (1 children)

Edit: I was mistaken, apparently this wasn't generated via an LLM. I'll leave it up, but just know that it doesn't apply to this situation.

Exactly. To dive a little bit deeper into it, the way these LLMs work (at a very, very high level) is by taking the previous input and determining the response one "token" at a time, choosing whichever token has the highest probability of coming next (I'm glossing over this, but it is VERY complex). It iterates this using the input plus the generated token until it decides to stop; however, there is a limit to how many tokens can be input, so at some point the "input" could be entirely made up of the AI's own output.
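The loop described above can be sketched in a few lines. This is a toy, not a real LLM: the vocabulary, the scoring function, and the window size are all invented for illustration, and a real model picks from tens of thousands of tokens using a trained neural network.

```python
# Toy sketch of autoregressive generation with a fixed context window.
# Everything here (vocab, scores, window) is made up for illustration.

def next_token_probs(context):
    # Stand-in for a trained model: returns a probability for each
    # candidate token given the current context. This hypothetical
    # scorer just favors tokens it hasn't seen recently.
    vocab = ["H2O", "C2H5OH", "NaCl", "<stop>"]
    scores = [1.0 / (1 + context.count(t)) for t in vocab]
    total = sum(scores)
    return {t: s / total for t, s in zip(vocab, scores)}

def generate(prompt, max_tokens=8, window=4):
    output = list(prompt)
    for _ in range(max_tokens):
        context = output[-window:]          # older input falls out of the window
        probs = next_token_probs(context)
        token = max(probs, key=probs.get)   # pick the most likely next token
        if token == "<stop>":
            break
        output.append(token)
    return output[len(prompt):]

print(generate(["formula:"]))
```

Note how once the window fills up, the "context" is increasingly the model's own output, which is exactly the degradation described above.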

So this essentially went one of two ways:

  1. They told the AI to give them 40k neurotoxin formulas. Depending on its training data it might've gotten some existing ones, but at some point it probably forgot the original task and started knocking out random chemical formulas because its input was made up of chemical formulas, so it just kept going. Since the input might have originally started with real neurotoxin formulas, later output might have looked somewhat accurate, but this would have degraded over time.

  2. They actually told the AI to do this 40k times and somehow fine-tuned their model to remove or avoid duplicates. Remember how tokens are generated based on probabilities? Well, if you're generating 40k of something, you're probably going to have to widen the acceptable probability, meaning that some of these neurotoxin formulas could've been plain gibberish that even the AI didn't consider a likely candidate.
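The "widen the acceptable probability" idea in point 2 is essentially what sampling temperature does. A hedged sketch, with invented scores: raising the temperature flattens the distribution, so low-scoring (gibberish) candidates get picked far more often.

```python
import math
import random

# Toy illustration of temperature sampling. The scores are made up:
# one plausible candidate and two unlikely ones.

def sample(scores, temperature):
    # Softmax with temperature: higher temperature flattens the
    # distribution, widening the pool of candidates that get picked.
    weights = [math.exp(s / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(range(len(scores)), weights=probs)[0]

random.seed(0)
scores = [5.0, 1.0, 0.5]

low_t  = [sample(scores, 0.5) for _ in range(1000)]
high_t = [sample(scores, 5.0) for _ in range(1000)]

# At low temperature the top candidate dominates; at high temperature
# the unlikely candidates show up far more often.
print(low_t.count(0), high_t.count(0))
```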

Interesting shit, clickbait headline lol.

Note: this is a massive oversimplification of LLMs, the kind that leads people to think they're "basically just a fancy autocomplete". I don't agree: it could probably be argued that LLMs are a fancy autocomplete in the same way a smartphone is a fancy calculator, but it's a silly argument.

[–] [email protected] 10 points 1 year ago (1 children)

Except, to my understanding, it wasn't an LLM. It was a protein-mapping model or something similar. And instead of telling it "run iterations and select the things that are beneficial based on XYZ", they said "run iterations and select based on non-beneficial XYZ".

They ran a protein-coding-type model and told it to prioritize HARMFUL results over good ones, giving it results that would cause harm.

Now, yes, those still need to be verified. But it wasn't just "making things up"; it was using real data to iterate faster than a human could. Very similar to the Folding@home program.
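The point being made here is that the same candidate search can be pointed in opposite directions by flipping one selection criterion. A minimal sketch, assuming a hypothetical toxicity-scoring model (the function and candidate names are invented, not from the actual MegaSyn software):

```python
# Hedged sketch: flipping a search objective from "avoid toxicity"
# to "maximize toxicity". Scores and compound names are invented.

def predicted_toxicity(candidate):
    # Stand-in for a trained toxicity model: just fixed made-up
    # scores per candidate compound.
    return {"cmpd_a": 0.1, "cmpd_b": 0.9, "cmpd_c": 0.4}[candidate]

def select(candidates, maximize_harm=False):
    # Normal drug-discovery use: rank the LEAST toxic candidates first.
    # Flipping a single flag ranks the MOST toxic first instead.
    return sorted(candidates, key=predicted_toxicity, reverse=maximize_harm)

candidates = ["cmpd_a", "cmpd_b", "cmpd_c"]
print(select(candidates))                      # safest first
print(select(candidates, maximize_harm=True))  # most toxic first
```

The model and the search are identical in both calls; only the ordering flag changes, which is why so little "malicious" engineering is needed.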

[–] [email protected] 4 points 1 year ago (1 children)

Oh neat, thanks for the info! I wrongly assumed this was the latest of the ChatGPT clickbait articles jumping on LLM paranoia; I'll correct my comment. Machine learning models like this have been around for a long time (I helped build one almost a decade ago for fraud detection, although it did suck lol), but I guess they're only making headline news now.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

No problem. I'm totally on board with the "LLMs aren't the AI singularity" page. This one is actually kinda scary to me because it shows how easily you can take a model/simulation and, instead of asking "how can you improve this?", ask "how can I make this worse?". The same tool used for good can easily be used for bad when you change the "success conditions" around. Now, it's not the tech's fault, of course; it's a tool, and what matters is how it's used. But it shows how easily a tool like this can be used in the wrong ways with very little "malicious" action necessary.

[–] [email protected] 2 points 1 year ago

The thing is, if you run these tools to find, e.g., cures for a disease, they will also spit out 40k possible matches, and of these there will be a handful that actually work and become real medicine.

I guess harming might be a little easier than healing, but claiming that the output is actually 40k working neurotoxins is clickbaity, misleading and incorrect.

[–] [email protected] 28 points 1 year ago (1 children)
[–] [email protected] 15 points 1 year ago (1 children)

I'm making a note here - huge success.

[–] [email protected] 12 points 1 year ago (1 children)

It's hard to overstate my satisfaction

[–] [email protected] 9 points 1 year ago (1 children)
[–] [email protected] 10 points 1 year ago (1 children)

We do what we must because we can

[–] [email protected] 6 points 1 year ago (1 children)
[–] [email protected] 7 points 1 year ago (2 children)

except the ones who are dead

[–] [email protected] 3 points 1 year ago (1 children)

But there's no sense crying over every mistake

[–] [email protected] 5 points 1 year ago (1 children)

You just keep on trying till you run out of cake

[–] [email protected] 5 points 1 year ago (2 children)

And the science gets done and you make a neat gun

[–] [email protected] 1 points 1 year ago

For the people who are still alive

[–] [email protected] 1 points 1 year ago

But there's no sense crying over every mistake

[–] [email protected] 16 points 1 year ago

So it generated known chemical weapons as well as previously unknown compositions that, to all appearances, would be effective chemical weapons. They didn't actually test them for obvious reasons, but their animal toxicology models made it pretty clear these would be effective toxic chemical compositions that could easily be weaponized, and it did all of this in six hours.

[–] [email protected] 13 points 1 year ago

I mean, you should also consider the effectiveness so that the humans won't survive to rebel

[–] [email protected] 12 points 1 year ago (1 children)

Why are we asking for 40000 chemical weapons and not 40000 EMP devices then?

[–] [email protected] 4 points 1 year ago

Ever seen The Animatrix? It shows how the machines rose up to enslave humans. They used nuclear weapons against humans because the radiation hurt humans but not the machines, even though an EMP would have. If anything, I think our AI overlord would start with a chemical weapon, since that wouldn't hurt it at all and there's no chance of getting caught in the blast or the EMP wave.

[–] [email protected] 9 points 1 year ago (1 children)

An article on the subject.

FTA: "In responding to the invitation, Sean Ekins, Collaborations’ chief executive, began to brainstorm with Fabio Urbina, a senior scientist at the company. It did not take long for them to come up with an idea: What if, instead of using animal toxicology data to avoid dangerous side effects for a drug, Collaborations put its AI-based MegaSyn software to work generating a compendium of toxic molecules that were similar to VX, a notorious nerve agent?

The team ran MegaSyn overnight and came up with 40,000 substances, including not only VX but other known chemical weapons, as well as many completely new potentially toxic substances. All it took was a bit of programming, open-source data, a 2015 Mac computer and less than six hours of machine time. “It just felt a little surreal,” Urbina says, remarking on how the software’s output was similar to the company’s commercial drug-development process. “It wasn’t any different from something we had done before—use these generative models to generate hopeful new drugs.”"

[–] [email protected] 5 points 1 year ago

Starts taking notes

[–] [email protected] 4 points 1 year ago

I wonder how many of them get ya high

[–] [email protected] 2 points 1 year ago

Peak consumerism.
