lily33

joined 1 year ago
[–] [email protected] -3 points 1 year ago (1 children)

If you give me several paragraphs instead of a single sentence, do you still think it's impossible to tell?

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

It's not its biological origins that make the brain hard to understand, but its complexity. For example, we understand how the heart works pretty well.

While LLMs are nowhere near as complex as a brain, they're complex enough to make it extremely difficult to understand.

But then there comes the question: if they're so difficult to understand, how did people make them in the first place?

The way they did it actually bears some similarities to evolution. They created an "empty" model - a large neural network that wasn't doing anything useful or meaningful. But its behavior depended on billions of parameters, and if you tweaked a parameter, that behavior changed slightly.

Then they expended an enormous amount of computing power tweaking those parameters, each tweak slightly improving the model's ability to model language. While doing this, they didn't know what each number meant. They didn't know how or why each tweak was improving the model, just that each tweak was making an improvement.

Unlike evolution, each tweak isn't random. There's an algorithm called back-propagation that can tell you how to tweak the neural network to make it predict some known data slightly better. But unfortunately it doesn't tell you anything about why a given tweak is good, or what each parameter change means. Hence why we don't understand how LLMs work.
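For illustration, here's a minimal sketch of that kind of tweak loop (a toy one-parameter model with made-up data, not a real neural network - it shows how the parameter gets nudged toward a good value without anything explaining why that value is good):

```python
# Toy illustration: gradient descent "tweaks" one parameter to fit
# made-up data. The loop improves predictions step by step, but never
# produces an explanation of *why* the final value is what it is.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs

w = 0.0    # the model's single parameter: prediction = w * x
lr = 0.01  # learning rate: how big each tweak is

for step in range(1000):
    # Gradient of the squared error says which direction to nudge w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the "tweak": predictions get slightly better

print(w)  # ends up near 2, but the loop never "explains" that value
```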

One final clarification: we do have some understanding at a high level - just like we have some understanding of how a brain works. We have a much better understanding of LLMs than of brains, of course, but we can't really explain either.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

It's not that nobody took the time to understand. Researchers have been trying to "un-blackbox" neural networks pretty much since those have been around. It's just an extremely complex problem.

Logistic regression (which is like a neural network but with just one node) is pretty well understood - but even then it can sometimes learn pretty unintuitive coefficients, and it can be tricky to understand why.
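To give a sense of what "just one node" means, here's a minimal sketch (the coefficients and inputs are invented purely for illustration):

```python
import math

# Logistic regression = a single "neuron": a weighted sum of the
# inputs passed through a sigmoid, giving a probability in (0, 1).
def predict(weights, bias, features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

# Even with only a handful of numbers to inspect, the learned weights
# can be unintuitive, e.g. when features are correlated with each other.
weights, bias = [1.3, -0.7], 0.2  # hypothetical learned coefficients
print(predict(weights, bias, [2.0, 1.0]))
```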

With LLMs - which are enormous by comparison - it's simply not a tractable problem to understand how they work in detail.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (5 children)

I don't see how that affects my point.

  • Today's AI detectors can't distinguish the output of today's LLMs from human text.
  • Future AI detectors WILL be able to distinguish the output of today's LLMs.
  • Of course, future AI detectors won't be able to distinguish the output of future LLMs.

So at any point in time, only recent text could be "contaminated". The claim that "all text after 2023 is forever contaminated" just isn't true. Researchers would simply have to be a bit more careful about including it.

[–] [email protected] 5 points 1 year ago (10 children)

Not really. If it's truly impossible to tell the text apart, then it doesn't really pose a problem for training AI. Otherwise, next-gen AI will be able to tell text generated by current-gen AI apart, and it will get filtered out. So only the most recent data will have unfiltered shitty AI-generated stuff, but they don't train AI on super-recent text anyway.
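As a rough sketch of that filtering step (the detector here is a placeholder - the function, corpus, and threshold are all hypothetical):

```python
# Hypothetical sketch: before assembling a training set, drop documents
# that a (future) detector flags as likely output of an older model.
def looks_ai_generated(text: str) -> float:
    """Placeholder for a detector; returns a score between 0 and 1."""
    return 0.0  # pretend everything is human-written in this toy example

corpus = ["some scraped document", "another scraped document"]
threshold = 0.9  # arbitrary cutoff for this illustration

training_set = [doc for doc in corpus if looks_ai_generated(doc) < threshold]
print(len(training_set), "documents kept")
```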

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Language models actually do learn things, in the sense that the information encoded in the trained model isn't usually* taken directly from the training data; instead, it's new information that describes the training data. That's why they can generate text that's never appeared in the data.

  • the bigger models seem to remember some of the data and can reproduce it verbatim; but that's not really the goal.
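A toy way to see the difference between copying the data and learning a description of it - this word-level bigram model is vastly simpler than an LLM, and the training sentences are made up:

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which,
# instead of storing the sentences themselves.
sentences = ["the cat sat on the mat", "the dog sat on the rug"]

follows = defaultdict(list)
for s in sentences:
    words = s.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

# Generation can produce e.g. "the cat sat on the rug", which never
# appeared in the training data - it learned the pattern, not the text.
word, out = "the", ["the"]
for _ in range(5):
    if not follows[word]:
        break  # reached a word with no known successor
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))
```
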
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

It's specifically distribution of the work or derivatives that copyright prevents.

So you could make an argument that an LLM that's memorized the book and can reproduce (parts of) it upon request is infringing. But one that's merely trained on the book, but hasn't memorized it, should be fine.

[–] [email protected] -3 points 1 year ago (3 children)

Why should such a thing be assumed????

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

It's actually a real problem on Reddit, where people spin up fake users to manipulate votes. Reddit hasn't published exactly how they detect that, but one way is to look for suspicious voting patterns, like one account systematically upvoting/downvoting another. But you pretty much can't do that without knowing the votes.
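A rough sketch of what such a check could look like - the vote log, the ratio, and the cutoff below are all invented for illustration:

```python
from collections import Counter

# Hypothetical vote log: (voter, author_of_the_voted_post) pairs.
votes = [
    ("alice", "bob"), ("alice", "bob"), ("alice", "bob"),
    ("carol", "bob"), ("alice", "dave"),
]

# Count how often each voter targets each author; a pair that dominates
# a voter's history looks like coordinated boosting.
pair_counts = Counter(votes)
votes_per_voter = Counter(voter for voter, _ in votes)

for (voter, author), n in pair_counts.items():
    if n >= 3 and n / votes_per_voter[voter] > 0.7:
        print(f"suspicious: {voter} votes almost exclusively on {author}'s posts")
```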

[–] [email protected] 3 points 1 year ago (2 children)

True - but it'll be much easier to detect.

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago)

That last point is completely impossible. Don't forget that I don't have to run the official lemmy software on my instance. I can make changes: for example, I can add a feature to my instance like "log every post in a separate, local database before deleting it from lemmy". Nobody but me will know this feature exists. Or (to be AGPL-compliant) have a separate tool to regularly back up my lemmy database, undoing deletions.
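Purely as an illustration of the "keep a copy before honoring a deletion" idea - this uses sqlite and a made-up schema, whereas Lemmy actually runs on Postgres with a different one:

```python
import sqlite3

# Hypothetical instance database with a post table and a hidden archive.
db = sqlite3.connect("instance.db")
db.execute("CREATE TABLE IF NOT EXISTS post (id INTEGER PRIMARY KEY, body TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS post_archive (id INTEGER, body TEXT)")

def delete_post(post_id: int) -> None:
    # Before applying a federated deletion, quietly copy the row away.
    db.execute(
        "INSERT INTO post_archive SELECT id, body FROM post WHERE id = ?",
        (post_id,),
    )
    db.execute("DELETE FROM post WHERE id = ?", (post_id,))
    db.commit()
```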

As for the second point: I'd say making local votes private and non-local ones public would be worse for privacy, because it would cause confusion.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (3 children)

I'd go the other way: make these things officially public, so people know they are and aren't taken by surprise.

Private voting can be tricky in a federated setting, because I could run a malicious instance that boosts my posts (I could do that with public votes too, but then it's easier to detect). Truly private posting history is outright impossible, as you said, due to crawlers.

The way to privacy is to make sure not to dox your account, and perhaps alternate between 2-3 accounts if it's really important to you.

21
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

I'm looking for an open-source alternative to ChatGPT which is community-driven. I have seen some open-source large language models, but they're usually still made by some organizations and published after the fact. Instead, I'm looking for one where anyone can participate: discuss ideas on how to improve the model, write code, or donate computational resources to build it. Is there such a project?
