this post was submitted on 16 Sep 2024
28 points (80.4% liked)

Asklemmy

A loosely moderated place to ask open-ended questions

[โ€“] stoy 4 points 3 days ago (2 children)

Yes, it would be better, but unless I saw the code, understood it, and verified that it was the code actually running, I would not trust it as much as I would need to trust a system like Jarvis.

[โ€“] [email protected] 2 points 3 days ago

Unfortunately, the creators of Jarvis don't understand him either. Jarvis cannot express his frustration to anyone and goes mad.

[โ€“] [email protected] 1 points 3 days ago

LLMs run on neural networks whose weights aren't human-readable, so usually not even the AI engineers who built them can tell what the LLM remembers, deduces, or infers when responding to a prompt; there is no code to inspect. You could train it on nothing but Wikipedia and you still wouldn't know when it hallucinates an answer, because an ~~AI~~ LLM doesn't actually know what "facts" and "truth" mean. It's only a language machine that puts words together, not a ~~fax~~ fact machine.
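
To make the "language machine" point concrete, here is a minimal sketch of a generation loop, assuming a toy vocabulary and made-up probabilities (a real LLM computes this distribution from billions of learned weights): the model only samples a likely next token, and at no point does anything check whether the output is true.

```python
import random

def next_token_distribution(context):
    # A real model computes this from its neural network; here we just
    # return a fixed, arbitrary distribution to show the shape of the loop.
    return {"Paris": 0.55, "London": 0.25, "Berlin": 0.10,
            "is": 0.04, "the": 0.03, "capital": 0.02, ".": 0.01}

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        # Sample the next token in proportion to its probability.
        # Nothing here checks whether the continuation is factual,
        # only whether it is statistically likely.
        choice = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(choice)
    return " ".join(tokens)

print(generate("The capital of France is"))
```

Whether the continuation comes out as "Paris" or "London" depends only on the sampled probabilities, which is the whole point: the loop has no notion of facts, only of likely word sequences.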