this post was submitted on 24 Jun 2024
30 points (100.0% liked)
technology
you are viewing a single comment's thread
Yeah I just don't see how it's really any different from a human in that respect
Humans are capable of metacognition: holding levels of confidence about the accuracy of their own beliefs. They are also capable of communicating that uncertainty, usually through tone and phrasing.
I suspect that arises from a sort of adversarial or autoregressive interplay between areas of the brain. I also observe that early teens display very low metacognition about the accuracy of what they say: it's a true stereotype that they will pick an argument almost arbitrarily and parrot talking points from online. If LLMs are in a similar position, they might just need an RLHF training flow that mirrors experiences like arguing for BS with your parents, or experiencing failure as a result of misinformation. That's why I think calibration is a matter of instruction fine-tuning rather than some fundamental attribute of LLMs.
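To make the "experiencing failure as a result of misinformation" idea concrete, here is a toy sketch of what such a reward signal could look like. Everything here is hypothetical illustration, not any real RLHF pipeline: the function name, the scalar confidence input, and the penalty weight are all my own assumptions. The point is only that a reward which punishes confident wrong answers harder than hedged wrong answers pushes the model toward calibrated hedging.

```python
# Hypothetical sketch: a calibration-aware reward for an RLHF-style flow.
# A confidently wrong answer is penalized hardest ("failure from
# misinformation"), while hedged answers lose or gain little either way.

def calibration_reward(correct: bool, stated_confidence: float) -> float:
    """Score one answer.

    stated_confidence: 0.0 = fully hedged, 1.0 = flat assertion.
    Confident + right scores highest; confident + wrong scores lowest.
    The 2x penalty weight is an arbitrary illustrative choice.
    """
    if not 0.0 <= stated_confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return stated_confidence if correct else -2.0 * stated_confidence


if __name__ == "__main__":
    print(calibration_reward(True, 0.9))   # confident and right -> 0.9
    print(calibration_reward(False, 0.9))  # confident and wrong -> -1.8
    print(calibration_reward(False, 0.1))  # hedged and wrong -> -0.2
```

Under a reward like this, flat assertion only pays off when the model is actually right, so the optimal policy learns to hedge in proportion to its real uncertainty, which is roughly the metacognitive behavior being discussed.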
It's probably a normal part of human development to build better metacognition by going through an argumentative phase like that.