this post was submitted on 17 Jul 2024
370 points (99.2% liked)

Open Source

[–] [email protected] 3 points 3 months ago (1 children)
[–] [email protected] 3 points 3 months ago (1 children)

It was struggling harder than I was ;-)

[–] [email protected] 8 points 3 months ago* (last edited 3 months ago) (2 children)

I've noticed those language models don't work well for articles with dense information and complex sentence structure. Sometimes they leave out the most important point.

They're useful as a TLDR but shouldn't be taken as fact, at least not yet and not for the foreseeable future.

A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That's why it's so important to mark ~~AI~~ LLM-generated texts (which the TLDR bots do).

[–] [email protected] 5 points 3 months ago

Not calling ML and LLMs "AI" would also help. (I went even further off topic.)

[–] [email protected] 3 points 3 months ago (1 children)

I think the Internet would benefit a lot if people marked their information with sources!

  • source: my brain
[–] [email protected] 2 points 3 months ago

Yeah, that's right. Having to post sources rules out using LLMs for the most part, since most of them do a terrible job of providing them, even when the information happens to be correct for once.