HighlyRegardedArtist

joined 3 weeks ago
[–] [email protected] 2 points 1 hour ago

You can play with words all you like, but that's not going to change the fact that LLMs fail at reasoning. See this Wired article, for example.

[–] [email protected] 1 point 1 hour ago (1 child)

I have to disagree with that. To quote the comment I replied to:

AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued

Where's the "turn of phrase" in this, lol? It could hardly read any more clearly that they assume this "AI" can "figure" stuff out, which is simply false for LLMs. I'm not trying to attack anyone here, but spreading misinformation is not ok.

[–] [email protected] 13 points 5 hours ago (6 children)

Or, hear me out, there was NO figuring of any kind, just some magic LLM autocomplete bullshit. How hard is this to understand?

[–] [email protected] 2 points 8 hours ago

If the reason you want to avoid BitLocker is incompatibility with Linux, you might want to reconsider. It's been many years since I last had BitLocker+NTFS drives, but they worked reasonably well back then with dislocker, so perhaps check that out before looking at alternatives.
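
In case it helps, here's roughly what that looked like on my end; the device name and mount points below are placeholders, so adjust them to your setup:

```bash
# /dev/sdb1 is a placeholder for your BitLocker partition.
sudo mkdir -p /mnt/dislocker /mnt/bitlocker

# Decrypt the volume; -u prompts for the user password.
# This exposes a virtual plaintext image named "dislocker-file".
sudo dislocker -V /dev/sdb1 -u -- /mnt/dislocker

# Loop-mount the decrypted NTFS image like a regular filesystem.
sudo mount -o loop /mnt/dislocker/dislocker-file /mnt/bitlocker
```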

[–] [email protected] -1 points 2 days ago (3 children)

You might want to remember that he has done more to advance open source software than perhaps any other person on this planet. You don't get to take away someone's achievements just because you don't like them...

[–] [email protected] 2 points 5 days ago (1 children)

You can use LUKS for something like this too by mounting a file through a loop device and then using it like any other disk/filesystem. For more details, see: https://wiki.archlinux.org/title/Dm-crypt/Encrypting_a_non-root_file_system#File_container
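
A minimal sketch of what that can look like, following the wiki page above; the file name, mapper name, size, and mount point are just examples, and recent cryptsetup versions attach the loop device for you:

```bash
# Create a 512 MiB container file and format it as a LUKS volume.
sudo dd if=/dev/urandom of=container.img bs=1M count=512
sudo cryptsetup luksFormat container.img

# Open it (cryptsetup sets up the loop device automatically),
# then create a filesystem inside and mount it like any other disk.
sudo cryptsetup open container.img mycontainer
sudo mkfs.ext4 /dev/mapper/mycontainer
sudo mount /dev/mapper/mycontainer /mnt
```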

[–] [email protected] 29 points 5 days ago

Analysts: "Is this 'car' in the room with us right now?"

[–] [email protected] 151 points 5 days ago (12 children)

Musk: "I have concepts of a car."

[–] [email protected] 3 points 5 days ago

I'd put my money on that account having been hacked/sold and gaining a new life as a bot in some disinformation network, ready to spew bias and bullshit when the time and topic are right. There's no other way to explain the comment history up to 9 months ago, then a long silence, and then a restart just a few weeks ago with a complete change in character.

[–] [email protected] 19 points 1 week ago

Idealistically and realistically: Totally and absolutely cool. If anything, they have a moral imperative to keep the project going, since there are users who depend on it, and doing that requires money. As such, people need to be informed of how to contribute, so a pop-up doing just that is a good way to achieve this. Why would this not be ok, even idealistically?

[–] [email protected] 1 point 1 week ago (1 child)

Perhaps LLMs can be used to gain some working vocabulary in a subject you aren't familiar with. I'd say anything more than that is a gamble, since there's no guarantee that hallucinations have not taken place. Remember that to spot incorrect info, you need to already be well acquainted with the matter at hand, which is the polar opposite of just starting to learn the basics.

[–] [email protected] 1 point 1 week ago (3 children)

You do realize that a very thorough manual is but a man bash away? Perhaps it's not the most accessible source available, but it makes up for that in completeness.
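
For anyone who hasn't tried it, the basics are simple (the keyword below is just an example):

```bash
# Open the bash manual; inside the pager, type /PATTERN to search
# and press n to jump to the next match.
man bash

# Or do a quick keyword lookup across all man pages first:
man -k bash
```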
