Traister101

joined 10 months ago
[–] [email protected] 3 points 2 weeks ago (1 children)

Brave is forked from Chromium, so hypothetically they could maintain Manifest V2 support, but they'd need their own extension store, as they currently rely on Google's.
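
For context, the V2-vs-V3 difference shows up right in an extension's manifest. A minimal sketch of a V2 manifest (illustrative names, not from any real extension):

```json
{
  "manifest_version": 2,
  "name": "ExampleBlocker",
  "version": "1.0",
  "background": { "scripts": ["background.js"] },
  "permissions": ["webRequest", "webRequestBlocking", "<all_urls>"]
}
```

Under Manifest V3 the same extension would declare `"manifest_version": 3`, replace the background scripts with a `"service_worker"`, and lose `webRequestBlocking` in favor of `declarativeNetRequest` — the change ad blockers are upset about.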

[–] [email protected] 10 points 2 weeks ago

Yep, that's why I refuse to use standard libraries. It just makes my code too complicated...

[–] [email protected] 66 points 3 weeks ago (2 children)

Yup. The moron even admitted it too!

[–] [email protected] 23 points 3 weeks ago

It's weird that you guys cannot seem to comprehend the idea of devs being paid for their work. Free stuff is great. I put my own shit online for free but you know something I don't do? Maintain an app with 100k+ downloads. Maybe the guy deserves to make some money off his hard work...

[–] [email protected] -2 points 4 weeks ago (1 children)

Wealth issue (not really, shit's cheap)

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

Elon pretended to lean left. He wasn't and never has been left-leaning. He's been the same old guy this entire time; it's just becoming more and more difficult to pretend otherwise.

[–] [email protected] -2 points 1 month ago (2 children)

All cars are bad. The car you already own is less bad than a brand new car

[–] [email protected] 4 points 1 month ago (1 children)

Counterintuitive, but more instructions are usually better. They enable you (but let's be honest, the compiler) to be much more specific, which usually has positive performance implications for minimal, if any, binary size cost. Take SIMD for example: hyper-specific math operations on large chunks of data. These instructions are extremely specific, but when properly utilized they bring huge performance improvements.
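
A rough illustration using numpy (whose array operations dispatch to packed SIMD instructions like `ADDPS` under the hood, where the CPU supports them): one vectorized expression replaces an element-by-element loop.

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float32)
b = np.arange(1_000_000, dtype=np.float32)

# Scalar view: conceptually one add instruction per element.
scalar = [float(x) + float(y) for x, y in zip(a[:4], b[:4])]

# SIMD view: numpy's `+` runs packed adds that process
# several floats per instruction.
vectorized = a + b

print(scalar)          # [0.0, 2.0, 4.0, 6.0]
print(vectorized[:4])  # [0. 2. 4. 6.]
```

Same math either way; the specific instructions just do more of it per cycle.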

[–] [email protected] 1 point 1 month ago (1 children)

I take it you haven't had to go through an AI chatbot for support before, huh?

[–] [email protected] 4 points 2 months ago (3 children)

We do know we created them. The AI everyone is currently freaking out about does a single thing: predict text. You can think of LLMs as a hyper-advanced autocorrect. The main thing that's exciting is that they produce text that looks as if a human wrote it. That's all. They don't have any memory or any persistence whatsoever. That's why we have to feed them a bunch of the previous text (the context) in a "conversation" in order for them to work as convincingly as they do. They cannot and do not remember what you say.
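
A schematic sketch of that context trick (the `fake_llm` stand-in is purely illustrative, not any real model API): the model call is stateless, so the client resends the entire conversation every turn, and all the apparent "memory" is just text stuffed back into the prompt.

```python
def fake_llm(prompt: str) -> str:
    # A real LLM would predict a continuation of `prompt`;
    # this stand-in just reports how much context it was handed.
    return f"(reply to {prompt.count('User:')} user turns)"

def chat_turn(history: list[str], user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    # Stateless call: the model sees ONLY this one concatenated string.
    reply = fake_llm("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
r1 = chat_turn(history, "Hello")
r2 = chat_turn(history, "Remember me?")
print(r1)  # (reply to 1 user turns)
print(r2)  # (reply to 2 user turns) -- it only "remembers" because we resent turn 1
```

Drop the `history` list and every turn starts from a blank slate, which is exactly the point above.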
