[–] rumba 3 points 1 week ago (2 children)

I certainly don't want to run Windows on it :)

I've been running llama to keep my telemetry out of the hands of Microsoft/Google/"open"AI. I'm kind of shocked how much I can do locally with a half-assed video card, an offline model, and a hacked-up copy of SearXNG.
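(For anyone curious what that kind of setup looks like, here's a minimal sketch: query a self-hosted SearXNG instance, then feed the snippets to a local model. It assumes llama-cpp-python and a SearXNG instance with `format=json` enabled in its settings; the URL and model path are placeholders, not the actual config from this comment.)

```python
# Sketch of a local "search + LLM" loop. Assumptions (not from the post):
# llama-cpp-python is installed, SearXNG runs at localhost:8080 with
# format=json enabled in settings.yml, and MODEL_PATH points at a GGUF
# file you've already downloaded.
import requests
from llama_cpp import Llama

SEARXNG_URL = "http://localhost:8080/search"            # hypothetical local instance
MODEL_PATH = "models/llama-3-8b-instruct.Q4_K_M.gguf"   # hypothetical path

def search(query: str, n: int = 5) -> list[str]:
    """Fetch the top result snippets from a local SearXNG instance."""
    resp = requests.get(SEARXNG_URL, params={"q": query, "format": "json"})
    resp.raise_for_status()
    results = resp.json().get("results", [])[:n]
    return [f"{r.get('title', '')}: {r.get('content', '')}" for r in results]

def answer(query: str) -> str:
    """Summarize the search snippets with a locally loaded model."""
    llm = Llama(model_path=MODEL_PATH, n_gpu_layers=-1, n_ctx=4096)
    context = "\n".join(search(query))
    prompt = (f"Using only these search results:\n{context}\n\n"
              f"Answer the question: {query}\nAnswer:")
    out = llm(prompt, max_tokens=256)
    return out["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(answer("What is SearXNG?"))
```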

[–] [email protected] 3 points 1 week ago (1 children)

To me, that's the killer flaw of these things.

It would be great if they were designed from the ground up to be good machines for running models, say with a GPU that had a copious amount of memory and didn't cost $1,500 as an add-on. Unfortunately, to do that they'd have to create something from nothing. Instead, they've added something that is worse than most GPUs, bundled some dumb software to pair with it, disappointed people as the ultimate result, and called it a day.

[–] rumba 2 points 1 week ago

You can get a lot done currently with Arc. The mobile Arc versions share system memory, so if you get a mini PC with Arc and upgrade it to 96GB, you can share system RAM with the GPU and load decently large models. They're a little slow, it not being VRAM and all, but still useful (and cheap).

https://www.youtube.com/watch?v=xyKEQjUzfAk

I have it running on a Zenbook Duo with 32GB, so I can't load the 70B models, but it works shockingly well.
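(Rough sanity check on those sizes: a quantized GGUF model needs about parameters × bits-per-weight ÷ 8 bytes, plus KV-cache overhead on top. The bits-per-weight figures below are assumed averages for common llama.cpp quants, not numbers from this thread.)

```python
# Back-of-the-envelope RAM estimate for quantized GGUF models.
# Bits-per-weight values are rough averages for common llama.cpp quants
# (assumed figures); KV cache and runtime overhead add a few GB on top.
QUANT_BITS = {"Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8}

def model_gb(params_billion: float, quant: str) -> float:
    """Approximate in-memory size in GB for a quantized model."""
    return params_billion * 1e9 * QUANT_BITS[quant] / 8 / 1e9

for size_b in (8, 32, 70):
    print(f"{size_b}B @ Q4_K_M ~ {model_gb(size_b, 'Q4_K_M'):.0f} GB")
# 8B  ~  5 GB -> comfortable on a 32 GB machine
# 32B ~ 19 GB -> tight but workable on 32 GB
# 70B ~ 42 GB -> needs something like the 96 GB upgrade mentioned above
```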