AI ‘dream girls’ are coming for porn stars’ jobs

[email protected] 1 points 6 months ago* (last edited 6 months ago)

Yes. The Llama-70B-derived models, as well as Mixtral 8x7B and the new Mistral Medium 70B, are competitive with ChatGPT 3.5. Most of them can also handle a 16,000-token context, similar to ChatGPT.

You only NEED about 40 GB of free RAM to run them at decent quality, but CPU-only inference is slow.
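
For a rough sense of where that ~40 GB figure comes from, here's a back-of-the-envelope check. It assumes a roughly 4.5-bit GGUF quantization (something like Q4_K_M); the exact bits-per-weight is an assumption on my part.

```python
# Rough memory estimate for a ~4-bit quantized 70B model.
params = 70e9              # 70 billion weights
bits_per_weight = 4.5      # assumed average for a Q4_K_M-style quant
gb = params * bits_per_weight / 8 / 1e9
print(f"~{gb:.0f} GB")     # ~39 GB, before context/KV-cache overhead
```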

With a 24 GB GPU like a 3090 or 4090, you can run them at a reasonable speed with partial GPU offload, about 1-2 words per second. I run 70Bs this way on my own machine.
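
As a minimal sketch of that kind of partial offload, assuming llama-cpp-python as the runtime (the model file name and layer count below are just placeholders):

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

llm = Llama(
    model_path="llama2-70b.Q4_K_M.gguf",  # hypothetical ~40 GB 4-bit GGUF file
    n_gpu_layers=45,   # offload as many of the 80 layers as fit in 24 GB of VRAM
    n_ctx=16384,       # the 16k context mentioned above
)

out = llm("Explain partial GPU offload in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```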

With two 24GB GPUs you can run them very fast, like ChatGPT.
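
The fully offloaded two-GPU case looks roughly like this (again just a sketch under the same assumptions; the even 50/50 split is illustrative):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="llama2-70b.Q4_K_M.gguf",
    n_gpu_layers=-1,           # -1 offloads every layer, so nothing stays on the CPU
    tensor_split=[0.5, 0.5],   # split the weights roughly evenly across both 24 GB cards
    n_ctx=16384,
)
```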


There's of course a whole world in between as well, but those are the rough hardware requirements to match ChatGPT in a self-hosted sort of way.

There's also a newer trick where people stack layers from one model onto another, like a merge but keeping more than 50% of the original layers from each model. "Goliath 120B" and the like are made this way from two different 70Bs. They're even better, but a bit beyond reasonable consumer hardware for now.
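
A very rough conceptual sketch of that layer stacking, just to show the idea: the model names and slice boundaries below are made up, and real merges of this kind are built with dedicated tooling (e.g. mergekit) rather than hand-rolled like this.

```python
import torch
from transformers import AutoModelForCausalLM

# Load two hypothetical 80-layer 70B models (placeholder names).
a = AutoModelForCausalLM.from_pretrained("some-org/model-a-70b", torch_dtype=torch.float16)
b = AutoModelForCausalLM.from_pretrained("some-org/model-b-70b", torch_dtype=torch.float16)

# Stack whole layer ranges from both models into one deeper network,
# keeping well over 50% of each model's original layers.
merged = (
    list(a.model.layers[:40])
    + list(b.model.layers[20:60])
    + list(a.model.layers[40:])
    + list(b.model.layers[60:])
)
a.model.layers = torch.nn.ModuleList(merged)
a.config.num_hidden_layers = len(merged)  # ~140 layers, i.e. a "120B-class" model
```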