[-] [email protected] 5 points 4 days ago

I use a similar feature on Discord quite extensively (custom emotes/stickers) and I don't feel they are just a novelty. It allows us to have inside jokes / custom reactions to specific events, and I really miss it when trying out open source alternatives.

[-] [email protected] 4 points 6 days ago

To be fair to Gemini, even though it is worse than Claude and GPT, the weird answers were caused by bad engineering and not by bad model training. They were forcing the incorporation of Google search results even though the base model would most likely have gotten it right.

[-] [email protected] 5 points 1 month ago

The training doesn't use CSAM; there is a 0% chance big tech would use that in their dataset. The models are somewhat able to link concepts like "red" and "car", even if they have never seen a red car before.

[-] [email protected] 3 points 1 month ago* (last edited 1 month ago)

The models used are not trained on CP. The model weights are distributed freely and anybody can train a LoRA on their own computer. It's already too late to ban open-weight models.

[-] [email protected] 2 points 1 month ago

Google uses their own chips for AI.

[-] [email protected] 3 points 1 month ago

They know the tech is not good enough; they just don't care and want to maximise profit.

[-] [email protected] 4 points 4 months ago

WhatsApp is Europe's iMessage.

[-] [email protected] 4 points 5 months ago

You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it is implemented.

[-] [email protected] 5 points 5 months ago

If you have good enough hardware, this is a rabbit hole you could explore. https://github.com/oobabooga/text-generation-webui/

[-] [email protected] 3 points 5 months ago

Around 48 GB of VRAM if you want to run it in 4-bit.
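A rough back-of-envelope sketch of where a figure like that comes from, assuming a hypothetical ~70B-parameter model and a ~20% overhead factor for KV cache and runtime buffers (both numbers are illustrative assumptions, not specs for any particular model):

```python
# Rule-of-thumb VRAM estimate for running an LLM at a given quantization.
# At 4-bit, each parameter takes half a byte, so weights alone are
# params_in_billions / 2 gigabytes; overhead on top is assumed, not exact.

def estimate_vram_gb(n_params_billion: float,
                     bits_per_weight: int,
                     overhead_factor: float = 1.2) -> float:
    """Weights take n_params * bits / 8 bytes; overhead_factor (assumed
    ~20%) covers KV cache, activations, and runtime buffers."""
    weight_gb = n_params_billion * bits_per_weight / 8  # 1e9 params * bytes/param ≈ GB
    return weight_gb * overhead_factor

# Hypothetical 70B model at 4-bit:
print(round(estimate_vram_gb(70, 4), 1))  # ≈ 42.0 GB, so ~48 GB gives headroom
```

The commenter's 48 GB figure lines up with two 24 GB consumer cards, which leaves extra room for longer context.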

[-] [email protected] 2 points 5 months ago

To run this model locally at GPT-4 writing speed you need at least 2 x 3090 or 2 x 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware though.
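A quick sketch of why the GPU pair sets the speed ceiling: token generation is roughly memory-bandwidth bound, since every weight must be streamed once per generated token. The bandwidth figure below (RTX 3090 ≈ 936 GB/s) and the ~35 GB 4-bit model size are illustrative assumptions:

```python
# Back-of-envelope tokens/sec ceiling for memory-bandwidth-bound generation.
# Each GPU must read its shard of the weights once per generated token.

def max_tokens_per_sec(model_size_gb: float,
                       bandwidth_gb_s: float,
                       n_gpus: int = 1) -> float:
    """Upper bound on generation speed, assuming weights are split evenly
    across GPUs and each token requires one full pass over the shard."""
    shard_gb = model_size_gb / n_gpus      # weights held per GPU
    return bandwidth_gb_s / shard_gb       # full-shard reads per second

# Hypothetical ~35 GB 4-bit model on 2 x RTX 3090 (~936 GB/s each):
print(round(max_tokens_per_sec(35, 936, n_gpus=2), 1))
```

Real throughput lands well below this ceiling (kernel overhead, inter-GPU transfers), but it shows why two high-bandwidth cards comfortably reach readable writing speed.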


L_Acacia

joined 1 year ago