[-] [email protected] 2 points 5 months ago

To run this model locally at GPT-4 writing speed you need at least 2 x 3090 or 2 x 7900 XTX. VRAM is the limiting factor in 99% of cases for inference. You could try a smaller model like Mistral-Instruct or SOLAR with your hardware, though.
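To see why VRAM dominates, a rough back-of-the-envelope sketch (the 20% overhead factor is an assumption, not a measured value):

```python
# Rough VRAM estimate for local LLM inference (hypothetical overhead factor).
def vram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Weight memory plus ~20% for KV cache and activations (assumed)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9 * overhead

# A 70B model at 4-bit quantization:
print(round(vram_gb(70, 4), 1))  # ~42 GB, i.e. two 24 GB cards
```

At full 16-bit precision the same model would need well over 140 GB, which is why quantization plus multiple consumer GPUs is the usual route.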

[-] [email protected] 13 points 6 months ago

I put Zorin on my parents' computer 2 years ago. While it's a great distro, their Windows app support is just marketing: it's an out-of-date Wine version with an unmaintained launcher. Worse than tinkering with Wine yourself.

[-] [email protected] 7 points 7 months ago

It is already here, half of the article thumbnails are already AI generated.

[-] [email protected] 2 points 7 months ago

It works with plugins just like Obsidian, so if their implementation is not good enough, you can always find a Grammarly plugin.

[-] [email protected] 5 points 7 months ago

It does not work exactly like Obsidian, as it is an outliner. I use both on the same vault, and Logseq is slower on larger vaults.

[-] [email protected] 1 points 7 months ago

Do you use ComfyUI?

[-] [email protected] 24 points 7 months ago

You are easier to track with AdNauseam.

[-] [email protected] 6 points 8 months ago

Being able to run benchmarks doesn't make it a great experience to use, unfortunately. 3/4 of applications don't run or have bugs that the devs don't want to fix.

[-] [email protected] 7 points 8 months ago

Windows does not run well on ARM, which can be a turnoff for some.

[-] [email protected] 9 points 8 months ago* (last edited 8 months ago)

Llama models tuned for conversation are pretty good at it. ChatGPT also was, before getting nerfed a million times.

[-] [email protected] 1 points 9 months ago

Even dumber than that: when their activation method fails, support uses massgrave to install Windows on customer PCs.


L_Acacia

joined 1 year ago