this post was submitted on 02 Oct 2023
27 points (96.6% liked)

LocalLLaMA

Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago

Trying something new: going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and the answering.

Depending on activity level I'll either make a new one once in awhile or I'll just leave this one up forever to be a place to learn and ask.

When asking a question, try to make it clear what your current knowledge level is and where you may have gaps; that should help people provide more useful, concise answers!

[–] SuperSpruce 3 points 7 months ago (4 children)

Late to the party, I never got FOSAI working until I found LMStudio, but I have 2 questions:

  1. Is there any way I could utilize my GPU, a Radeon RX6800M (12GB VRAM)? I got Mistral-7B doing 5 tokens/s but it's all running on the CPU.

  2. Is there any model specifically for programming questions? This could be of immense help to my projects without having to ask ChatGPT.
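On question 1, whether a model fits in 12GB of VRAM mostly comes down to parameter count times bits per weight. A rough back-of-the-envelope sketch (weights only, ignoring KV cache and runtime overhead; the ~4.5 bits/weight figure for Q4_K_M is an approximation):

```python
# Rough VRAM estimate for fitting model weights on a GPU.
# Counts weight memory only; KV cache and runtime overhead add more on top.

def model_vram_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB: params * bits / 8 bytes, in GiB."""
    return n_params * bits_per_weight / 8 / (1024 ** 3)

# Mistral-7B has roughly 7.24e9 parameters.
MISTRAL_7B = 7.24e9

for bits, name in [(16, "fp16"), (8, "Q8_0"), (4.5, "Q4_K_M (approx.)")]:
    gib = model_vram_gib(MISTRAL_7B, bits)
    fits = "fits" if gib < 12 else "does not fit"
    print(f"{name}: ~{gib:.1f} GiB -> {fits} in 12 GiB VRAM")
```

So the full fp16 weights (~13.5 GiB) won't fit in 12 GiB, but common 4- and 8-bit quantizations leave plenty of room for offloading all layers to the GPU.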

[–] [email protected] 1 points 4 weeks ago (2 children)

I got a question about LMStudio! Is it FOSS, or is it just partly open?

On their website I see that they do have a github link, but I can't identify the "main" project.

[–] SuperSpruce 1 points 4 weeks ago (1 children)

Looks like LMStudio is FOSS, although I'm not 100% sure. What it does is allow you to run FOSAI models locally.

[–] [email protected] 1 points 4 weeks ago

Yeah, that I understand. I was just curious, since currently I'm using ollama, which is fully FOSS, and some web UI to work with the LLMs in chat, but having it all in one place would be really nice.

I've heard some good things about LMStudio, but if it's not FOSS, it's not getting on my machine.
