this post was submitted on 11 Sep 2023
LocalLLaMA
you are viewing a single comment's thread
That is correct behaviour. At some point the model decides this is the text you requested and follows it up with an EOS (end-of-sequence) token. You either need to suppress that token and force it to generate endlessly (with your `--unbantokens` you activate the EOS token and hence this behaviour), or manually add something and hit 'Generate' again. For example, just a line break after the text often does the trick for me. I can take a screenshot tomorrow.
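To illustrate what banning vs. un-banning the EOS token means, here is a minimal sketch of a sampling loop in Python. This is not koboldcpp's actual code; the `model` callable and the token IDs are placeholders for illustration only.

```python
# Minimal sketch (not koboldcpp's code): what "banning" the EOS token amounts
# to inside a sampling loop. With the ban in place the model can never emit
# EOS and generation runs until the token limit; without the ban (the effect
# of --unbantokens) the model may emit EOS and stop on its own.
import numpy as np

def sample_next_token(logits: np.ndarray, eos_token_id: int, ban_eos: bool) -> int:
    if ban_eos:
        logits = logits.copy()
        logits[eos_token_id] = -np.inf   # EOS can never be chosen
    # plain greedy pick for illustration; real samplers add temperature/top-p etc.
    return int(np.argmax(logits))

def generate(model, prompt_ids, eos_token_id, max_new_tokens=200, ban_eos=False):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(ids)              # hypothetical callable returning next-token logits
        tok = sample_next_token(logits, eos_token_id, ban_eos)
        if not ban_eos and tok == eos_token_id:
            break                        # the model decided the requested text is finished
        ids.append(tok)
    return ids
```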
Edit: Also, your RoPE config doesn't seem correct for a SuperHOT model, and the prompt in your screenshot isn't what I'd expect for a WizardLM model. I'll see if I can reproduce your issues and write a few more words tomorrow.
Edit2: Notes:
`--contextsize 8192 --ropeconfig 0.25 10000`

`--unbantokens` if you don't want it to stop.

WizardLM prompt format (Vicuna-style):

`A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello. USER: Who are you? ASSISTANT: I am WizardLM.......`

Alpaca-style instruction format:

`Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n\n\n### Input:\n\n\n### Response:\n`
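If you drive koboldcpp through its API instead of the web UI, something like the sketch below shows how those two prompt formats can be built and sent. The endpoint and JSON fields follow the KoboldAI API that koboldcpp exposes, but treat the exact names, the port, and the sampler values as assumptions and check the API docs of your own instance.

```python
# Sketch: build the two prompt formats shown above and send one to a locally
# running koboldcpp instance. Endpoint/field names assumed from the KoboldAI
# API (/api/v1/generate on port 5001 by default); verify against your build.
import requests

def wizardlm_vicuna_prompt(user_message: str) -> str:
    # Vicuna-style chat format used by WizardLM
    return (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions. "
        f"USER: {user_message} ASSISTANT:"
    )

def alpaca_prompt(instruction: str, inp: str = "") -> str:
    # Alpaca-style instruction format
    return (
        "Below is an instruction that describes a task, paired with an input that "
        "provides further context. Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Input:\n{inp}\n\n### Response:\n"
    )

if __name__ == "__main__":
    payload = {
        "prompt": wizardlm_vicuna_prompt("Who are you?"),
        "max_context_length": 8192,   # should match --contextsize
        "max_length": 200,
        "temperature": 0.7,
    }
    r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=120)
    print(r.json()["results"][0]["text"])
```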
The model you chose is somewhat poorly documented and a bit older; I'm not sure it's the best choice.
Edit3: I've put this in better words and made another comment including screenshots and my workflow.
Yeah, I think you need to set the `--contextsize` and `--ropeconfig`. The documentation isn't completely clear and in some places sort of implies they should be autodetected from the model when using a recent version, but the first thing I would try is setting them explicitly, as this definitely looks like an encoding issue.
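For what it's worth, here is a rough sketch of what the linear ("SuperHOT-style") scaling behind `--ropeconfig 0.25 10000` does, as I understand it. It is not koboldcpp's actual implementation, just the underlying idea: a frequency scale of 0.25 squeezes positions by a factor of 4, so an 8192-token context maps back into the 2048 positions the base model was trained on. The `head_dim` value is only an example.

```python
# Rough sketch of linear RoPE scaling, the idea behind
# --ropeconfig <freq-scale> <freq-base> for SuperHOT-style models.
import numpy as np

def rope_angles(position: int, head_dim: int = 128,
                freq_base: float = 10000.0, freq_scale: float = 1.0) -> np.ndarray:
    # one rotation angle per pair of dimensions; freq_scale < 1 compresses positions
    dims = np.arange(0, head_dim, 2)
    inv_freq = freq_base ** (-dims / head_dim)
    return (position * freq_scale) * inv_freq

# With --ropeconfig 0.25 10000, position 8000 is rotated the same way that
# position 2000 would be with the default scale of 1.0:
assert np.allclose(rope_angles(8000, freq_scale=0.25), rope_angles(2000, freq_scale=1.0))
```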