No experience myself, but just adding that long-context models have a tendency to 'forget' what's in the middle of the text. Worth noting if you work on long texts, I assume. I can't remember the paper, though. There are so many...
Lost in the middle: https://arxiv.org/abs/2307.03172
Happens for all models, not just Llama, and it is really frustrating to deal with.
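If you want to see it on your own setup, here's a rough sketch of the kind of probe the paper runs: plant a 'needle' fact at different depths in filler text and check whether the model can still answer a question about it. The model name and prompt wording are just placeholders, so swap in whatever you run locally.

```python
# Rough position-sensitivity probe (a sketch, not the paper's actual benchmark).
# Assumes a local Hugging Face causal LM; the model name is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder, any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

needle = "The secret code word is 'aubergine'."
filler = "The quick brown fox jumps over the lazy dog. " * 40  # padding text
question = "\nQuestion: What is the secret code word?\nAnswer:"

for position in ["start", "middle", "end"]:
    if position == "start":
        context = needle + " " + filler + filler
    elif position == "middle":
        context = filler + needle + " " + filler
    else:
        context = filler + filler + needle

    inputs = tokenizer(context + question, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=10, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
    print(position, "->", answer.strip())
```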
But is that a bug or a feature? I think it is plausible that relevant information is most likely either at the beginning of a document or in the previous few lines. So that is where attention should be focused.
Like when you get an assignment, the important instructions are at the beginning and not somewhere in the middle. And when writing a document or a book, the most important thing is that your current sentence fits in with the paragraph. At that point you don't worry about remembering exactly what the hobbits did back in the Shire.
I remember reading some criticism of that paper, but I cannot comment on the technical aspects.
You raise an interesting point, though: most examples likely follow exactly the pattern you suggest. Teaching a model to focus on middle content would take a large amount of training data built specifically for that, and there probably just isn't enough of it in the dataset.
In my application (summarising excerpts from several papers) it is a bug. I had assumed the context would be given equal weight throughout, but the distribution of information in the generated summaries suggests it follows the lost-in-the-middle shape. This is most evident when the early chunks of text say something that is contradicted in the middle. I'd expect the models to at least mention the contradiction, but it hasn't come up in any of the summaries I've looked at.
I can see what you mean: when generating text you need to pay the most attention to what you just wrote, but you also don't want to claim the hobbits started out in Mordor. I have no idea how to mitigate it, other than making the context short enough that it is all 'remembered'.
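One thing I'm considering trying (just a sketch, not something I've validated) is a map-reduce style pass: summarise each excerpt in its own short prompt so nothing ends up buried in the middle of a huge context, then summarise the summaries. The summarize() helper below is hypothetical and just stands in for whatever call runs a single prompt against your model.

```python
# Map-reduce style summarisation sketch: keep every prompt short so no chunk
# ends up "in the middle" of a long context.
# summarize() is a hypothetical helper wrapping whatever model call you use.

def summarize(text: str, prompt: str = "Summarise the following text:") -> str:
    """Placeholder: send one short prompt to your local model and return its reply."""
    raise NotImplementedError("wire this up to your own model/backend")

def summarize_papers(excerpts: list[str], chunk_chars: int = 4000) -> str:
    partial_summaries = []
    for excerpt in excerpts:
        # Split each excerpt so every call stays well under the context limit.
        chunks = [excerpt[i:i + chunk_chars] for i in range(0, len(excerpt), chunk_chars)]
        chunk_summaries = [summarize(c) for c in chunks]
        partial_summaries.append(summarize("\n".join(chunk_summaries),
                                           prompt="Combine these notes into one summary:"))
    # Final pass: per-paper summaries are now short and sit next to each other,
    # so contradictions between papers are no longer buried mid-context.
    return summarize("\n\n".join(partial_summaries),
                     prompt="Write a combined summary, noting any contradictions:")
```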
If you remember where you read some criticism, I'd be very grateful for a link. That paper is doing a lot of heavy lifting in how I understand what I'm seeing, so it would be good to know where the holes in it are.
Sorry, I didn't find it. If I remember correctly, the criticism was either that they used models whose foundation model was trained on fewer (2048?) tokens, or that the measurement/benchmark was too 'synthetic' and not meaningful for real-world scenarios, or something along those lines.
I read this: https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/ (And maybe also related to this topic: https://arize.com/blog/lost-in-the-middle-how-language-models-use-long-contexts-paper-reading/ and https://github.com/THUDM/LongBench )
Also: I've played around a bit with LLaMA and haven't had good results with summarizing things whatsoever. Maybe it's not the context length but the wrong model for the task? Aren't there other language models out there specifically suited to summarization? LLaMA is kind of a generalist and maybe just not exceptionally good at this specific task.
https://huggingface.co/learn/nlp-course/chapter7/5?fw=tf#models-for-text-summarization and https://www.width.ai/post/bart-text-summarization
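For example, something like this with the Hugging Face pipeline API and facebook/bart-large-cnn (just the example checkpoint from those guides; note BART only takes roughly 1024 tokens of input, so long excerpts would still need chunking):

```python
# Minimal sketch: a dedicated summarisation model instead of a generalist LLM.
# facebook/bart-large-cnn is the example checkpoint from the linked guides.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = "..."  # one excerpt; BART takes ~1024 tokens, so chunk longer inputs
result = summarizer(text, max_length=130, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```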
Regarding the original question: I'm not sure whether KoboldCPP does it correctly for the newer 4k context length. For me it says
Using automatic RoPE scaling (scale:1.000, base:32000.0)
But is that the correct base value? That's the same as if I were using a LLaMA 1 model with artificially increased context length.

You are supposed to manually set scale to 1.0 and base to 10000 when using Llama 2 with 4096 context. The automatic scaling assumes the model was trained for 2048. Though as I say in the OP, that still doesn't work, at least with this particular fine-tune.
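For reference, this is roughly what the explicit settings look like. I'm showing llama-cpp-python because those are the parameter names I can actually check; in KoboldCPP the equivalent should be the --ropeconfig flag (scale first, then base), if I remember it right. The model path is just a placeholder.

```python
# Sketch: set the RoPE parameters explicitly instead of relying on automatic
# scaling. Llama 2 was trained at 4096 context with the default base of 10000,
# so no scaling should be needed.
from llama_cpp import Llama

llm = Llama(
    model_path="./path/to/llama-2-13b-model.bin",  # placeholder path
    n_ctx=4096,              # Llama 2's native context length
    rope_freq_scale=1.0,     # no linear scaling
    rope_freq_base=10000.0,  # the base Llama 2 was trained with
)
```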
I was unaware that the smaller-context models exhibited the same effect. It does seem logical that we naturally put broad, important information and conclusions at the ends of a text. I haven't read the paper yet, but I wonder whether the training set (our communication) also contains more information at the ends, so that the effect isn't caused by the algorithm but by the data. I'll give the paper a read, thanks.