Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, upvote it. If it's something that's widely accepted, downvote it.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's keep this community about [GENERAL] and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn't give anyone the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. If you do this too often, you will get a vacation to touch grass, away from this community, for one or more days. Repeat offenses will result in a permanent ban.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
So you already have research showing that GPT LLMs are capable of modeling aspects of their training data at much deeper levels of abstraction than mere surface statistics of words, and research showing that the most advanced models already generate novel outputs distinct from anything in the training data, by virtue of the sheer number of different abstract concepts they combine from what was learned during training.
Like - have you actually read any of the ongoing research in the field at all? Or just articles written by embittered people who generally misunderstand the technology? (For example, if you ever see someone refer to these models as Markov chains, that person has no idea what they are talking about: the key feature of the transformer architecture is the self-attention mechanism, which conditions each output on the entire preceding context, negating the Markov property that characterizes Markov chains in the first place.)