this post was submitted on 14 Dec 2024
74 points (100.0% liked)

top 27 comments
[–] [email protected] 17 points 1 week ago (4 children)

I'm of the same opinion, but why does Meta give a shit?

[–] [email protected] 16 points 1 week ago (1 children)

They don’t want to pay for-profit prices?

[–] [email protected] 5 points 1 week ago

They don’t use OpenAI at all.

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago) (2 children)

Meta has some heavy sins, but they’ve pushed ‘open’ ML for a long time. They developed, and continue to fund, PyTorch, and they basically standardized the open LLM architecture with Llama to the point that literally everyone publishing weights uses it almost unmodified now, just to name two examples.

They also have a commercial interest in their open weight model ecosystem succeeding over OpenAI’s completely closed models and research. And TBH they have a good shot, as OpenAI really seems to have stagnated.

Also, Altman is a straight up con artist. Like, more than most even realize. I wouldn’t be surprised if Facebook employees hate him.

[–] MCasq_qsaCJ_234 3 points 1 week ago (1 children)

If OpenAI becomes for-profit, it will have more resources to finance itself. It is currently in a similar situation to Mozilla, and it is not the first such case; there have been several.

One example was Mastercard, which in the process created a foundation with the same name that is also very rich. OpenAI will probably follow a similar path.

Also, Llama is not open source according to the OSI

[–] [email protected] 0 points 1 week ago (1 children)

It’s not open source, but the weights are open, documented, and relatively permissively licensed, and all the inference/finetuning libraries for it are open source.
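
To illustrate the open tooling point, here is a minimal sketch of loading an open-weight model with the open source Hugging Face transformers library. The repo id, prompt, and generation settings are just placeholders (and gated repos like Llama’s require accepting Meta’s license and logging in first):

```python
# Minimal sketch: running openly published weights with the open source
# `transformers` library. The repo id below is an example placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "meta-llama/Llama-3.1-8B-Instruct"  # any open-weight repo works here

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # half precision to fit consumer GPUs
    device_map="auto",           # needs `accelerate`; spreads layers across devices
)

prompt = "Why do open model weights matter?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```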

[–] MCasq_qsaCJ_234 2 points 1 week ago (2 children)

I understand, but Meta has the rights to Llama and at any time they can change that license to make it less open just to make more money.

Currently it is open weight to attract customers; once there are no competitors, they will start to squeeze them.

[–] [email protected] 3 points 1 week ago (1 children)

Also, competition is stiff. Alibaba is currently handing them their butts with Qwen 2.5. Deepseek (a Chinese startup), Tencent and Mistral (French) are giving them a run for their money too, and there are even some that “continue train” old weights.

[–] MCasq_qsaCJ_234 1 points 1 week ago (1 children)

And what are some examples of those who continue training old weights?

[–] [email protected] 1 points 1 week ago (1 children)

A small startup called Arcee AI actually “distilled” logits from several other models (Llama, Mistral) and used the data to continue training Qwen 2.5 14B (which itself is Apache 2.0). It’s called SuperNova Medius, and it’s quite incredible for a 14B model… SOTA as far as I know, even with their meager GPU resources.
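
For a rough idea of what distilling logits into a continued-training run looks like, here is a minimal PyTorch sketch. The thread doesn’t describe Arcee’s actual recipe; the temperature, loss weighting, and the assumption of a shared tokenizer between teacher and student are all made up for illustration:

```python
# Minimal sketch of logit distillation, assuming teacher and student share a
# tokenizer/vocabulary. Not Arcee's actual recipe; hyperparameters are invented.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha=0.5, temperature=2.0):
    """Blend the usual next-token loss with a KL term pulling the student
    toward the teacher's token distribution."""
    t = temperature
    soft_targets = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student), scaled by T^2 as in standard distillation
    kl = F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * (t * t)
    # Ordinary cross-entropy against the ground-truth tokens
    ce = F.cross_entropy(student_logits.reshape(-1, student_logits.size(-1)),
                         labels.reshape(-1))
    return alpha * kl + (1 - alpha) * ce
```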

A company called Upstage “expands” models to larger parameter counts by continuing to train them. Look up the SOLAR series.

And quite notably, Nvidia continued training Llama 3.1 70B and published the weights as Nemotron 70B. It was the best 70B model for a while, and may still be in some areas.

And some companies like Cohere continuously train the same model slowly, and offer it over API, but occasionally publish the weights to promote them.

[–] MCasq_qsaCJ_234 1 points 1 week ago (1 children)

The fact that there is AI with open source licenses is already a good thing, as is the competition, although in my opinion it is not enough, because the sector can still consolidate further into oligopolies.

Trying to prevent OpenAI from becoming a for-profit seems to me to be a questionable tactic. It's as if Mozilla wanted to become a for-profit company in order to make Firefox more competitive with Chrome, but Google opposed the move.

[–] [email protected] 1 points 1 week ago (1 children)

Well, for one, I directly disagree with Altman’s fundamental proposition: they don’t need to “scale” AI so dramatically to make it better.

See: Qwen 2.5 from Alibaba, a fraction of the size, made with a tiny fraction of the H100 GPUs, and highly competitive (and (mostly) Apache licensed). And frankly, OpenAI is pointedly ignoring all sorts of open research that could make their models notably better, more powerful, or more efficient, even with the vast resources and prestige they have… they seem most interested in anticompetitive efforts to regulate competitors that would make them look bad, using the spectre of actual AGI (which has nothing to do with transformer LLMs) to scare people.

Even if it were for the wrong reasons, I feel like Google would be right to oppose Mozilla axing its nonprofit division if Mozilla were somehow in a similar position to OpenAI. Its mission of producing a better, safer browser would basically become a lie.

[–] MCasq_qsaCJ_234 1 points 1 week ago

OpenAI has different priorities: they want to achieve AGI, so they focus on exploring the capabilities of AI rather than looking at what the competition does in those directions to replicate and/or improve it. They only optimize to make their services faster and less resource-hungry.

Also, becoming a for-profit organization doesn't mean you eliminate your non-profit division. The two parts separate and become independent, although the nonprofit ends up getting considerable funds out of the financing the other part receives.

That is the case with Mastercard, whose nonprofit foundation is one of the richest in the world. In that scenario, Mozilla would split into two entities: one would focus on making a profit and making Firefox more competitive, while the other would focus on what Mozilla currently does.

[–] [email protected] 1 points 1 week ago (1 children)

No, they can’t, because you can just pull the git repo with the old license and use the weights as they were at the time of upload, just like any software on a git repository. And too many people have them downloaded to delete them from the internet.

There are also finetunes inheriting the old license, and those orgs are not going to pull the weights.
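
As a concrete sketch of pinning weights to the state (and license) they had at upload time, assuming the repo lives on the Hugging Face Hub, it's just git revision pinning; the repo id and commit hash below are placeholders, not real values:

```python
# Minimal sketch: download a model repo pinned to a specific git revision,
# so later license or weight changes upstream don't affect what you run.
# Repo id and revision are placeholders.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="some-org/some-open-weight-model",
    revision="0123456789abcdef0123456789abcdef01234567",  # commit hash, tag, or branch
)
print("Weights cached at:", local_dir)
```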

[–] MCasq_qsaCJ_234 2 points 1 week ago (1 children)

And in that case, will the Llama fork be the same as Meta's fork? We are talking about AI that requires considerable development; companies would probably not participate, because it is not an open source license and its clauses are limiting in those respects.

Also, you have to consider: if the new version of Llama under the new license is 3 times better than Llama under the previous license, do you really think the community will continue to develop the previous version?

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

And in that case, will the Llama fork be the same as Meta's fork? We are talking about AI that requires considerable development; companies would probably not participate, because it is not an open source license and its clauses are limiting in those respects.

Llama has tons of commercial use even with its “non open” license, which is basically just a middle finger to companies the size of Google or Microsoft. And yes, companies would keep using the old weights like nothing changed… because nothing did. Just like they keep using open source software that goes through drama.

Also, you have to consider: if the new version of Llama under the new license is 3 times better than Llama under the previous license, do you really think the community will continue to develop the previous version?

Honestly I have zero short term worries about this because the space is so fiercely competitive. If Llama 3 was completely closed, the open ecosystem would have been totally fine without Meta.

Also much of the ecosystem (like huggingface and inference libraries) is open source and out of their control.

And if they go API only, they will just get clobbered by Google, Claude, Deepseek or whomever.

In the longer term… these transformer-style models will be obsolete anyway. Honestly I have no idea what the landscape will look like.

[–] MCasq_qsaCJ_234 1 points 1 week ago (1 children)

Well, I agree that we don't know what the situation will look like over time.

There may be a limit that will cause another AI winter, driving companies away for a while because they invested money and received little.

Transformers may remain relevant or end up obsolete, although many papers related to AI in one way or another are still being published.

[–] [email protected] 1 points 1 week ago (1 children)

The limit is already (apparently) starting to be data… and capital, lol.

There could be a big computing breakthrough, like, say, fast bitnet training, that makes the multimodal approach much easier to train, though.

[–] MCasq_qsaCJ_234 1 points 1 week ago

I think this method is not convincing for companies, because they prefer more power and to do it on their own; they don't want their ideas to be replicated.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (1 children)

Can you say all that but with synonyms of the words "open" and "close" where they aren't part of names? 😵‍💫

[–] [email protected] 4 points 1 week ago

Call them ClosedAI like the community does, and it’s much easier, lol.

[–] Brain 4 points 1 week ago

They have Meta AI baked into their apps now and are building a $10 billion AI facility in Louisiana that they want to run on nuclear power.

So I'm guessing the plan is to hurt the competition that is already ahead of them and hopefully get enough time to buy them out or pass them.

[–] [email protected] 2 points 1 week ago

The consumer protection we get in the US is the Zuck.

[–] [email protected] 8 points 1 week ago

You know things are bad when Zuckerberg is looking like the good guy.

[–] possiblylinux127 3 points 1 week ago (1 children)
[–] MCasq_qsaCJ_234 6 points 1 week ago (1 children)

It seems normal to me that a company takes questionable actions to avoid more competition.

[–] possiblylinux127 1 points 1 week ago

It seems fitting for them to get burned because of it.