this post was submitted on 14 Dec 2024
74 points (100.0% liked)

[–] [email protected] 1 points 1 week ago (1 children)

Well, for one, I directly disagree with Altman’s fundamental proposition: they don’t need to “scale” AI so dramatically to make it better.

See Qwen 2.5 from Alibaba: a fraction of the size, made with a tiny fraction of the H100 GPUs, yet highly competitive (and (mostly) Apache licensed). And frankly, OpenAI is pointedly ignoring all sorts of open research that could make their models notably better or more efficient, even with the vast resources and prestige they have… they seem most interested in anticompetitive efforts to regulate competitors that would make them look bad, using the spectre of actual AGI (which has nothing to do with transformer LLMs) to scare people.

Even if it did so for the wrong reasons, I feel like Google would be right to oppose Mozilla axing its nonprofit division if Mozilla were somehow in a position similar to OpenAI’s. Its stated mission of producing a better, safer browser would basically be a lie.

[–] MCasq_qsaCJ_234 1 points 1 week ago

OpenAI has different priorities: they want to achieve AGI, so they seek to explore the capabilities of AI rather than look at what the competition is doing in those areas in order to replicate and/or improve on it. They only optimize to make their services faster and less resource-intensive.

Also, becoming a for-profit organization doesn't mean you eliminate your nonprofit division. The two parts separate and become independent, although the nonprofit ends up receiving considerable funds out of the funding the for-profit part takes in.

That's how it worked with Mastercard, whose nonprofit foundation is one of the richest in the world. In that scenario, Mozilla would split into two entities: one would focus on turning a profit and making Firefox more competitive, while the other would carry on with what Mozilla currently does.