Akisamb

joined 1 year ago
MODERATOR OF
[–] [email protected] 5 points 2 months ago

French here we use both the middle and the left. It depends on the group of friends.

[–] [email protected] 6 points 2 months ago

Now instead of just querying the goddamn database, a one line fucking SQL statement, I have to deal with the user team

Exactly, you understand very well the purpose of microservices. You can submit a patch if you need that feature now.

Funnily enough I'm the technical lead of the team that handles the user service in an insurance company.

Due to direct access to our data without consulting us, we're getting legal issues as people were using addresses to guess where people lived instead of using our endpoints.

I guess some people really hate the validation that service layers have.

[–] [email protected] -1 points 2 months ago* (last edited 2 months ago) (2 children)

En même temps, pas très malin d'utiliser le dog whistle <<du fleuve jusqu'à la mer>>.

Ce qui sous entend au pire un génocide et au mieux la situation des juifs de l'Algérie après la libération.

Assez surpris que LFI l'ait accepté. Après les républicains ont bien Meyer Habib.

[–] [email protected] 3 points 2 months ago

Ça fonctionne bien, ça m'a bien mis Volt comme premier choix.

[–] [email protected] 6 points 3 months ago (2 children)

La Russie a fait tout ce qu'elle a pu pour éviter la guerre en essayant de menacer ses voisins pour éviter qu'ils rejoignent l'OTAN ?

Surtout que ces menaces n'ont pas arrêté après l'invasion de la Crimée. l'Ukraine a le droit de former les alliances qu'elle veut, surtout pour se défendre d'un envahisseur qui ne veut pas arrêter.

Paie ta propagande Russe.

[–] [email protected] 1 points 3 months ago

Le gouvernement avait mis en place une interdiction de ventes d'armes plus stricte en nouvelle Calédonie.

Cette interdiction a été supprimé par le gouvernement local en 2011.

Évidemment que le gouvernement ne désire pas cette situation, ça provoque de l'instabilité. C'est comme dire que l'OAS a aidé la France en Algérie.

[–] [email protected] 2 points 3 months ago (1 children)

You are in a bubble. A neo nazi march was banned two weeks ago in France before being allowed again by the judicial system. The exact same scenario has been repeating for pro-palestine protests.

At least in France, the scenario seems to be that the government wants to ban any controversial march and is being kept under control by the justice system.

[–] [email protected] 8 points 3 months ago* (last edited 3 months ago) (1 children)

I also have a similar experience, I was mugged at knife point and spit on by two adolescents. After that I was jumpy around groups of teens.

That said , I do not think my fear of teens was rational, neither was it healthy. Only a small minority of teens will mug people. Fearing a whole group for the actions of the few is in human nature, but it is something we must fight against.

I mean what is the end goal if women are in fear of men ? You can probably reduce violent crime even more, but it remains a rare event. Only 31 out of 1000 people were victims of a violent crime in the UK in 2010. If that doesn't work, what remains? Sex segregation ?

[–] [email protected] 3 points 3 months ago

Les rames de train ne sont qu'une partie du billet. Le gros reste l'entretien des rails (et remboursement des prêts pour construire ces rails). Vu le prix au km des nouvelles constructions, le prix du TGV ne risque pas de baisser.

[–] [email protected] 2 points 3 months ago (1 children)

Explication alternative, un procureur de la république peut porter plainte d'office,c'est-à-dire sans qu'il ait été saisi par la victime.

[–] [email protected] 15 points 4 months ago (1 children)

I'm afraid that would not be sufficient.

These instructions are a small part of what makes a model answer like it does. Much more important is the training data. If you want to make a racist model, training it on racist text is sufficient.

Great care is put in the training data of these models by AI companies, to ensure that their biases are socially acceptable. If you train an LLM on the internet without care, a user will easily be able to prompt them into saying racist text.

Gab is forced to use this prompt because they're unable to train a model, but as other comments show it's pretty weak way to force a bias.

The ideal solution for transparency would be public sharing of the training data.

 

cross-posted from: https://kbin.social/m/machinelearning/t/98088

Abstract:

Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at https://inversescaling.com/data to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.

 

Hyena Hierarchy seems to aim to be a drop-in replacement for attention : https://arxiv.org/pdf/2302.10866.pdf

It looks good on paper, but I haven't been able to find anybody using it in a model. Does anyone have an example of a code or implementation ? Is there really a big improvement on long context lengths ?

view more: ‹ prev next ›