dartos

joined 1 year ago
[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (2 children)

I have no goal here. Just sharing my opinions. Not failing to do anything.

Yeah, being aggressive is good for driving people away. And yknow, given that your goal is actually to drive people away, I was wrong to say it’s immature.

I just don’t like aggression. I don’t go on the internet looking for fights.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (5 children)

They can do whatever they want.

I don’t care what other people do, I just ignore people I don’t think are worth failing with.

And yeah pass judgement if you want, but how I choose to deal with people on the internet is up to me.

[–] [email protected] 1 points 11 months ago (3 children)

It’s just polarizing. You’re making people more staunch in their beliefs, or annoying people who would rather not deal with aggression (like myself).

If your goal is to drive people away and make a space where everyone just agrees with you all the time then it’s effective.

[–] [email protected] 45 points 11 months ago (2 children)

I’m dead 💀

[–] [email protected] 1 points 11 months ago (8 children)

Yknow I’m talking about social media platforms, right?

Frothing at the mouth raging at someone on a social media platform doesn’t do anything but cause more radicalization, so I just ignore people instead. I don’t spend most of my life fighting with people on the internet over politics.

[–] [email protected] 2 points 11 months ago

Indexing tools like llamaindex use LLM-generated embeddings to “intelligently” search for documents similar to a search query.

Those documents are usually fed into an LLM as part of the prompt (i.e., the context).
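A minimal sketch of that retrieval step, using toy hand-written vectors in place of real model-generated embeddings (document names and numbers here are made up for illustration):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "embeddings" -- in a real system an embedding model produces these.
index = {
    "lease termination clause": [0.9, 0.1, 0.0],
    "security deposit rules": [0.1, 0.9, 0.1],
}

def retrieve(query_vec, top_k=1):
    # Rank stored documents by similarity to the query embedding.
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]), reverse=True)
    return ranked[:top_k]

# The retrieved text is then pasted into the LLM prompt as context.
docs = retrieve([0.88, 0.12, 0.05])
prompt = "Context:\n" + "\n".join(docs) + "\n\nQuestion: Can I end my lease early?"
```

Libraries like llamaindex wrap this same loop (embed, rank by similarity, stuff the winners into the prompt) with real vector stores and embedding models.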

[–] [email protected] 3 points 11 months ago

Hey yknow that’s a good point.

[–] [email protected] 5 points 11 months ago* (last edited 11 months ago) (2 children)

Yes. You can craft your prompt so that, if the LLM doesn’t know about a referenced legal document, it will ask for it; you can then paste the relevant section of that document into the prompt to provide it with that information.
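A sketch of that prompting pattern; the instruction text and document names are hypothetical examples, not a fixed API:

```python
# Tell the model to ask for missing documents instead of guessing.
INSTRUCTIONS = (
    "You translate legal documents into plain English. "
    "If a referenced legal document is not included below, do not guess; "
    "reply exactly: NEED_DOCUMENT: <name of the document>"
)

def build_prompt(question, documents):
    # Paste each provided document into the prompt as context.
    context = "\n\n".join(f"--- {name} ---\n{text}" for name, text in documents.items())
    return f"{INSTRUCTIONS}\n\n{context}\n\nQuestion: {question}"

prompt = build_prompt(
    "What does clause 4(b) of my lease mean?",
    {"Lease, clause 4(b)": "Tenant shall not sublet the premises without prior written consent."},
)
```

If the model replies with a `NEED_DOCUMENT:` line, you paste the named section in and re-run the prompt.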

I’d encourage you to look up some info on prompting LLMs and LLM context.

They’re powerful tools, so it’s good to really learn how to use them, especially for important applications like legalese translators and rent negotiators.

[–] [email protected] 9 points 11 months ago* (last edited 11 months ago) (5 children)

Generally, training an LLM is a bad way to provide it with information. “In-context learning” is probably what you’re looking for: basically, just pasting relevant info and documents into your prompt.

You might try fine-tuning an existing model on a large dataset of legalese, but then it’ll be more likely to generate responses that sound like legalese, which defeats the purpose.

TL;DR: Use in-context learning to provide information to an LLM. Use training and fine-tuning to change how the language the LLM generates sounds.
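One way to picture the in-context side of that TL;DR: few-shot examples in the prompt steer how the output sounds without any training step (all strings here are made-up illustrations):

```python
# Few-shot in-context examples steer the style of the output without
# touching the model's weights; only training/fine-tuning changes those.
FEW_SHOT = (
    "Translate legalese into plain English.\n\n"
    "Legalese: The party of the first part shall indemnify the party of the second part.\n"
    "Plain: You agree to cover the other side's losses.\n\n"
)

def few_shot_prompt(legalese):
    # New information and style examples both travel inside the prompt.
    return FEW_SHOT + f"Legalese: {legalese}\nPlain:"

prompt = few_shot_prompt("Lessee shall surrender the premises in broom-clean condition.")
```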

[–] [email protected] 3 points 11 months ago* (last edited 11 months ago) (1 children)

You didn’t present any ideas or solutions to argue against. There’s no argument happening here.

Nor are there strawmen because there’s no argument being made.

You said that there’s generally a lack of imagination with regards to this stuff and I was just sharing my opinions as to why.


I get that Meta is evil, but aren’t we just blocking its users from accessing the wider fediverse?
