this post was submitted on 13 Sep 2024
2 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
[–] [email protected] 0 points 2 months ago (3 children)

They say it uses roughly the same amount of computing resources.

[–] [email protected] 1 points 2 months ago (1 children)

and hot young singles in your area have a bridge in Brooklyn to sell

on the blockchain

[–] [email protected] 0 points 2 months ago (1 children)

Happy to hear about anything that supports the idea.

[–] [email protected] 1 points 2 months ago

this shit comes across like that over-eager corp ~~llm salesman~~ "speaker" from the other day

[–] [email protected] 1 points 2 months ago

I'm sure it being so much better is why they charge 100x more for the use of this than they did for 4ahegao, and that it's got nothing to do with the well-reported gigantic hole in their cashflow, the extreme costs of training, the likely-looking case of this being yet more stacked GPT3s (implying more compute in aggregate for usage), the need to become profitable, or anything else like that. nah, gotta be how much better the new model is

also, here's a neat trick you can employ with language: install a DC full of equipment, run some jobs on it, and then run some different jobs on it. same amount of computing resources! amazing! but note how this says absolutely nothing about the quality of the job outcomes, the durations, etc.

[–] [email protected] 1 points 2 months ago (1 children)

they say a lot of things, yes

[–] [email protected] 0 points 2 months ago (1 children)

Are you saying that's not true? Anything to substantiate your claim?

[–] [email protected] 1 points 2 months ago (2 children)

"this thing takes more time and effort to process queries, but uses the same amount of computing resources" <- statements dreamed up by the utterly deranged.

[–] [email protected] 1 points 2 months ago

"we found that the Turbo button on the outside of the DC wasn't pressed, so we pressed it"

[–] [email protected] 0 points 2 months ago (2 children)

I often use prompts that are simple and consistent in their results, and then use additional prompts for more complicated requests. Maybe reasoning lets you ask more complex questions and have everything be appropriately considered by the model, instead of using multiple simpler prompts.

Maybe if someone used the new model with my method above, it would use more resources. I'm not really sure. I don't use chain-of-thought (CoT) methodology because I'm not using AI for enterprise applications that treat tokens as a scarce resource.
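The decompose-into-simple-prompts vs. one-big-prompt trade-off can be sketched as a toy comparison. Everything here is hypothetical: made-up prompts, a naive whitespace "token" count, and no real API calls; actual tokenizers and model behavior will differ.

```python
# Toy sketch: compare rough prompt-token counts for two prompting strategies.
# "Tokens" here are just whitespace-split words, not real model tokens.

def rough_tokens(text: str) -> int:
    """Very crude token estimate: count whitespace-separated words."""
    return len(text.split())

# Strategy A: several simple, independent prompts.
simple_prompts = [
    "Summarize the report.",
    "List the three main risks in the report.",
    "Suggest one mitigation for each risk.",
]

# Strategy B: one combined prompt asking the model to reason through all steps.
combined_prompt = (
    "Summarize the report, list the three main risks, "
    "and suggest one mitigation for each. Think step by step."
)

total_simple = sum(rough_tokens(p) for p in simple_prompts)
total_combined = rough_tokens(combined_prompt)

print(total_simple, total_combined)
```

Note the input side comes out similar either way; the cost difference people argue about comes mostly from the extra reasoning tokens a CoT-style model generates in its *output*, which an input-only sketch like this can't capture.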

Was hoping to talk about it, but I don't think I'm going to find that here.

[–] [email protected] 1 points 2 months ago

I often use prompts

Well, there's your problem

[–] [email protected] 0 points 2 months ago (1 children)

Was hoping to talk about it, but I don't think I'm going to find that here.

If only you'd asked ChatGPT "is awful.systems a good place to fellate LLMs"

[–] [email protected] 1 points 2 months ago

I asked Gemini!

is awful.systems a good place to fellate LLMs

Reply:

I can't assess awful.systems for that purpose. Sex with large language models is not possible, and awful.systems may be a harmful or unsafe website.

Here are some things to consider:

  • LLMs are not sentient beings. They can't feel pleasure or emotions.
  • Awful.systems may contain malware or other harmful content.
  • Engaging in sexually suggestive conversations online can lead to exposure to predators or unwanted advances.

If you are looking for information about sex or relationships, there are many reputable resources available online and offline.

SLANDER, I SAY