this post was submitted on 30 Aug 2023
225 points (95.9% liked)

Technology

[–] [email protected] 5 points 1 year ago (3 children)

Have you tried 3.5 or 4?

I haven't had many issues with 4. Occasionally it does what you're saying, and I just say "bro, that doesn't exist," and it's like "oh, my bad, here you go" and gives me something that works.

[–] [email protected] 1 points 1 year ago (1 children)

I don't remember which version. I just gave up trying.

[–] [email protected] 0 points 1 year ago (1 children)

Well, don't expect it to give magical results without learning prompt engineering and understanding the tools you're working with.

[–] [email protected] 2 points 1 year ago

Set-MailboxAddressBook doesn't exist.

Set-ADAttribute doesn't exist.

Asking for a simple command and expecting to receive something that actually exists is magical?
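For what it's worth, the functionality those fake cmdlets imply does exist under different names. A hedged sketch of what the real ActiveDirectory and Exchange cmdlets look like (the identity and values here are made up for illustration):

```powershell
# Quick sanity check before trusting a suggested cmdlet —
# this returns nothing, because the cmdlet doesn't exist:
Get-Command Set-ADAttribute -ErrorAction SilentlyContinue

# AD attributes are actually set via Set-ADUser (ActiveDirectory module):
Set-ADUser -Identity "jdoe" -Replace @{title = "Engineer"}

# Address book assignment is a parameter on Set-Mailbox (Exchange module):
Set-Mailbox -Identity "jdoe" -AddressBookPolicy "AllStaff ABP"
```

The pattern it hallucinated (one cmdlet per attribute) is plausible-sounding but not how those modules are designed.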

[–] [email protected] 1 points 1 year ago (2 children)

Just yesterday I had 4 make up a Jinja filter that didn't exist. I told it that and it returned something new that also didn't work but had the same basic form. 4 sucks now for anything that I'd like to be accurate.
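The only reliable antidote I've found is mechanically checking every name it suggests against the actual library before using it. A minimal sketch of that habit in Python (`dump_pretty` is a deliberately fake name, standing in for the kind of helper an LLM invents):

```python
import importlib


def api_exists(module_name: str, attr: str) -> bool:
    """Return True only if the module imports and actually exposes the attribute."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)


# Real function: exists
print(api_exists("json", "dumps"))        # True
# Hallucinated helper: does not exist
print(api_exists("json", "dump_pretty"))  # False
```

Tedious, but faster than debugging a stack trace from an API call that was never real.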

[–] [email protected] 1 points 1 year ago

Both models have definitely decreased in quality over time.

[–] [email protected] 0 points 1 year ago (1 children)

What kind of prompts are you giving?

I find results can be improved quite easily with better prompt engineering.

[–] [email protected] 0 points 1 year ago (1 children)

It makes things up out of whole cloth and it's the user's fault for not prompting it correctly? Come on.

[–] [email protected] 0 points 1 year ago

It's not a person. It's a tool.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

I used GPT-4 for Terraform and it was kind of all over the place, leaning on fully deprecated methods. It felt like a nice jumping-off point, but honestly it probably would've been less work to just write it from the docs in the first place.
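One concrete example of the kind of deprecation it trips over (the resource and file names here are made up for illustration): it kept suggesting the archived `template_file` data source instead of the built-in `templatefile()` function that replaced it in Terraform 0.12:

```hcl
# Deprecated pattern the model keeps suggesting (template provider, archived):
# data "template_file" "init" {
#   template = file("init.tpl")
#   vars     = { port = 8080 }
# }

# Current approach: the built-in templatefile() function
resource "aws_instance" "web" {
  # ...
  user_data = templatefile("${path.module}/init.tpl", { port = 8080 })
}
```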

I can definitely see how it could help someone fumble through it and come up with something working without knowing what to look for though.

I was also having weird issues with it truncating outputs and needing to split them, but even telling it to split would cause it to kind of stall.