this post was submitted on 07 Jun 2024
559 points (99.5% liked)

Technology

[–] [email protected] 10 points 6 months ago (2 children)

That works until someone uses it for a little more than boilerplate and the reviewer nods that bit through, because it's hard to review and not the kind of mistake the person who "wrote" it would normally make.

Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.
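To make that marking concrete, here's a minimal sketch of what an automated check could look like. The `ai-generated` comment marker is a hypothetical team convention, not an established standard; the idea is just that flagged lines can be routed for extra review scrutiny.

```python
# Hypothetical sketch: flag source lines whose comments carry an
# "ai-generated" marker, so a reviewer knows which hunks to scrutinize.
# The marker convention itself is an assumption for illustration.

MARKER = "ai-generated"

def flag_ai_lines(source: str) -> list[int]:
    """Return 1-based line numbers whose comments contain the marker."""
    flagged = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Only look at the comment part of the line, not code or strings.
        comment = line.partition("#")[2]
        if MARKER in comment.lower():
            flagged.append(lineno)
    return flagged

example = "x = 1\ny = 2  # AI-generated: review carefully\n"
print(flag_ai_lines(example))  # → [2]
```

A check like this could run in CI or as a pre-commit hook, but of course it only helps if people actually apply the marker in the first place, which is exactly the weak point being discussed.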

[–] [email protected] 6 points 6 months ago

Unless all the AI-generated code is explicitly marked as AI-generated, this approach will go wrong eventually.

Undoubtedly. Hell, even when you do mark it as such, this will still happen, because bugs written by humans get deployed too.

Basically what you're saying is that code review is not a guarantee against shipping bugs.

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago)

Agreed: using LLMs for code requires you to be an experienced dev who can understand what they puke out, and for that narrow, disciplined group it's a net positive.

However, generally, I agree it's more risk than it's worth.