this post was submitted on 04 Jan 2024
24 points (100.0% liked)

Programming

cross-posted from: https://programming.dev/post/8121843

~n (@[email protected]) writes:

This is fine...

"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
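
As a rough illustration of the kind of vulnerability the study is talking about (a generic Python sketch, not code from the paper), compare string-built SQL with a parameterized query:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Insecure pattern: building SQL by string interpolation lets a
    # crafted username inject arbitrary SQL.
    # query = f"SELECT * FROM users WHERE name = '{username}'"
    # return conn.execute(query).fetchone()

    # Safer pattern: a parameterized query keeps the input as data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchone()
```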

[–] [email protected] 16 points 10 months ago

This isn't even a debate lol..

Stuff like Copilot is great at producing code that looks right but contains subtly wrong variable names it has invented itself, or bad algorithms.
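
A hypothetical sketch (not real Copilot output) of what that looks like in practice: the names are plausible, the code runs, and the result is quietly wrong:

```python
def average_order_value(orders):
    """Return the mean 'value' across a list of order dicts."""
    total = 0.0
    for order in orders:
        total += order["value"]
    # The kind of subtle bug described above: dividing by len(order)
    # (the last dict from the loop) instead of len(orders). It reads
    # fine at a glance, runs without errors, and returns nonsense.
    return total / len(order)
```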

And that's not the big issue.

The big issue is when you get distracted for five minutes, come back, and forget that you were partway through checking that block of AI-generated code (which looks correct). You skip reviewing the rest of it, it lands in the source code before testing, and only later do you realise it's broken because it's AI-generated code.

The other big issue is that it's only a matter of time until people get fed up and start feeding these systems dodgy data to poison them, making them worse or slipping in backdoors.