When a compiler takes my human-readable code and converts it into executable machine code, that's AI technology
When the machine executes code, that's AI
Lol I really doubt that.
I wouldn't be surprised if it's technically true, but it's more like: the coder starts writing a line of code, the AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code, by character count, that the AI autocompleted is 25% of all new code. Like, same shit as GitHub's Copilot that came out years ago, nothing special at all.
Every bit of that 25% of "AI-generated" code was heavily prompted and carefully crafted by humans, every single time, to ensure it actually works.
It's such a purposeful misrepresentation of labour (even though the coders themselves all want to automate away and exploit the rest of the working class too)
> the coder starts writing a line of code, the AI autocompletes the rest of the line, and then the coder has to revise and usually edit it. And the amount of code, by character count, that the AI autocompleted is 25% of all new code.
When you dig past the clickbait articles and find out what he actually said, you're correct. He's jerking himself off about how good his company's internal autocomplete is.
I'm not going to read it but I bet it's nowhere near as good as he thinks it really is
I wouldn't be surprised if the statistics on "AI-generated code" were like: I type 10 characters, I let the AI autocomplete the next 40 characters, but then I have to edit 20 of those characters, and the tool counts all 40 characters as "AI generated" since that's what was accepted.
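Put that accounting in code and the inflation is obvious. A minimal sketch, assuming a counter that attributes every accepted character to the AI; the variable names and numbers are made up, taken straight from the comment above, not from any real telemetry:

```typescript
// Hypothetical acceptance-based attribution, not Google's actual metric.
const typedByHuman = 10;    // characters the coder typed themselves
const acceptedFromAI = 40;  // characters of the accepted completion
const editedAfterward = 20; // accepted characters the coder then rewrote

// What a tool that counts accepted characters would report:
const reported = acceptedFromAI / (typedByHuman + acceptedFromAI);
console.log(`reported "AI-generated": ${(reported * 100).toFixed(0)}%`); // 80%

// What actually survives human editing:
const surviving =
  (acceptedFromAI - editedAfterward) / (typedByHuman + acceptedFromAI);
console.log(`surviving AI characters: ${(surviving * 100).toFixed(0)}%`); // 40%
```

Same keystrokes, and the headline number doubles depending on whether you count what was accepted or what was kept.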
Not to mention, since it's probably all trained on their own internal codebase, which follows a fixed coding style guide, it'd probably perform way worse for general coding, where people aren't all writing code with the exact same patterns, guidelines, and libraries.
I assume that's what it is as well. I'm guessing there's also a lot of boilerplate stuff, and the line counts are inflated by pointless comments and function-comment templates that usually have to get fully rewritten.
He is probably just lying.
Yeah this is for investors
I write minified JavaScript and the AI pretty-prints it with 8-space indentation. That’s well over 25% by weight.
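For the record, the joke checks out. A toy example, where the percentage is just the relative lengths of these two strings and nothing deeper:

```typescript
// The same one-liner, minified vs. pretty-printed with 8-space indentation.
const minified = "function add(a,b){return a+b}";
const prettyPrinted = [
  "function add(a, b) {",
  "        return a + b;",
  "}",
].join("\n");

// By character weight, the formatter "wrote" about a third of the code.
const addedWeight = 1 - minified.length / prettyPrinted.length;
console.log(`${(addedWeight * 100).toFixed(0)}% of the bytes came from the formatter`); // 34%
```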
Is that why search results are getting so much worse so fast and it tells you to eat rocks to stay healthy?
Lol, I'd bet 90% of that is of the same quality as the code you get when you measure productivity in lines written.
Another 9% is likely stolen.
The final 1% won't even compile, doesn't work right, or needs so much work you'd be better off redoing it.
The only useful result I've had with CS is asking for VERY basic programs whose quality I then have to check myself. Besides that, I had ONE question that I knew would be answered in a textbook somewhere but couldn't get a search hit for. (I think it was something about the most efficient way to insert or sort, or something like that.)
Worked with it a bit at work, and the output was so unreliable I gave up, took the best result it gave me, and hard-coded it so I could have something to show off. Left it as an "in the future..." thing, and last I heard it's still spinning in the weeds.
I often help beginners with their school programming assignments. They're often dumbfounded when I tell them "AI" is useless because they "asked it to implement quicksort and it worked perfectly".
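For reference, this is roughly the textbook quicksort such tools tend to spit out; it's a sketch, not any particular model's output. It does sort correctly, which is exactly the trap: "it worked perfectly" hides that a first-element pivot goes quadratic and blows the recursion depth on already-sorted input.

```typescript
// Textbook quicksort: correct on the happy path, but O(n^2) and deeply
// recursive on already-sorted input because the pivot is always xs[0].
function quicksort(xs: number[]): number[] {
  if (xs.length <= 1) return xs;
  const [pivot, ...rest] = xs;
  const smaller = rest.filter((x) => x < pivot);
  const larger = rest.filter((x) => x >= pivot);
  return [...quicksort(smaller), pivot, ...quicksort(larger)];
}

console.log(quicksort([3, 1, 4, 1, 5, 9, 2, 6])); // [ 1, 1, 2, 3, 4, 5, 6, 9 ]
// Looks perfect. Now try quicksort(Array.from({ length: 1e5 }, (_, i) => i))
// and watch the call stack die.
```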
The next batch of software engineers is going to have huge dependency problems.
In what kind of workflow? Because if I start typing, my copilot generates 20 lines, and I edit those 20 lines down to 5 that will actually compile and bear little resemblance to what was generated, I feel like that should count as 0 AI lines, but I have a feeling it counts for more.
When Dependabot makes a pull request, that's AI
SCoC (source characters of code) is the only measurement worse than SLoC (source lines of code) that I can think of
Riiiight. And I bet he'd tell you that 25% of their servers were powered by cold fusion if it were the newest thing that got investors to throw bags of money at them.
> We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.
When text editors automatically create templates for boilerplate, that's AI.
Stfu and bring back cached websites.