this post was submitted on 13 Nov 2023
Technology
What do you do when ChatGPT just makes shit up or answers a yes-or-no question incorrectly? You'd have no way of knowing it was wrong.
ChatGPT is most useful when you may not know the right answer, but you know a wrong answer when you see one. It's very useful for technical issues. Much quicker for troubleshooting than searching page after page for a solution.
It's actually great at troubleshooting Linux stuff, weirdly enough lol
Web/full-stack development?
Yeah, that makes sense. The success rate might fall off a cliff in more complex software projects, e.g. applications whose designs go beyond 10 UML boxes, with hundreds of thousands of lines, especially ones not written in JS/Python.
Can you post the app?
While this is an important thing to understand about AI, it's an overstated issue once understood. For most questions I ask AI, it doesn't matter if it's correct, as long as it pulls up some half-useful info to get me on track (e.g. programming). For other questions, I only ask it when I need to figure out where to look next, which it will usually do just fine.
The first page of my search results is all AI-generated garbage articles anyway; at least I know what I'm getting with GPT and can take it as such.
Yup, as long as you are aware that it could be wrong and look at it critically, LLMs at GPT scale are very useful tools. The best way I've heard it described is as having a lightning-fast intern who often gets things wrong but will always give it a go.
So long as you're calibrated to "how might this be wrong?" when looking at the results, it is exceptionally useful.
Not the other commenter:
I usually have an idea about the thing I'm asking, and if not, I'll look up the topics it mentions after some guided brainstorming.
I've also found that asking the same question again, after resetting the chat, can give you an idea of whether it actually knows the answer.
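That re-asking trick is basically a self-consistency check: query the model several times in fresh sessions and see whether the answers agree. A minimal sketch, with a stubbed `ask` function standing in for a real chat API call (the function names here are hypothetical, not any particular library's API):

```python
from collections import Counter

def self_consistency(ask, question, n=5):
    """Ask the same question n times (each call simulating a fresh chat)
    and tally the answers. Wildly varying answers suggest the model is
    guessing; consistent answers are weak evidence it isn't."""
    answers = [ask(question) for _ in range(n)]
    return Counter(answers).most_common()

# Stub standing in for a real LLM call; a real version would
# start a new conversation for each request.
def stub_model(question):
    return "42"

print(self_consistency(stub_model, "What is 6 * 7?"))
# Output: [('42', 5)] — all five "sessions" agree
```

Agreement doesn't prove correctness (the model can be consistently wrong), but disagreement across resets is a cheap red flag.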
Bing AI provides references in its "More Precise" mode.