It just feels too good to be true.

I'm currently using it for formatting technical texts and it's amazing. It can't generate them properly from scratch, but if I give it the bulk of the info it makes them pretty af.

Also just talking to it and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I'm volunteering my personal problems and innermost thoughts to a company that will misuse that.

Are these concerns valid?

[email protected] 23 points 1 year ago

It not being conscious or self-aware. It's just putting words together that don't necessarily have any meaning. It can simulate language, but meaning is a lot more complex than putting the right words in the right places.

I'd also be VERY surprised if it isn't harvesting people's data in the exact way you've described.

[email protected] 6 points 1 year ago

you don't need to be surprised; it's written pretty plainly in their ToS that anything you write to ChatGPT will be used to train it.

nothing you write in that chat is private.

[email protected] 4 points 1 year ago

> It not being conscious or self-aware.

That's correct, its whole experience is limited to a ~2,000-word text prompt (that includes your questions as well as previous answers). Everything else is a static model with a bit of randomness sprinkled in so it doesn't just repeat itself. It doesn't learn. It doesn't have long-term memory. Every new conversation starts from scratch.
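To make that concrete, here's a rough sketch of what a chat client does under the hood (this assumes the OpenAI Python library; treat the exact method names as illustrative rather than gospel). The model itself keeps no state between requests, so the client has to store the transcript and resend all of it on every turn:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model remembers nothing between calls; the "memory" is just this list,
# which gets resent in full (and eventually truncated) on every turn.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,    # the entire conversation so far, every single time
        temperature=0.7,     # the "bit of randomness" so it doesn't just repeat
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

Once the transcript outgrows the context window, older messages simply get dropped, which is exactly why it has no long-term memory.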

User data might be used to fine-tune future models, but it has no relevance for the current one.

> It’s just putting words together that don’t necessarily have any meaning. It can simulate language but meaning is a lot more complex than putting the right words in the right places.

This is just wrong, despite being frequently parroted. It obviously understands a lot; having a little bit of conversation with it should make that very clear. You can't generate language without understanding the meaning; people have tried before and never got very far. The only problem it has is that its understanding is only of language: it doesn't know how language relates to other sensory inputs (GPT-4 has a bit of image stuff built in, but it's all still a work in progress). So don't ask it to draw pictures or graphs, the results won't be any good.

That said, it's surprising how much knowledge it can extract just from text alone.