this post was submitted on 17 Sep 2023
25 points (79.1% liked)

Futurology

all 21 comments
[–] [email protected] 13 points 1 year ago* (last edited 1 year ago) (2 children)

Am I some old fuck at the ripe age of 30 who hasn't needed to use AI so far? The whole field seems like an astroturfing campaign insisting that you definitely need to incorporate AI tools into your day-to-day work.

The time it takes for me to prompt AI tools is longer than the time it takes for me to do the work myself.

[–] [email protected] 2 points 1 year ago

Meh, whole classes of jobs were already decimated by AI a decade ago. This isn't new, it's just gaining attention now, and the ball keeps picking up speed. AI improves a lot faster than people do; unless it hits some kind of fundamental wall of development, it's just a matter of time before it comes for us all.

[–] [email protected] 1 points 1 year ago

Depends on the work you're doing.

[–] [email protected] 9 points 1 year ago

This was a fascinating article, and I enjoyed their summary of research into how even current AI can impact the way people work, for the better. However, I can almost guarantee that most companies, especially the highly wealthy ones, won't be using AI in the way the author suggests.

I'm going to speculate that we'll start to see a bell curve, where small startups use AI to replace workers due to the cost of bringing on new team members, medium-sized companies use it to augment their employees' output, and large companies lay off workers and replace them with AI; the latter is already happening.

Why?

Because the same pattern seems to be present when talking about company morality and ethics. While many smaller companies and startups pledge to improve workers' rights, shrink their carbon footprint, improve customer relations, and/or increase the quality of their products, they typically don't have the capital to truly commit to these values.

Medium-sized companies tend to have the capital to fully commit to the values laid out when they were smaller, while not yet being large enough to experience the full force of capitalistic greed.

Finally, large companies have the capital to maintain their stated values, but often discover that those values run contrary to the ones held by their shareholders and board of directors (namely, that greed is good and infinite growth must be pursued). Additionally, many of those companies are reaching full market saturation (if they haven't already achieved it) and find that they have to begin sacrificing their values in exchange for those dictated to them by their board. The result is that they tend to be all talk, little action.

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (2 children)

Oh yeah baby. Let's cram this absolute speedrun of enshittification up the arse of everything possible!

It's not AI.

[–] [email protected] 3 points 1 year ago (1 children)

Let me introduce you to Dr Angela Collier:

https://www.youtube.com/watch?v=EUrOxh_0leE

And have a very nice day.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Pretty interesting. I didn't know Dr Collier; there's quite a similar talk from a former Samsung VP (also co-creator of Siri):

Mr Luc Julia

https://www.youtube.com/watch?v=6prCHASkavM

[–] [email protected] 3 points 1 year ago (1 children)

Not that I disagree, but just to understand where you're coming from: what definition are you using for AI? And intelligence, for that matter?

Coming at this from a compsci/comp eng viewpoint, I think of it simply as "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, ..."

By that definition it absolutely exists, on our smartphones even.

Of course, in each of these areas it isn't on par with human intelligence by any stretch; often it's far more limited. But it can also be better at certain specific tasks. Most of my limited familiarity is with computer vision, but I think that illustrates how far off the mark it is from human intelligence. It is insanely difficult for machine vision to identify a thing. You can train it to identify one, a few, or maybe a small set of things.

But it is easily confused by different ambient lighting intensity or hue, shadows, objects partially obscuring the thing, and myriad other conditions.
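As a toy illustration of that brittleness (this is a deliberately naive pixel-matching "classifier" with made-up reference arrays, nothing like a real vision model), even a simple change in ambient brightness can flip the answer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are tiny 8x8 grayscale reference images:
# a dark-furred cat and a light-furred dog.
references = {
    "cat": np.full((8, 8), 0.2) + rng.normal(0, 0.05, (8, 8)),
    "dog": np.full((8, 8), 0.8) + rng.normal(0, 0.05, (8, 8)),
}

def classify(image):
    # Label the image by smallest raw pixel-wise distance to a reference.
    return min(references, key=lambda label: np.linalg.norm(image - references[label]))

cat_photo = references["cat"] + rng.normal(0, 0.02, (8, 8))
print(classify(cat_photo))        # "cat"

# Same scene, photographed under much brighter ambient light.
brighter_cat = cat_photo + 0.5
print(classify(brighter_cat))     # now lands closer to "dog" in raw pixel space
```

Modern models are far more robust than this toy, but the same failure mode shows up in subtler ways.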

Meanwhile humans can identify an enormous number of objects in all sorts of conditions, easy-peasy. By a young age even. I hadn't fully appreciated how sophisticated our abilities were until I started looking at the artificial side of it.

Anyway, all that said, to me the real issue is what new developments in AI (as I defined it) mean to society at large. How do jobs change, how does it affect quality of life and the quality of products and services, and does it change how we as a society value those things (art, writing) that can be partly replaced?

[–] [email protected] 2 points 1 year ago (1 children)

I like your definition of these AI tools. It feels broad enough to cover all of the recent accomplishments so many are praising.

Many people aren't able to recognize that the software is just a tool, and even less so as it becomes more autonomous.

[–] [email protected] 1 points 1 year ago (1 children)

I think what gets lost in translation with LLMs (and machine vision and similar ML tech) is that it isn't magic and it isn't emergent behavior. It isn't truly intelligent.

LLMs do a good job of tricking us into thinking they are more than they are. They generate a seemingly appropriate response to input based on training, but it's nothing more than a statistical model of what the most likely chain of words is in response to another chain of words, based on questions and "good" human responses.

There is no understanding behind it. No higher cognitive process. Just "what words go next based on Q&A training data." Which is why we get well-written answers that are often total bullshit.
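As a crude sketch of that "what words go next" idea (a toy two-word-context table with made-up probabilities, nowhere near a real transformer, but the same statistical flavor):

```python
import random

# Toy next-word statistics standing in for what an LLM learns from its training data.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"france": 0.5, "spain": 0.3, "atlantis": 0.2},
    ("of", "france"): {"is": 0.95, "was": 0.05},
    ("france", "is"): {"paris": 0.7, "lyon": 0.2, "nice": 0.1},
}

def generate(prompt, max_new_words=4):
    words = prompt.split()
    for _ in range(max_new_words):
        context = tuple(words[-2:])              # condition only on the last two words
        dist = next_word_probs.get(context)
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the capital"))   # e.g. "the capital of france is paris"
# ...but sampling "atlantis" is just as legitimate to the model: fluent, confident, wrong.
```

Scale that up to billions of parameters and a context of thousands of tokens and you get today's LLMs, but the underlying operation is still "pick a likely next token."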

Even so, the tech could easily upend many writing careers.

[–] [email protected] 2 points 1 year ago

I've had the GPT-3.5 model give me a made-up source for research. Either that, or it told me the source material was related to what I was researching when it wasn't. Regardless, it was one of those BS moments; it's called a hallucination, I think.

[–] [email protected] 7 points 1 year ago (1 children)

I really gotta hop on this AI train so I can increase my quality by about 2, whatever the fuck that means.

[–] [email protected] 3 points 1 year ago (2 children)

Yawn.

Wake me when any of this AI nonsense actually does anything useful.

[–] [email protected] 3 points 1 year ago

I have been able to create proof-of-concept and prototype code for several projects at work, while barely having any coding knowledge myself.

This has helped speed up projects by a fair amount, because we don't need to bog down our devs with this sort of work, and they can focus on building solutions once they've proven viable like that.

Not to mention the ability for me to automate a certain amount of my work, freeing me up for other tasks.

I also find it very useful for overcoming writer's block or organizing my thoughts to untie some mental knots. All of this can be done without giving it too much information, as well. I have found it to be tremendously useful, but I see why people don't feel the same way. In particular, the free ChatGPT 3.5 is far worse than 4, and 4 had a huge dip in quality as they implemented safeguards and slowed it down to handle computing costs. But it is picking back up now.

[–] [email protected] 2 points 1 year ago (1 children)

I mean, it can make porn... and at least pretty decent pictures if the prompts are good.

[–] [email protected] 1 points 1 year ago (1 children)

There's an AIPorn community, but those images are more creepy than anything. I don't get why people would find them hot.

[–] [email protected] 1 points 1 year ago (1 children)

I don't like such stuff either.

[–] [email protected] 2 points 1 year ago (1 children)

and yet here you are introducing it to us. hmmmm

[–] [email protected] 1 points 1 year ago (1 children)

You have already heard of it.

(and tbh it makes good Hentai)

[–] [email protected] 1 points 1 year ago

Sounds pretty new to me. Sounds niche too, actually. Lemmy itself is almost a niche.