[–] [email protected] 37 points 9 months ago (2 children)

Not necessarily. Generative AI hasn't been advancing as much as people claim, and we're getting into the "diminishing returns" phase of AI advancement. If not, then we need to switch gears in our anti-AI activism.

[–] [email protected] 12 points 9 months ago (1 children)

Yep. IMO it'll be kinda like VR. AI will sort of plateau for a while until they find a new approach, and then the hype will kick up again. But the current approach won't scale into true AI. It's just fundamentally flawed.

[–] [email protected] 6 points 9 months ago* (last edited 9 months ago) (2 children)

Hmm, idk... the only real reason VR has plateaued so hard is the high barrier to entry. The tech is fine, but there aren't that many good games because the hardware is expensive and not many people own it.

I'd argue that AI will continue to see rapid growth for a little while. The core technology behind LLMs may be plateauing, but the tech is only just now getting out into the world. People will continue to find new and creative ways to extend its usefulness and optimize what it's currently capable of.

Basically, back to the VR example: people are gonna start making "games" for it. This one's free, and everyone is hungry for it. I'm putting my money on human creativity for now...

[–] [email protected] 3 points 9 months ago

This. VR and AI are completely different beasts.

[–] [email protected] 1 points 9 months ago (1 children)

I wasn't claiming the tech was similar. But VR has had several surges in hype over the years. It'll come to the forefront for a while, then fade into the background until something happens to bring it back to people's attention.

I think AI hype will die down until someone comes up with some new way to hype it, probably through a novel approach that isn't LLMs.

[–] [email protected] 3 points 9 months ago (2 children)

I mean no offense here, but I think your take reflects how few genuinely earth-shattering innovations have happened over the last twenty years or so. I mean truly life-changing ones. Maybe the internet was the last; I'm not sure.

I'm probably too young to have an accurate idea of how often an innovation is supposed to change the world, but it really feels like we've become used to new tech that only changes life incrementally at best. How many people, if such an innovation were created, would fail to recognize it or reject it altogether? Entire generations to this day refuse to learn computer literacy, which works to their detriment on a daily or weekly basis.

Won't update their insurance because they don't want to use a computer. Don't know how to reboot a router/modem. Don't know how to change their password. Congressmen asking if Facebook/TikTok requires Internet access. Some small companies operating exclusively on fax and printed paper, copying said paper, sorting said paper, and then re-faxing it instead of automating or even just using one PC (I worked at a place like this).

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago) (1 children)

I would argue smartphones were the last game changer (the iPhone was 2007, I think). If you're privileged (on the grand scale of the world), a smartphone is a quality-of-life upgrade. But all over the planet, access to wifi combined with a super cheap smartphone lets people start businesses they otherwise wouldn't have been able to, open and manage bank accounts, etc., when that would never have been possible before.

I kind of see the logic of dismissing AI as a trend, only because pointing to each tech fad and claiming it will change the world gets old, and saying "I called it" 10 years later when it does change the world doesn't really accomplish anything.

But at the same time, ChatGPT is only a little over a year old, and I would mark its release as the beginning of public enthusiasm and attention for AI. Really good voice-recreation AI is even newer, and both are already shredding through entertainment. Calling out a "plateau" when it's only "plateaued" for a few months is a little hasty.

Edit: I know the person I replied to wasn't on the other side of this; I was just continuing the convo.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

I find myself caught between two forces on this issue. My dad is one of those tech dads who watches David Shapiro and builds his own GPTs in his free time. He is convinced that AI has (or will imminently have) the ability to replace us as workers entirely. Economically, we are not ready for that. People who don't work just don't get to have anything. Food and housing aren't even universal human rights.

The urge for me to stick my head in the sand, despite my father pushing me to learn to use AI, is very real. I don't have faith that we as a society will be able to make a good future with AI. So my only option feels like learning to build, manipulate, and wield the tool that I believe could cause enormous societal upheaval, because the alternative is to be upheaved like a modern boomer dropped in the middle of Cyberpunk's Night City.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

Yeah, the dark future is that AI takes over all the office-type jobs that just require reading and writing text, and the only jobs left are physical labor that everyone is forced into to survive, whether we need it done or not.

My hope is that once we have AI to do these "intellectual" jobs and machines to do the manual labor, there necessarily won't be enough work for everyone, and society will be forced to reckon with the fact that we have so much abundance that humans don't need to work if they don't want to. That's been true for a while, but it's been swept under the rug for a number of reasons.

[–] [email protected] 2 points 9 months ago

Definitely agree. Innovation has slowed down. Time will tell whether the 1900s were just a complete fluke or not, but personally I think at least part of the slowdown is due to the slow collapse of capitalism and democracy. It feels like we've been trudging along purely on inertia since the end of WW2, like an old beat-up car that's been patched and duct-taped together too many times. The whole system feels like it's ground to a halt, and all "new" progress is just marketing from grifters who've captured the system.

[–] [email protected] 1 points 9 months ago (1 children)

It's all about the models and the training, though. People who think ChatGPT 3.5/4 can write their legal papers get tripped up because it confabulates ("hallucinates") when it isn't thoroughly trained on a subject. If you fed every legal case from the past 150 years into a model, it would be very effective.

[–] [email protected] 3 points 9 months ago (1 children)

We don't know that it would be effective.

It would write legalese well, and it would recall important cases too, but we don't know that more data equates to being good at the task.

As an example, ChatGPT 4 can't alphabetize an arbitrary string of text.

Alphabetize the word antidisestablishmentarianism

The word "antidisestablishmentarianism" alphabetized is: "aaaaabdeehiiilmnnsstt"

It doesn't understand the task. Because it works on tokens rather than individual letters, it mathematically cannot do this task; no amount of training will let it perform it with the current LLM architecture.

We can't assume it has real intelligence; we can't assume that every task can be performed or internally represented; and we can't assume that more data equals clearly better results.
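
For contrast, here's a minimal Python sketch of the same task done the boring, deterministic way; sorting the characters directly gives the answer the model can't reliably produce:

```python
# Alphabetizing a word is trivial for ordinary code, while an LLM
# predicting tokens has no reliable mechanism for character-level sorting.
word = "antidisestablishmentarianism"
print("".join(sorted(word)))  # -> aaaabdeehiiiiilmmnnnrssssttt
```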

[–] [email protected] 1 points 9 months ago (1 children)

That’s a matter of working on the prompt interpreter.

For what I was saying, there's no assumption: models trained on more data, and more specific data, can definitely do the usual information-summary tasks more accurately. This approach is already being used to create specialized models for legal, programming, and accounting work.

[–] [email protected] 1 points 9 months ago (1 children)

You're right about information summary, and the models are getting better at that.

I guess my point is just: be careful. We assume a lot about AI's abilities, and it's objectively very impressive, but some fundamental things will always be hard or impossible for it until we discover new architectures.

[–] [email protected] 3 points 9 months ago

I agree that while it's powerful and the capabilities are novel, it's more limited than many think. Some people believe current "AI" systems/models can do just about anything, like legal briefs or entire working programs in any language. The truth and accuracy flaws necessitate some serious rethinking. There are major flaws, like your example above, when you try to do something like simple arithmetic, since the system is not really thinking about it.