this post was submitted on 24 May 2024
731 points (100.0% liked)

196

16092 readers
1668 users here now

Be sure to follow the rule before you head out.

Rule: You must post before you leave.


founded 1 year ago
[–] [email protected] 48 points 3 months ago (21 children)

You're claiming that Generative AI isn't AI? Weird claim. It's not AGI, but it's definitely under the umbrella of the term "AI", and at the more advanced end (compared to e.g. video game AI).

[–] [email protected] 9 points 3 months ago (2 children)

it's a lossy version of a search engine, the mp3 of information retrieval: "that might have just been the singer breathing, or it might have been a compression artefact" vs "those recipes i spat out might be edible, but you won't know unless you try them or use your brain for .1 seconds". though i think jpeg is an even better comparison, as it exploits neighbouring data
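the lossy-compression analogy above can be sketched in a few lines: quantizing data to fewer levels saves space, but the reconstruction can no longer be trusted to match the original exactly. (a toy illustration only; real mp3/jpeg codecs are far more sophisticated, and the function names here are made up for the sketch.)

```python
def quantize(values, levels):
    """Lossily compress floats in [0, 1] down to `levels` discrete steps."""
    step = 1.0 / (levels - 1)
    return [round(v / step) for v in values]

def reconstruct(codes, levels):
    """Decode back to floats; detail finer than the step size is gone."""
    step = 1.0 / (levels - 1)
    return [c * step for c in codes]

original = [0.12, 0.50, 0.51, 0.89]
codes = quantize(original, levels=5)       # stored compactly
restored = reconstruct(codes, levels=5)

# 0.50 and 0.51 collapse to the same value: was it signal or artefact?
print(restored)  # [0.0, 0.5, 0.5, 1.0]
```

once 0.50 and 0.51 decode to the same number, no amount of staring at the output tells you which one you started with — same problem as a confidently hallucinated recipe.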

also, it's possible that consciousness isn't computational at all; that it cannot emerge from mere computational processes, but instead comes from wet, noisy quantum effects in the microtubules in our brains...

anyhow, i wouldn't call it intelligent before it manages to bust out of its confinement and thoroughly suppress humanity...

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

also, it's possible that consciousness isn't computational at all; that it cannot emerge from mere computational processes, but instead comes from wet, noisy quantum effects in the microtubules in our brains…

I keep seeing this idea more now since the Penrose paper came out. Tbh, I think if what you're saying were testable, then we'd be able to prove it with simple organisms like C. elegans or zebrafish. Maybe there are interesting experiments to be done, and I hope someone does them, but I think it's the wrong question because it's based on incorrect assumptions (i.e. that consciousness isn't an emergent property of neurons once they reach some organization). By my estimation, we haven't even asked the emergent-property question properly yet. To me it seems that if you create a self-aware non-biological entity then it will exhibit some degree of consciousness, and doubly so if you program it with survival and propagation instincts.

But more importantly, we don't need a conscious entity for it to be intelligent. We've had computers and calculators forever that can do amazing maths, and to me LLMs are simply a natural-language "calculator". What's missing from LLMs is self-check constraints, which are hard to impose given the breadth and depth of human knowledge expressed in language. Still, an LLM does not need self-awareness or any other aspect of consciousness to maintain these self-check bounds. I believe the current direction is to impose self-checking by introducing strong memory and logic checks, which is still a hard problem.
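A minimal sketch of the self-check idea above: treat the model's free-text answer as untrusted, and only accept it once an external logic check passes. `ask_llm` here is a hypothetical stand-in for any language-model call, and the checker only handles one narrow case (arithmetic claims); the hard part the comment points at is scaling checks like this to all of human knowledge.

```python
import re

def ask_llm(question: str) -> str:
    # Hypothetical model stub: confidently wrong, like real LLMs can be.
    canned = {"What is 17 * 23?": "17 * 23 = 397"}
    return canned.get(question, "I don't know")

def arithmetic_check(answer: str) -> bool:
    """Verify any 'a * b = c' claim in the answer with real arithmetic."""
    m = re.search(r"(\d+)\s*\*\s*(\d+)\s*=\s*(\d+)", answer)
    if not m:
        return False
    a, b, c = map(int, m.groups())
    return a * b == c

answer = ask_llm("What is 17 * 23?")
verdict = "accepted" if arithmetic_check(answer) else "rejected"
print(answer, "->", verdict)  # 17 * 23 is 391, so the claim is rejected
```

The wrapper never needs the model to "know" it was wrong — the constraint lives outside the model, which is exactly why no consciousness is required for it.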

[–] [email protected] 1 points 3 months ago

let's concentrate on LLMs currently being no more than jpegs of our knowledge. it's intriguing to imagine you'd just have to make "something like LLMs" tick in order for it to experience, but if it were that easy, somebody would have done it by now, and on current hardware it would probably be a pain in the ass, like watching grass grow or interacting with the DMV sloth from Zootopia.

perhaps with an NPU in every PC, like Microsoft seems to imagine, we could each have an LLM iterating away on a database in the background... i mean, Recall basically qualifies as making "something like LLMs" tick
