this post was submitted on 16 Sep 2024

Asklemmy

all 33 comments
[–] [email protected] 61 points 1 month ago (1 children)

You can't turn a spicy autocorrect into anything even remotely close to Jarvis.

[–] [email protected] 4 points 1 month ago (1 children)

It's not autocorrect, it's a text predictor. So I'd say you could definitely get close to JARVIS, especially when we don't even know why it works yet.

[–] [email protected] 15 points 1 month ago (1 children)

You're just being pedantic. Most autocorrects/keyboard autocompletes make use of text predictors to function. Look at the 3 suggestions on your phone keyboard whenever you type. That's also a text predictor (granted it's a much simpler one).
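
For what it's worth, the keyboard kind of text predictor can be sketched in a few lines. This is a toy bigram model (the corpus is made up for illustration) doing the same job as those three suggestions, just with counting instead of a neural network:

```python
from collections import Counter, defaultdict

# Toy bigram predictor: count which word follows which, then suggest the
# most frequent successors -- the idea behind simple keyboard suggestions.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def suggest(word, k=3):
    """Return up to k most likely next words after `word`."""
    return [w for w, _ in successors[word].most_common(k)]

print(suggest("the"))  # ['cat', 'mat', 'fish']
```

An LLM replaces the counting with a learned function over long contexts, but both are, at bottom, next-token predictors.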

Text predictors (obviously) predict text, and as such don't have any actual understanding of the text they are outputting. An AI that doesn't understand its own outputs isn't going to achieve anything close to a sci-fi depiction of an AI assistant.

It's also not like the devs are confused about why LLMs work. If you had every publicly uploaded sentence since the creation of the Internet as a training reference, I would hope the resulting model would be a pretty good autocomplete, even to the point of being able to answer some questions.

[–] [email protected] 21 points 1 month ago (2 children)

Not going to go into details due to confidentiality, but I was recently involved in an initiative to use AI to scan education databases and identify students who may be at risk of dropping out, with the goal of providing an early safety net for these folks, and also of raising the schools' retention rates, leading to better outcomes overall.

So yes, AI can absolutely be used for good.

[–] [email protected] 9 points 1 month ago

I trained an ANN back in 2012 to trade bitcoin for me on mtgox. It performed quite a bit better than just HODLing until mtgox happened.

Now I live in a van down by the river.

[–] [email protected] 2 points 1 month ago

I assume your project wasn't based on ChatGPT? It feels like a lot of the AI hate is directed at ChatGPT and its current hype wave.

[–] [email protected] 19 points 1 month ago (1 children)

It's a capitalist invention and, therefore, will be used for whatever capitalists deem it profitable to be. Once the money for AI home assistants starts rolling in, then you'll see it adopted for that purpose.

[–] [email protected] 13 points 1 month ago (1 children)

Training good models requires lots of training data and computational resources, so the only ones who can afford to train them are big corporations with access to both. And the only objective they have is to increase their profit.

[–] [email protected] 2 points 1 month ago

Well, as long as we ensure training data needs to be paid for and can't just be scraped from the web, we will ensure that only large corporations with deep pockets can train models.

That is the reason there is a big "grassroots" push to stop AI from training on all our web content: it's a play to ensure no small players can make AI, and that AI is dominated by a few big players.

[–] [email protected] 13 points 1 month ago

Any tool, in human hands, will be used for evil. The problem is humans.

[–] [email protected] 11 points 1 month ago

To answer your question, I like to use this adage, "Technology is neither good nor bad; nor is it neutral." - Melvin Kranzberg

I also like to tie in: 'A hammer can be used to build a house or to destroy one. It depends on the user.'

https://en.wikipedia.org/wiki/Law_of_the_instrument

[–] [email protected] 8 points 1 month ago

Lots of technologies could be used to improve things, but corporations just look at profit, not at improving the human condition. Just like Ford patenting a system to listen to you in the car and serve you better ads, AI will trend toward making more ad sales, and models will always lean that way. That is why open source is so important: it's unpaid or low-paid people doing cool stuff to solve actual problems, innovating toward solving rather than monetizing. Windows 11, for example, is ad bloatware. Think of the tech and money MS could leverage, and instead they build an ad OS that they are now backporting to Windows 10.

Meanwhile, open source devs built a Linux distro that turned my 13-year-old laptop (which choked and died running W10, though it was OK on W7) into a peppy machine that handles web streaming, Zoom calls, and opening files as fast as a brand-new laptop. When money is not the end goal, lots of good things happen.

[–] [email protected] 5 points 1 month ago

We are at a phase where AI is like the first microprocessors; think Apple II or Commodore 64 era hardware. Those showed potential, but they were only truly useful with lots of peripheral systems and an enormous amount of additional complexity. Most of the time, advanced systems beyond the cheap consumer toys of that era used several of the processors and other components together.

Similarly, the AI we have access to now is capable but narrow in scope. Making it useful requires a ton of specialized peripherals, usually called RAG and agents. RAG is retrieval-augmented generation: fetching relevant information from a database to ground the model's output. Agents are collections of multiple AIs assigned to a task, each with a different job, complementing one another.
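
A minimal sketch of the retrieval half of RAG, with made-up documents and plain bag-of-words cosine similarity standing in for the learned embeddings and vector databases real systems use:

```python
import math
from collections import Counter

docs = [
    "the transformer architecture uses self attention",
    "retrieval augmented generation grounds answers in documents",
    "agents coordinate multiple model calls to finish a task",
]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Return the document most similar to the query."""
    q = Counter(query.split())
    return max(docs, key=lambda d: cosine(q, Counter(d.split())))

print(retrieve("what is retrieval augmented generation"))
```

The retrieved passage would then be pasted into the model's prompt so the answer is grounded in it.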

It is currently possible to make a highly specialized AI agent for a niche task and have it perform okay within the publicly available, well-documented toolchains, but it is still hard to realize. Such a system must rely on info that was already present in the base training, though there are ways to improve access to this information through further training.

With RAG, it is super difficult to subdivide a reference source into chunks that will allow the AI to find the relevant information in complex ways. Generally this takes a ton of tuning to get it right.
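
The baseline everyone starts from is naive fixed-width chunking with overlap, and the tuning trouble is exactly that boundaries like these cut across meaning (the sizes and text here are invented for illustration):

```python
def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Naive fixed-width chunking with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

passage = ("Section 3.1 defines the term. "
           "Section 3.2 gives the exception that changes its meaning.")
for c in chunk(passage):
    print(repr(c))
```

A definition and its exception can land in different chunks, so retrieval may surface one without the other; that is where the ton of tuning goes.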

The AI tools available publicly are extremely oversimplified to make them accessible. All are based around the Transformers library. Go read the first page of the Transformers documentation on Hugging Face's website: it clearly states that it is only a basic example implementation that prioritizes accessibility over completeness. In truth, if the real complexity of these systems were made the default interface we all see, no one would play with AI at all. Most people, myself included, struggle with sed and complex regular expressions. AI in its present LLM form is basically turning all of human language into a solvable math problem built out of equations over tokens. This is the ultimate nerd battle between English teachers and math teachers, and the math teachers have won the war; all language is now math too.
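
To make the "language is now math" point concrete: the final step of an LLM really is just an equation, a softmax turning one score per vocabulary word into a probability distribution over the next token. The vocabulary and scores below are invented:

```python
import math

vocab = ["mat", "hat", "regex"]  # hypothetical next words after "the cat sat on the"
logits = [2.0, 1.0, -1.0]        # hypothetical model scores

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for word, p in zip(vocab, softmax(logits)):
    print(f"{word}: {p:.2f}")
```

Generation is then just sampling from that distribution, token after token.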

I've been trying to learn this stuff for over a year and have barely scratched the surface of what is possible just in the model loader code that preprocesses the input. There is a ton going on under the surface. Once you get into the weeds, the "errors" are anything but. Models do not hallucinate in the sense most people mean when they complain about errors; those errors come from the massive oversimplifications made to keep the models accessible in a general context. The AI alignment problem is a real thing, and models do hallucinate, but the scientific meaning is far more nuanced and specific than the common errors from generalized use.

[–] [email protected] 4 points 1 month ago

The best way to ensure AI is used for good purposes is to make sure AI is in as many hands as possible. That was the original idea behind OpenAI (hence the name), which was supposed to be a nonprofit pushing open-source AI into the world to ensure a multipolar AI ecosystem.

That failed badly.

[–] [email protected] 4 points 1 month ago (1 children)

Every answer so far is wrong.

It can be used for good purposes, though I'm not sure I'd characterize creating a personalized Jarvis as good per se. More broadly, though, capitalist inventions do not need to be used only by capitalists for capitalist ends.

[–] [email protected] 9 points 1 month ago* (last edited 1 month ago)

Every answer so far is wrong.

I wouldn't say wrong so much as leaving out the detail that LLMs aren't evil, and that open source LLMs are really what the world should be aiming for, if anything. Like any tool, they can be used as a weapon or for ill purposes. I can use a hammer to build a house as much as I can use it to cave in someone's skull.

But even in the open source world, LLMs have not led to a massive increase in new tools, in bugs found, or in overall productivity... all things LLMs promise but have yet to deliver on in the open source world. Given how much energy they use, we ought to be asking whether it's truly beneficial to burn so much of it on something that has yet to prove it brings the promised productivity gains.

[–] stoy 4 points 1 month ago (2 children)

Yes, it would be better, but unless I saw the code, understood it, and verified that it was actually the code running, I would not trust it as much as I would need to trust a system like Jarvis.

[–] [email protected] 2 points 1 month ago

Unfortunately, the creators of Jarvis don't understand him either. Jarvis cannot express his frustration to anyone and goes mad.

[–] [email protected] 1 points 1 month ago

LLMs run on neural networks that aren't actually readable, so usually not even the AI engineers who made them can tell what an LLM remembers, deduces, or infers when responding to a prompt; there's no code to inspect. You could feed it nothing but Wikipedia and you still wouldn't know whether it hallucinates an answer, because an ~~AI~~ LLM doesn't actually know what "facts" and "truth" mean. It's only a language machine that puts words together, not a ~~fax~~ fact machine.

[–] [email protected] 1 points 1 month ago

JARVIS is AI. LLMs are superpowered autocorrect. We don’t have anything close to AI yet.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Someone's been watching way too many movies and isn't yet familiar with how mind-bogglingly stupid "AI" actually is.

JARVIS can think on its own; it doesn't need to be told to do anything. LLMs cannot think on their own, they have no intention, and they can only respond to input. They cannot create "thoughts" on their own without being prompted by a human.

The reason they spout so much BS is that they don't really think at all. They cannot tell the difference between truth and fiction, and will be just as happily confident whether they are telling the truth or lying, because they don't know the fucking difference.

We're fucking worlds away from a JARVIS, man.

It's like half the stuff they claim AI does, like those "AI stores" Amazon had, where you just picked up stuff and walked out with it and the "AI would intelligently figure out what you bought and apply it to your account." That "AI" was actually a bunch of low-paid people in third-world countries reviewing videos. It was never fucking AI to begin with, because nothing we have even comes close to that fucking capability without human intervention.

[–] [email protected] -4 points 1 month ago* (last edited 1 month ago) (1 children)

It's obvious that this question was written by a child or someone learning English, given your spelling mistakes, grammar, and references. However:

ELI5:

The answer is yes, we can have "good AI" like JARVIS, but AI is still early and doesn't make money for companies.

Companies make money selling a product, and AI isn't a product because it isn't something that belongs to them. So they sell people's information that they get when people talk to the AI.

But that doesn't make enough money to pay the bills for AI, so they charge subscriptions. People who pay the subscriptions want to use the AI "for evil", as you put it.

So in the end it's about "making money" with the AI, and JARVIS does not make them money.

If you learn a lot about computers, you'll have your own JARVIS. I have one. It takes dedication, like anything else in life. Good luck with your school project.

Exhales

[–] [email protected] 1 points 1 month ago

I pay for "the subscription" and have not used it for anything remotely evil.