this post was submitted on 11 Oct 2023
505 points (92.6% liked)

Technology

[–] [email protected] 102 points 1 year ago (2 children)

Sounds like the internet in the 90s.

[–] [email protected] 67 points 1 year ago* (last edited 1 year ago) (4 children)

It also reminds me of crypto. Lots of people made money from it, but the technology persists more because of its perceived potential than because of its actual usefulness today.

There are a lot of challenges with AI (or, more accurately, LLMs) that may or may not be inherent to the technology. And if issues cannot be solved, we may end up with a flawed technology that, we are told, is just about to finally mature enough for mainstream use. Just like crypto.

To be fair, though, AI already has some very clear use cases, while crypto is still mostly looking for a problem to fix.

[–] [email protected] 19 points 1 year ago

Let's combine AI and crypto, and migrate it to the cloud. Imagine the PowerPoints middle managers will make about that!

[–] [email protected] 17 points 1 year ago* (last edited 1 year ago) (2 children)

No, this isn't crypto. Crypto and NFTs were offering worse solutions to problems that already had solutions, and hidden in the messaging was that rich people wanted to get poor people to freely gamble away their money in an unregulated market.

AI has real, tangible benefits that are already being realized by people who aren't part of the emotion-driven ragebait engine. Stock images are going to become extinct within several years. People can make at least a baseline image of what they want, regardless of their artistic ability. Musicians are starting to use AI tools. ChatGPT makes it easy to generate low-effort but time-consuming letters and responses, like item descriptions, HR replies, or other common drafts. Code AI engines let programmers produce reviewable solutions in real time, or at least something to generate and then tweak. None of this is perfect, but it's good enough for 80% of the work, and it can be modified after the initial pass.

Things like chess AI have existed for decades, and LLMs are just extensions of existing generative AI technology. I dare you to tell Chess.com that "AI is a money pit that isn't paying off"; they would laugh their fucking asses off, because they are actively pouring even more money and resources into Torch.

The author here is a fucking idiot. And he didn't even bother to change the HTML title ("Microsoft's Github Copilot is Losing Huge Amounts of Money") from its original focus on just Github Copilot. Clickbait bullshit.

[–] [email protected] 21 points 1 year ago (2 children)

I totally agree. However, I do feel like the market around AI is inflated, like NFTs and crypto were. AI isn't a bust; there will be steady progress at universities, research labs, and companies. But there is too much hype right now, with companies slapping AI on random products and overpromising the current state of the technology.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

slapping [Technology X] on random products and overpromising the current state of the technology

A tale as old as time...

Still waiting on those "self-driving" cars.

[–] [email protected] 3 points 1 year ago

Self-driving will be available next year.*

*since 2014

[–] [email protected] 3 points 1 year ago

I love how companies suddenly started advertising things as AI that would have been called a chatbot a year ago. I saw a news article headline the other day saying that judges were going to significantly improve the time they take to render judgments by using AI.

Reading the content of the article, it went on to explain that they would use it to draft the documents. It's like they've never heard of templates.

[–] [email protected] 1 points 1 year ago

Here is an alternative Piped link(s):

starting to use AI tools

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 15 points 1 year ago (1 children)

I'm still trying to transfer $100 from Kazakhstan to myself here. By far the lowest-fee option is actually crypto, since the biggest cost either way is the currency conversion. If you have to convert anyway, you might as well only pay 0.30% on each end.
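(On $100, 0.30% works out to about $0.30 per conversion, so roughly $0.60 for both ends.)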

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

Look into DJED on Cardano. It’s WAY cheaper than ETH (but perhaps not cheaper than some others). A friend of mine sent $10,000 to Thailand for less than a dollar in transaction fees. To 1bluepixel: Sounds like a use-case to me!

[–] [email protected] 5 points 1 year ago

If only I had some money to transfer somewhere :(

[–] [email protected] 0 points 1 year ago (1 children)

Layer-2 rollups for Ethereum are also way cheaper than the base layer; this page lists the major ones.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

Hmm.

You still have to deal with ETH fees just to get the funds into the rollup. I admit that ETH was revolutionary when it was invented, but the insane fee market makes it a non-starter, and the accounts model is just a preposterously bad (and actually irreparably broken) design decision for a decentralized network: it makes Ethereum nearly impossible to parallelize, since the main chain is required for state, and the contracts that run on it are non-deterministic.

[–] [email protected] -1 points 1 year ago (1 children)

There are exchanges where you can buy Ether and other tokens directly on a layer 2; once it's on layer 2, there are no further fees to get it there.

Layer-2 rollups are a way to parallelize things: activity on one layer 2 can proceed independently of activity on a different layer 2.

I have no idea why you think contracts on Ethereum are non-deterministic; the blockchain wouldn't work at all if they were.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

I think that because it's true. Smart contracts on Ethereum can fail and still charge the wallet. Because of the open-ended nature of Ethereum's design, a wallet can be empty by the time the contract finally executes, causing a failure. This doesn't happen in Bitcoin and other UTXO chains like Ergo and Cardano, where all transactions must have both inputs and outputs FULLY accounted for in order to execute. UTXO offers determinism, while the accounts model can fail due to an empty wallet. Determinism makes concurrency harder, for sure… but at least your entire chain isn't one gigantic unsafe state machine. Ethereum literally is, by definition, non-deterministic.
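
To make the UTXO point concrete, here's a toy sketch (Python, not real chain code; the types and names are made up for illustration). A UTXO transaction names the exact unspent outputs it consumes, so it can be validated from the transaction plus the current UTXO set alone, whereas an account-model call depends on the balance at execution time:

```python
# Toy sketch only (not real chain code; names are made up for illustration).
from dataclasses import dataclass

@dataclass(frozen=True)
class UTXO:
    tx_id: str
    index: int
    amount: int  # smallest currency unit

def utxo_tx_is_valid(inputs: list[UTXO], output_amounts: list[int],
                     utxo_set: set[UTXO]) -> bool:
    # Every input must be an existing, unspent output...
    if any(i not in utxo_set for i in inputs):
        return False
    # ...and inputs must fully cover outputs (any surplus is the fee).
    return sum(i.amount for i in inputs) >= sum(output_amounts)

# In an account model, by contrast, whether a call succeeds depends on the
# account's balance at execution time, so a contract call can still fail
# (and still cost gas) if the balance changed after the transaction was sent.
```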

[–] [email protected] 10 points 1 year ago (1 children)

Or computers decades before that.

Many of these advances are incredibly recent.

And many of the things we use in our day-to-day lives are AI-powered without people even realising it.

[–] [email protected] 3 points 1 year ago (3 children)
[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (1 children)

https://fusionchat.ai/news/10-everyday-ai-applications-you-didnt-realize-you-use

Some good examples here.

Most social media uses it. Video and music streaming services. SatNav. Speech recognition. OCR. Grammar checks. Translations. Banks. Hospitals. Large chunks of internet infrastructure.

The list goes on.

[–] [email protected] 2 points 1 year ago

Got it. Thanks.

[–] [email protected] 7 points 1 year ago (1 children)

Automated mail sorting has been using AI to read postcodes from envelopes for decades; only back then - pre-hype - it was just called neural networks.

That tech is almost 3 decades old.

[–] [email protected] 2 points 1 year ago (2 children)

But was it using neural networks or was it using OCR algorithms?

[–] [email protected] 2 points 1 year ago

I love people who talk about AI but don't know the difference between an LLM and a bunch of if statements.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

At the time I learned this at Uni (back in the early 90s) it was already NNs, not algorithms.

(This was maybe a decade before OCR became widespread)

In fact, a coursework project I did there was recognition of handwritten numbers with a neural network. The thing was amazingly good: our implementation actually had a bug, and it still managed to be almost 90% correct on a test data set, so it somehow mostly worked its way around the bug. And it was a small NN with no need for massive training sets (which is the main difference between Large Language Models and more run-of-the-mill neural networks), at a time when algorithmic number and character recognition were considered a very difficult problem.
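
For the curious, a rough modern equivalent of that kind of tiny network can be sketched in a few lines using scikit-learn's bundled 8x8 digit set (just an illustration, not the original coursework):

```python
# Minimal sketch: a small feedforward network on 8x8 handwritten-digit images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 1797 tiny 8x8 greyscale digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One small hidden layer is enough to score well on this dataset.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```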

Back then, neural networks (and other stuff like genetic algorithms) were all pretty new, and using them in automated mail sorting was recent and not yet widespread.

Nowadays you have them doing stuff like face recognition, built into phones for unlocking...

[–] [email protected] 2 points 1 year ago

Very interesting. Thanks for sharing!

[–] [email protected] 6 points 1 year ago (1 children)

The key fact here is that it's not "AI" as conventionally thought of in all the sci-fi media we've consumed over our lifetimes, but AI in the form of a product that today's tech companies are marketing. It's really just a complicated algorithm based on an expansive dataset, rather than something that "thinks". It can't come up with new solutions, only re-use previous ones; it wouldn't be able to take a solution for one thing and apply it to a different problem. It still needs people to steer it in the right direction and to verify that its results are even accurate. However, AI is now probably better than people at identifying previously seen problems and remembering the solution.

So, while you could say that lots of things are "powered by AI", you can just as easily say that we don't have any real form of AI just yet.

[–] [email protected] 2 points 1 year ago (1 children)

Oh, but those pattern recognition examples are about machine learning, right? Which I guess is a form of AI.

[–] [email protected] 1 points 1 year ago (1 children)

Perhaps, but at best it's still a very basic form of AI, and maybe shouldn't even be called AI. Before things like ChatGPT, the term "AI" meant a full-blown intelligence that could pass a Turing test, and a Turing test is meant to prove actual artificial thought akin to the level of human thought - something beyond following mere pre-programmed instructions. Machine learning doesn't really learn anything; it's just an algorithm that repeatedly measures and then iterates toward an ideal set of values for the desired variables. It's very clever, but it doesn't really think.
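
As a rough illustration of that "measure and iterate" loop (a toy sketch with made-up numbers, not any particular library's method):

```python
# Gradient descent fitting a single parameter w so that w * x approximates y.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs

w = 0.0    # the "desired variable" being tuned
lr = 0.01  # step size
for _ in range(1000):
    # Measure: how wrong is the current w? (mean squared error gradient)
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    # Iterate: nudge w to reduce the error.
    w -= lr * grad

print(f"learned w = {w:.2f}")  # ends up near 2, roughly the slope of the data
```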

[–] [email protected] 1 points 1 year ago (1 children)

I have to disagree with you on the machine learning definition. Sure, the machine doesn't think in those circumstances, but it's definitely learning, if we go by your own description of what it does.

Learning is a broad concept, sure. But say a kid is learning to draw apples, and later manages to draw apples without help; we could say that the kid achieved "that ideal set of values."

[–] [email protected] 1 points 1 year ago (1 children)

Machine learning is a simpler type of AI than an LLM like ChatGPT or an AI image generator. LLMs incorporate machine learning.

In terms of learning to draw something: after a child learns to draw an apple, they will reliably draw an apple every time. If an AI "learns" to draw an apple, it tends to come up with something subtly unrealistic, e.g. the apple might have multiple stalks. It fits the parameters it has learned about apples, parameters which were prescribed by its programming, but it hasn't truly understood what an apple is. Furthermore, if you applied the parameters it learned about apples to something else, it might fail to understand it altogether.

A human being can think and interconnect their thoughts much more intricately; we go beyond our basic programming and often apply knowledge learned in one area to something completely different. Our understanding of things is much more expansive than AI's. AI currently has the basic building blocks of understanding, in that it can record and recall knowledge, but it lacks the full web of interconnections between different pieces and types of knowledge that human beings develop.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Thanks. I understood all that. But my point is that machine learning is still learning, just like machine walking is still walking. Can a human being be much better at walking than a machine? Sure. But that doesn't mean that the machine isn't walking.

Regardless, I appreciate your comment. Interesting discussion.