this post was submitted on 26 Oct 2024
1244 points (99.3% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ

54698 readers
354 users here now

⚓ Dedicated to the discussion of digital piracy, including ethical problems and legal advancements.

Rules • Full Version

1. Posts must be related to the discussion of digital piracy

2. Don't request invites, trade, sell, or self-promote

3. Don't request or link to specific pirated titles, including DMs

4. Don't submit low-quality posts, be entitled, or harass others



Loot, Pillage, & Plunder

📜 c/Piracy Wiki (Community Edition):


top 50 comments
[–] [email protected] 191 points 4 weeks ago (1 children)

Just let anyone scrape it all for any reason. It’s science. Let it be free.

[–] [email protected] 14 points 3 weeks ago (4 children)

The OP tweet seems to be leaning pretty hard on the "AI bad" sentiment. If LLMs make academic knowledge more accessible to people, that's a good thing, for the same reason that what Aaron Swartz was doing was a good thing.

[–] [email protected] 30 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

On the whole, maybe LLMs do make these subjects more accessible in a way that's a net-positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased and trustworthy to people.

The problem is that LLMs 'hallucinate' details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, as it's essentially a complex statistical prediction algorithm. Non-savvy users can easily be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately try and sway others.

ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they're intelligent. They're very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.

load more comments (5 replies)
[–] [email protected] 9 points 3 weeks ago

i agree, my problem is that it won't

[–] [email protected] 5 points 3 weeks ago

Except it won't. And AI will be pay-to-play.

[–] [email protected] 5 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

That would be good if they did that, but that is not the intent of the org, the purpose of the tool, or the expected (or even available) outcome.

It's important to remember this data is not being scraped to make it available or presentable, but to make a machine that echoes human authorship more convincingly.

On an extremely simplified level, it doesn't want to answer "1+1=?" with "2"; it wants to appear like a human confidently answering an arithmetic question, even if the exchange is "1+1=?" "yes, 2+3 does equal 9".

Obviously it can handle simple sums; this is just an illustrative example.
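To make the "plausible, not correct" framing concrete, here's a toy sketch. It is purely illustrative: no real model works from a hand-written probability table like this, and the candidate answers and their weights are invented. The point is only that output is sampled by plausibility, so a fluent-but-wrong answer always has some chance of coming out.

```python
import random

# Toy illustration only: a hand-invented "next answer" distribution for the
# prompt "1+1=?". Real models compute token probabilities from learned weights,
# not from a lookup table like this.
def sample_answer(prompt: str) -> str:
    candidates = {
        "2": 0.90,                      # correct, and very common in training data
        "two": 0.06,                    # also plausible phrasing
        "yes, 2+3 does equal 9": 0.04,  # fluent-sounding nonsense
    }
    answers = list(candidates)
    weights = list(candidates.values())
    return random.choices(answers, weights=weights, k=1)[0]

print(sample_answer("1+1=?"))  # usually "2", but never guaranteed to be correct
```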

load more comments (2 replies)
[–] [email protected] 84 points 3 weeks ago

To paraphrase Nixon:

"When you're a company, it's not illegal."

To paraphrase Trump:

"When you're a company, they just let you do it."

[–] [email protected] 49 points 4 weeks ago
[–] [email protected] 46 points 3 weeks ago
[–] [email protected] 41 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Who writes the laws? There's your answer.

I'm curious why https://www.falconfinance.ae/ cares about this though.

What the hell are they selling? https://www.falconfinance.ae/falcon-securities/

[–] [email protected] 25 points 3 weeks ago

I did some digging. It's a parody finance website that makes it seem like you can invest in falcons and make a blockchain (flockchain) with them. Dig a little further, go to the linked forum, and you'll see it's just a community of people shitposting (mostly).

[–] [email protected] 38 points 4 weeks ago (6 children)

double standards are capitalism's lifeblood

load more comments (6 replies)
[–] [email protected] 37 points 3 weeks ago (3 children)

All is legal in the eyes of capital.

[–] [email protected] 11 points 3 weeks ago

The real golden rule

load more comments (2 replies)
[–] [email protected] 33 points 4 weeks ago (1 children)

and in due time, we'll hack OpenAI and get the sources from the chat module..

I've seen a few glitches before that made ChatGPT just drop entire articles in varying languages.

[–] [email protected] 24 points 4 weeks ago (1 children)

AI models don't actually contain the text they were trained on, except in very rare circumstances where they've been overfit on a particular text. (That's considered a training error, and a lot of work has gone into ways to prevent it; it usually happens when a great many identical copies of the same data appear in the training set.) An AI model is far too small to hold its training data; there's no way that much text can be compressed into it.
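To put the size argument in perspective, here's a rough back-of-the-envelope comparison. Every number below is an illustrative assumption rather than the spec of any particular model; the orders of magnitude are the point.

```python
# Rough back-of-the-envelope comparison; all numbers are illustrative
# assumptions, not figures for any specific model.
params = 70e9              # assume a 70-billion-parameter model
bytes_per_param = 2        # 16-bit weights
model_size_gb = params * bytes_per_param / 1e9

tokens_seen = 10e12        # assume ~10 trillion training tokens
bytes_per_token = 4        # roughly 4 bytes of raw text per token
corpus_size_gb = tokens_seen * bytes_per_token / 1e9

print(f"model weights : ~{model_size_gb:,.0f} GB")                       # ~140 GB
print(f"training text : ~{corpus_size_gb:,.0f} GB")                      # ~40,000 GB
print(f"text-to-weights ratio: ~{corpus_size_gb / model_size_gb:.0f}x")  # ~286x
```

Under these assumptions the model would have to losslessly pack hundreds of bytes of training text into every couple of bytes of weights, which is why verbatim recall is the exception (heavily duplicated passages) rather than the rule.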

[–] [email protected] 8 points 3 weeks ago

thanks! that actually makes a lot of sense.

welp guess I was wrong. so back to .edu scraping!

[–] [email protected] 25 points 4 weeks ago

Anything the rich and powerful do retroactively becomes okay

[–] [email protected] 24 points 3 weeks ago

Remember what you learned in school: Working as a team to solve a test or problem is unacceptable!!! Unless you are a company.

[–] [email protected] 21 points 4 weeks ago (19 children)
[–] [email protected] 32 points 4 weeks ago (2 children)
[–] [email protected] 36 points 4 weeks ago
[–] [email protected] 13 points 3 weeks ago

Never really was

[–] [email protected] 11 points 3 weeks ago (1 children)

A recent report estimates that they won't be profitable until 2029: https://www.businessinsider.com/openai-profit-funding-ai-microsoft-chatgpt-revenue-2024-10

A lot can happen between now and then that would cause their expenses to grow even more, for example if they need to start licensing the content they use for training.

[–] [email protected] 3 points 3 weeks ago (1 children)

On the other hand, some breakthrough in either hardware or software could make AI models significantly cheaper to run and/or train. The current cost in silicon is insane and just screams that there are efficiencies to be found. As always in a gold rush, sell pickaxes.

load more comments (1 replies)
[–] [email protected] 8 points 3 weeks ago

No, and AI almost certainly never will be. However, investor money keeps coming, so it doesn't matter.

load more comments (16 replies)
[–] [email protected] 18 points 3 weeks ago

I'm still blaming MIT for that!

[–] [email protected] 7 points 4 weeks ago

Epstein'd his own life

[–] [email protected] 7 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

Can we be honest about this, please?

Aaron Swartz went into a secure networking closet and left a computer there to covertly pull data from the server over many days without permission from anyone, which is absolutely not the same thing as scraping public data from the internet.

He was a hero who didn't deserve what happened, but it's patently dishonest to ignore that he was effectively breaking and entering, plus installing a data-harvesting device in the server room, which any organization in the world would rightfully identify as hostile behavior. Even your local library would call the cops if you tried to do that.

[–] [email protected] 64 points 3 weeks ago

You left out the part where, instead of telling him to knock it off as soon as they learned about it and disciplining him internally as a student, the school contacted law enforcement and allowed him to continue doing it so they could prosecute him harder and make an example out of him. You'd think if he was as big of a threat as you're implying, they would have stopped what he was doing ASAP. And if you're going to be pedantic about leaving out details, maybe tell the whole thing. Maybe it's not "honest" enough if we haven't posted the full text of a documentary in a comment. That's clearly your call.

[–] [email protected] 26 points 3 weeks ago (4 children)

Can we be honest about this

Saying "can we be honest" isn't a magic spell that transmutes your opinion to fact.

patently dishonest to ignore that he was effectively breaking and entering, plus installing a data-harvesting device in the server room, which any organization in the world would rightfully identify as hostile behavior.

bootlicker

load more comments (4 replies)
[–] [email protected] 6 points 3 weeks ago

Why don't you say what you truly believe instead of copy-pasting the same gaslighting everywhere? We already made you, anyway.

[–] [email protected] 5 points 3 weeks ago

Wow, it's not often we get to see someone post a comment so full of shit while obscuring so many facts just to see if it sticks.

"Can we be honest"? Apparently you cannot.

load more comments