this post was submitted on 17 Jun 2023
1084 points (98.9% liked)


CEO Steve Huffman says tech giants should not be able to trawl Reddit’s huge store of data for free. But that information came from users, not the company

That β€œcorpus of data” is the content posted by millions of Reddit users over the decades. It is a fascinating and valuable record of what they were thinking and obsessing about. Not the tiniest fraction of it was created by Huffman, his fellow executives or shareholders. It can only be seen as belonging to them because of whatever skewed β€œconsent” agreement its credulous users felt obliged to click on before they could use the service.

Ouch

[–] [email protected] 56 points 1 year ago (4 children)

Wide open for AI scraping and nothing at all are not the only two options. They could easily limit API calls to what would be enough for individual users or mods and have each user generate their own key. Apps could let users input their key. Most users wouldn't bother and would switch to the official app anyway, so it would get them 95% of what they claim to want without being a dick about it.
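A minimal sketch of what that could look like on the app side, assuming a hypothetical local config file (the file name and section names are made up for illustration), with the user pasting in the client ID they generated in their own Reddit account settings:

```python
# Sketch: a third-party client reads the user's own Reddit API client ID
# from a local config file instead of shipping one shared key baked into the app.
# The config path and key names here are hypothetical.
import configparser

def load_user_client_id(path="reddit_client.ini"):
    """Return the client ID the user generated under their own Reddit account."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg.get("reddit", "client_id", fallback=None)

if __name__ == "__main__":
    client_id = load_user_client_id()
    if client_id is None:
        print("No key found; ask the user to create one and paste it in.")
    else:
        print(f"Using user-supplied client ID ending in ...{client_id[-4:]}")
```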

[–] [email protected] 44 points 1 year ago (2 children)

Plus AI companies can just scrape reddit without using the API. It's still a website after all.

[–] [email protected] 13 points 1 year ago (1 children)

They want the timing of how long a user looks at something. They can't scrape that from third party apps.

[–] [email protected] 2 points 1 year ago

Yes you can. PC emulation of apps is common.

[–] [email protected] 5 points 1 year ago (2 children)

I'm not sure if I wasted my time, but I spent a few hours today editing all of my posts on Reddit to be a single comma or period. I didn't comment or post a lot by any means, but just got irritated enough to try to keep from contributing in any way to Spez profiting off of user provided content.

[–] [email protected] 2 points 1 year ago

Can't shreddit do this in bulk? I am considering doing it for my comments, but I think I will just leave them up there. I did have a great time on Reddit until they announced their API changes, so I will leave them with that much. But I did get a backup of everything I wrote using a bulk downloader.

Still, I am considering running shreddit just for kicks.

[–] [email protected] 1 points 1 year ago (1 children)

Yeah, I did the same thing a few days ago. I used the browser add-on called Reddit Enhancement Suite to delete all my posts and comments. Instructions: https://www.alphr.com/how-to-delete-all-reddit-posts/

[–] [email protected] 2 points 1 year ago

So sad. I'm not opposed, but it feels like burning down a forest.

[–] [email protected] 2 points 1 year ago (1 children)

Honestly, I think the sad truth is that Reddit is bleeding money, and every action they take from here on out will be about recruiting whales and driving off everyone else. That's Steve's brilliant business strategy: make Reddit pay-to-win.

[–] [email protected] 1 points 1 year ago

where is the money going?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

That's how they did it. They put a 10-requests-per-minute limit on bots and a higher OAuth limit (100) for individuals. Large user-client apps could have fairly easily converted over to that system, but due to time constraints they didn't. I do think they extorted their third-party devs, sure, but honestly the individual user limit isn't super unreasonable as long as you aren't liking or disliking every post. The search API returns 100 posts per request; it was more the no-NSFW and no-advertising limits they put on it that sucked.

edit: it's actually 10 or 100 per minute, not per hour
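For context on what those numbers mean in practice, a rough client-side throttle like the sketch below would keep a well-behaved script under the 100-per-minute OAuth cap mentioned above (the pacing logic is illustrative, not anything Reddit provides):

```python
# Sketch: simple sliding-window throttle to stay under a per-minute request cap.
import time
from collections import deque

class MinuteThrottle:
    def __init__(self, max_per_minute=100):
        self.max_per_minute = max_per_minute
        self.timestamps = deque()  # monotonic times of recent requests

    def wait(self):
        now = time.monotonic()
        # Forget requests older than the 60-second window.
        while self.timestamps and now - self.timestamps[0] >= 60:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_per_minute:
            # Sleep until the oldest request ages out of the window.
            time.sleep(60 - (now - self.timestamps[0]))
        self.timestamps.append(time.monotonic())

throttle = MinuteThrottle()
# throttle.wait()  # call this before each API request
```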

[–] [email protected] 1 points 1 year ago (1 children)

It's not that simple, because third-party apps ship with a single API key. So when I used Relay for Reddit, I used the same API key as everyone else on that app. You could create an app and then have everyone make their own key, but that is just asking for trouble: definitely too technical for most people, and you would probably need to put in billing info for the scenario where you go above the free-tier call limit.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Yeah, but if you're going to use the OAuth 2 method you don't use the same API key as everyone else. How it works is that you authorize your account with the bot, the company gives you a bearer token, and that token is what's used for rate limits. The bot's client token is not used in that process; the OAuth2 bearer token is.

This is taken from the Reddit API docs: As of July 1, 2023, we will enforce two different rate limits for those eligible for free access usage of our Data API. The limits are:

If you are using OAuth for authentication: 100 queries per minute (QPM) per OAuth client ID
If you are not using OAuth for authentication: 10 QPM

so apparently I undershot it; it's actually 100 requests per minute, not per hour like I originally thought
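For anyone curious what that bearer-token flow looks like in code, here is a rough sketch of the "script"-app password grant against Reddit's documented token endpoint; the credentials and user-agent string are placeholders, and the rate-limit header name is as I recall it from Reddit's docs, so treat the details as an assumption rather than a reference:

```python
# Sketch: Reddit OAuth2 "script" app flow. The client ID/secret identify the app,
# but the bearer token returned for the user's account is what the rate limit
# is counted against. Credentials below are placeholders.
import requests

CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_client_secret"
USER_AGENT = "example-client/0.1 by your_username"

def get_bearer_token(username, password):
    resp = requests.post(
        "https://www.reddit.com/api/v1/access_token",
        auth=(CLIENT_ID, CLIENT_SECRET),
        data={"grant_type": "password", "username": username, "password": password},
        headers={"User-Agent": USER_AGENT},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def whoami(token):
    resp = requests.get(
        "https://oauth.reddit.com/api/v1/me",
        headers={"Authorization": f"bearer {token}", "User-Agent": USER_AGENT},
    )
    # Reddit reports remaining quota in X-Ratelimit-* response headers.
    print("Remaining this window:", resp.headers.get("X-Ratelimit-Remaining"))
    return resp.json()
```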