Fuck AI

1090 readers
255 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 5 months ago

cross-posted from: https://lemmy.world/post/18231492

But with demand soaring and the power from dams finite, Grant County has been forced to look to other sources of energy. The problem is so acute that the county is headed for a daunting choice in the next six years: violate a state green energy law limiting the use of fossil fuels or risk rolling blackouts in homes, factories and hospitals.

...

Artificial intelligence, which requires extraordinary computing power, is accelerating the need to build data centers across the world, and experts say the industry's global energy consumption could double by 2026 from where it stood just two years ago. Data centers are also relied upon every day by businesses and people for internet searches, storing photos in the cloud and streaming videos.


cross-posted from: https://awful.systems/post/2031653

Whilst going through MAIHT3K's backlog, I ended up running across a neat little article theorising on the possible aftermath of the AI bubble, which left me wondering precisely what the main "residue", so to speak, would be.

The TL;DR:

To cut a long story far too short, Alex, the writer, theorised the bubble would leave a "sticky residue" in the aftermath, "coating creative industries with a thick, sooty grime of an industry which grew expansively, without pausing to think about who would be caught in the blast radius" and killing or imperilling a lot of artists' jobs in the process - all whilst producing metric assloads of emissions and pushing humanity closer to the apocalypse.

My Thoughts

Personally, whilst I can see Alex's point, I think the main residue from this bubble is going to be large-scale resentment of the tech industry, for three main reasons:

  1. AI Is Shafting Everyone

It's not just artists who have been pissed off at AI fucking up their jobs, whether freelance or corporate - as Upwork, of all places, has noted in their research, pretty much anyone working right now is getting the shaft:

  • Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect

  • Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way

  • Seventy-one percent are burned out and nearly two-thirds (65%) report struggling with increasing employer demands

  • Women (74%) report feeling more burned out than do men (68%)

  • 1 in 3 employees say they will likely quit their jobs in the next six months because they are burned out or overworked (emphasis mine)

Baldur Bjarnason put it better than me when commenting on these results:

It’s quite unusual for a study like this on a new office tool, roughly two years after that tool—ChatGPT—exploded into people’s workplaces, to return such a resoundingly negative sentiment.

But it fits with the studies on the actual functionality of said tool: the incredibly common and hard to fix errors, the biases, the general low quality of the output, and the often stated expectation from management that it’s a magic fix for the organisational catastrophe that is the mass layoff fad.

Marketing-funded research of the kind that Upwork does usually prevents these kinds of results by finessing the questions. They simply do not directly ask questions that might have answers they don’t like.

That they didn’t this time means they really, really did believe that “AI” is a magic productivity tool and weren’t prepared for even the possibility that it might be harmful.

Speaking of the general low-quality output:

  2. The AI Slop-Nami

The Internet has been flooded with AI-generated garbage. Fucking FLOODED.

Doesn't matter where you go - Google, DeviantArt, Amazon, Facebook, Etsy, Instagram, YouTube, Sports Illustrated, fucking 99% of the Internet is polluted with it.

Unsurprisingly, this utter flood of unfiltered unmitigated endless trash has sent AI's public perception straight down the fucking toilet, to the point of spawning an entire counter-movement against the fucking thing.

Whether it be Glaze and Nightshade directly sabotaging datasets, "Made with Human Intelligence" and "Not By AI" badges proudly proclaiming human-made work, or Cara blowing up by offering a safe harbour from AI, it's clear there are a lot of people out there who want abso-fucking-lutely nothing to do with AI in any sense of the word as a result of this slop-nami.

  3. The Monstrous Assholes In AI

On top of this little slop-nami, those leading the charge of this bubble have been generally godawful human beings. Here's a quick highlight reel:

I'm definitely missing a lot, but I think this sampler gives you a good gist of the kind of soulless ghouls who have been forcing this entire fucking AI bubble upon us all.

Eau de Tech Asshole

There are many things I can't say for sure about the AI bubble - when it will burst, how long and harsh the next AI/tech winter will be, what new tech bubble will pop up in its place (if any), etcetera.

One thing I feel I can say for sure, however, is that the AI bubble and its myriad harms will leave a lasting stigma on the tech industry once it finally bursts.

Already, it seems AI has a pretty hefty stigma around it - as Baldur Bjarnason noted when discussing AI's sentiment disconnect between tech and the public:

To many, “AI” seems to have become a tech asshole signifier: the “tech asshole” is a person who works in tech, only cares about bullshit tech trends, and doesn’t care about the larger consequences of their work or their industry. Or, even worse, aspires to become a person who gets rich from working in a harmful industry.

For example, my sister helps manage a book store as a day job. They hire a lot of teenagers as summer employees and at least those teens use “he’s a big fan of AI” as a red flag. (Obviously a book store is a biased sample. The ones that seek out a book store summer job are generally going to be good kids.)

I don’t think I’ve experienced a sentiment disconnect this massive in tech before, even during the dot-com bubble.

On another front, there's the cultural reevaluation of the Luddites - once brushed off as naught but rejectors of progress, they are now coming to be viewed as folk heroes of a sort, fighting against the misuse of technology to disempower and oppress rather than against technology as a whole.

There's also the recent SAG-AFTRA strike, which kicked off just under a year after the previous one and was started for similar reasons - to protect those working in the games industry from being shafted by AI like so many others.

With how the tech industry was responsible for creating this bubble at every stage - research, development, deployment, the whole nine yards - it is all but guaranteed to shoulder the blame for all that it's unleashed. Whatever happens after this bubble, I expect hefty scrutiny and distrust of the tech industry for a long, long time to come.

To quote @datarama, "the AI industry has made tech synonymous with “monstrous assholes” in a non-trivial chunk of public consciousness" - and that chunk is not going to forget any time soon.


One million Blackwell GPUs would suck down an astonishing 1.875 gigawatts of power. For context, a typical nuclear power plant only produces 1 gigawatt of power.

Fossil fuel-burning plants, whether that's natural gas, coal, or oil, produce even less. There's no way to ramp up nuclear capacity in the time it will take to supply these millions of chips, so much, if not all, of that extra power demand is going to come from carbon-emitting sources.
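
For a quick sanity check of those figures, here's a back-of-the-envelope sketch in Python. The ~1,875 W per-GPU draw is inferred from the quoted 1.875 GW total rather than from any official spec, and real deployments would add cooling and networking overhead on top:

```python
# Back-of-the-envelope check of the quoted total power draw.
# Assumption: ~1,875 W per GPU, inferred from the post's 1.875 GW
# figure for one million GPUs (not an official TDP; actual draw varies
# by Blackwell SKU, and datacenter cooling adds further overhead).

GPU_COUNT = 1_000_000
WATTS_PER_GPU = 1_875
NUCLEAR_PLANT_WATTS = 1e9  # a typical ~1 GW nuclear plant, per the post

total_watts = GPU_COUNT * WATTS_PER_GPU
print(f"Total draw: {total_watts / 1e9:.3f} GW")  # -> 1.875 GW
print(f"Nuclear plants needed: {total_watts / NUCLEAR_PLANT_WATTS:.2f}")  # -> ~1.88
```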


Bosses expect artificial intelligence software to improve productivity, but workers say the tool does the opposite, according to a survey by find-a-workplace research org the Upwork Research Institute, a limb of talent-finding platform Upwork.

The survey elicited responses from 2,500 workers across the US, UK, Australia, and Canada. Half of the respondents were C-suite execs, a quarter worked full time, and the remainder were freelancers. Respondents represented different age groups and genders, but all were required to have completed high school and to use a computer for their work at least “sometimes.”

Findings include that C-suite executives are asking more of workers – 81 percent of 1,250 executive respondents acknowledge as much, according to the survey.


Archive

Video games—and the people who make them—are in trouble. An estimated 10,500 people in the industry were laid off in 2023 alone. This year, layoffs in the nearly $200 billion sector have only gotten worse, with studios axing what is believed to be 11,000 more, and counting. Microsoft, home of the Xbox and parent company to several studios, including Activision Blizzard, shuttered Tango Gameworks and Alpha Dog Games in May. All the while, generative AI systems built by OpenAI and its competitors have been seeping into nearly every industry, dismantling whole careers along the way.

But gaming might be the biggest industry AI stands poised to conquer. Its economic might has long since eclipsed Hollywood's, while its workforce remains mostly nonunion. A recent survey from the organizers of the Game Developers Conference found that 49 percent of the survey’s more than 3,000 respondents said their workplace used AI, and four out of five said they had ethical concerns about its use.


Apparently their regular algorithm wasn't getting people as addicted as they want.


cross-posted from: https://lemmy.ml/post/18310802

Fun to see him (kmac2021) making shit again


Companies are going all-in on artificial intelligence right now, investing millions or even billions into the area while slapping the AI initialism on their products, even when doing so seems strange and pointless.

Heavy investment and increasingly powerful hardware tend to mean more expensive products. To discover if people would be willing to pay extra for hardware with AI capabilities, the question was asked on the TechPowerUp forums.

The results show that over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn't know, while just under 2,000 voters said yes.


Jaws 3 might be the worst 4K ever released as of this very moment. It's that bad and that horribly butchered with AI and awful DNR/color timing. The 2D version always looked a bit off due to the way the film was shot specifically for 3D, but with this abysmal 4K transfer its limitations and issues are blown up and become glaringly obvious. Then on top of that you have AI interpretation the likes of which not even god has seen.

The film legitimately looks like it was created with Midjourney on more than one occasion. Or entire frames look like the washed-out color tone of SNL bumpers. Remember those from the '70s and '80s that used to show the host for the night? Entire sequences of the film look like that. No film should look like that in 4K. People look like paper cutouts in more than one frame. This is abhorrent.

Vindicator89 on Blu-ray.com

I said this in another comment thread but will post here too:

The bigger question is why is the upscale suddenly so much worse than it was before?

Plenty of films finished at 2K had 4K UHD discs put out that were nothing more than upscales with HDR grades applied, but they were never this bad. It's like AI upscales became a thing and the studios tossed out whatever previous methods they used, which seemingly worked JUST FINE, in favour of new technology that has GLARING flaws such as this.

/u/adamschoales on Reddit


Some of the world's wealthiest companies, including Apple and Nvidia, are among countless parties who allegedly trained their AI using scraped YouTube videos as training data. The YouTube transcripts were reportedly accumulated through means that violate YouTube's Terms of Service and have some creators seeing red. The news was first discovered in a joint investigation by Proof News and Wired.

While major AI companies and producers often keep their AI training data secret, heavyweights like Apple, Nvidia, and Salesforce have revealed their use of "The Pile", an 800GB training dataset created by EleutherAI, and the YouTube Subtitles dataset within it. The YouTube Subtitles training data is made up of 173,536 YouTube plaintext transcripts scraped from the site, including 12,000+ videos which have been removed since the dataset's creation in 2020.

Affected parties whose work was purportedly scraped for the training data include education channels like Crash Course (1,862 videos taken for training) and Philosophy Tube (146 videos taken), YouTube megastars like MrBeast (two videos) and Pewdiepie (337 videos), and TechTubers like Marques Brownlee (seven videos) and Linus Tech Tips (90 videos). Proof News created a tool you can use to survey the entirety of the YouTube videos allegedly used without consent.


Manual laborers should unionize and start demanding 80K per year with benefits

Archive


Not the Goldman Sachs paper, the analysis of it. It's really worth the read.


As if beauty pageants with humans weren't awful enough. Let's celebrate simulated women with beauty standards too unrealistic for any real women to live up to!


As part of the tech industry's wider push for AI, whether we want it or not, it seems that Google's Gemini AI service is now reading private Drive documents without express user permission, per a report from Kevin Bankster on Twitter. While Bankster goes on to discuss reasons why this may be glitched for users like him in particular, the utter lack of control over his sensitive, private information is unacceptable for a company of Google's stature, and does not bode well for privacy amid AI's often-forced rollout.


OpenAI is partnering with Los Alamos National Laboratory to study how artificial intelligence can be used to fight against biological threats that could be created by non-experts using AI tools, according to announcements Wednesday by both organizations. The Los Alamos lab, first established in New Mexico during World War II to develop the atomic bomb, called the effort a “first of its kind” study on AI biosecurity and the ways that AI can be used in a lab setting.

The difference between the two statements released Wednesday by OpenAI and the Los Alamos lab is pretty striking. OpenAI’s statement tries to paint the partnership as simply a study on how AI “can be used safely by scientists in laboratory settings to advance bioscientific research.” And yet the Los Alamos lab puts much more emphasis on the fact that previous research “found that ChatGPT-4 provided a mild uplift in providing information that could lead to the creation of biological threats.”

Much of the public discussion around threats posed by AI has centered on the creation of a self-aware entity that could conceivably develop a mind of its own and harm humanity in some way. Some worry that achieving AGI (artificial general intelligence, where the AI can perform advanced reasoning and logic rather than acting as a fancy auto-complete word generator) may lead to a Skynet-style situation. And while many AI boosters like Elon Musk and OpenAI CEO Sam Altman have leaned into this characterization, it appears the more urgent threat to address is making sure people don’t use tools like ChatGPT to create bioweapons.

“AI-enabled biological threats could pose a significant risk, but existing work has not assessed how multimodal, frontier models could lower the barrier of entry for non-experts to create a biological threat,” Los Alamos lab said in a statement published on its website.

The different positioning of messages from the two organizations likely comes down to the fact that OpenAI could be uncomfortable with acknowledging the national security implications of highlighting that its product could be used by terrorists. To put an even finer point on it, the Los Alamos statement uses the terms “threat” or “threats” five times, while the OpenAI statement uses it just once.


I do not recommend reading this article on a full stomach.


Generative AI is the nuclear bomb of the information age
