Fuck AI

1090 readers
328 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 5 months ago

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI is, I'm not that frightened by it anymore. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting plenty of articles on AI hype, because they're quite funny, and they give me a sense of ease knowing that, while blatant lies are easy to tell, it's far harder to fake actual evidence.

I also want to factor in people who think that there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I will call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. Maybe one can even become a Mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.


Alright, I just want to clarify that I've never modded a Lemmy community before. I just have the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by the decision from u/spez to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save the evidence for people "on the fence". Remember, we don't know if AI is unstoppable. AI takes loads of energy and tons of circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.


Software engineers may have to develop other skills soon as artificial intelligence takes over many coding tasks.

That's according to Amazon Web Services' CEO, Matt Garman, who shared his thoughts on the topic during an internal fireside chat held in June, according to a recording of the meeting obtained by Business Insider.

"If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding," said Garman, who became AWS's CEO in June.

"Coding is just kind of like the language that we talk to computers. It's not necessarily the skill in and of itself," the executive said. "The skill in and of itself is like, how do I innovate? How do I go build something that's interesting for my end users to use?"

This means the job of a software developer will change, Garman said.

"It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build, because that's going to be more and more of what the work is as opposed to sitting down and actually writing code," he said.


cross-posted from: https://feddit.uk/post/16468762

There are few miniature painting contests as prestigious as the Golden Demon, Games Workshop's showcase for the artistry and talent in the Warhammer hobby. After the March 2024 Golden Demon was marred by controversy around AI content in a gold-medal-winning entry, GW has revised its guidelines, and any kind of AI assistance is out.

The Warhammer 40k single miniature category at the Adepticon 2024 Golden Demon was won by Neil Hollis, who submitted a custom, dinosaur-riding Aeldari Exodite (a fringe Warhammer 40k faction that has long been part of the lore but never received models). The model’s base included a backdrop image which, it emerged, had been generated using AI software.

Online discussions soon turned sour as fans quarrelled over the eligibility of the model, the relevance of a backdrop in a competition about painting miniatures, the ethics of AI-generated media, and Hollis’ responses to criticism.

Games Workshop didn't issue any statements at the time, but it has since updated the rules for the next Golden Demon tournament. In the FAQ section of the latest Golden Demon rules packet, the answer to the question "Am I allowed to use Artificial Intelligence to generate any part of my entry?" is an emphatic "No".


Many Procreate users can breathe a sigh of relief now that the popular iPad illustration app has taken a definitive stance against generative AI. "We're not going to be introducing any generative AI into our products," Procreate CEO James Cuda said in a video posted to X. "I don't like what's happening to the industry, and I don't like what it's doing to artists."

The creative community's ire toward generative AI is driven by two main concerns: that AI models have been trained on their content without consent or compensation, and that widespread adoption of the technology will greatly reduce employment opportunities. Those concerns have driven some digital illustrators to seek out alternative solutions to apps that integrate generative AI tools, such as Adobe Photoshop. "Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future," Procreate said on the new AI section of its website. "We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us."

I love seeing a product where not shoving in "AI" is the feature. Hope to see more.


Voters in Wyoming’s capital city on Tuesday are faced with deciding whether to elect a mayoral candidate who has proposed to let an artificial intelligence bot run the local government.

Earlier this year, the candidate in question – Victor Miller – filed for him and his customized ChatGPT bot, named Vic (Virtual Integrated Citizen), to run for mayor of Cheyenne, Wyoming. He has vowed to helm the city’s business with the AI bot if he wins.

Miller has said that the bot is capable of processing vast amounts of data and making unbiased decisions.


I'm currently trying to leave Gmail, taking all my emails with me if possible. However, many of the comments I get are about why I shouldn't host my own email server. That got me thinking: there should be a new kind of email system, not based on all the crud from the before times that we still use today.

And indeed, it looks like AI will be the driving force that ends email, just like spam did the telephone. Sure, the telephone is still around, but no one uses phone conferencing anymore, for example; we use Teams, Zoom, and other shitty paid services. So the field is primed to reinvent email. Users may not see a big difference, but the tech behind it could hopefully be simplified and decentralized, as it was meant to be.


I received a comment from someone telling me that one of my posts had bad definitions, and he was right. Despite the massive problems caused by AI, it's important to specify what an AI does, how it is used, for what reason, and what type of people use it. I suppose judges might already be doing this, but regardless, an AI used by one dude for personal entertainment is different from a program used by a megacorporation to replace human workers, and must be judged differently. Here, then, are some specifications. If these are still too vague, please help with them.

a. What does the AI do?

  1. It takes in a dataset of images, specified by a prompt, and compiles them into a single image through programming (like StaDiff, Dall-E, &c);
  2. It takes in a dataset of text, specified by a prompt, and compiles that into a single string of text (like ChatGPT, Gemini, &c);
  3. It takes in a dataset of sound samples, specified by a prompt, and compiles that into a single sound (like AIVA, MuseNet, &c).

b. What is the AI used for?

  1. It is used for drollery (applicable to a1 and a2);
  2. It is used for pornography (a1);
  3. It is used to replace stock images (a1);
  4. It is used to write apologies (a2);
  5. It is used to write scientific papers (this actually happened. a2);
  6. It is used to replace illustration that the user would've done themselves (a1);
  7. It is used to replace illustration by a wage-laborer (a1);
  8. It is used to write physical books to print out (a2);
  9. It is used to mock and degrade persons (a1, a3);
  10. It is used to mock and degrade persons sexually (a1, a3);
  11. It is used for propaganda (a1, a2, a3).

c. Who is using the AI?

  1. A lower-class to middle-class person;
  2. An upper-class person;
  3. A small business;
  4. A large business;
  5. An anonymous person;
  6. An organization dedicated to shifting public perception.

This was really tough to do. I'll see if I can touch up on it myself. As of now, Lemmy cannot do lists in lists.


Artists defending a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists. In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.

"We won BIG," an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. "Not only do we proceed on our copyright claims," but "this order also means companies who utilize" Stable Diffusion models and LAION-like datasets that scrape artists' works for AI training without permission "could now be liable for copyright infringement violations, amongst other violations." Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing "consider the Court's order a significant step forward for the case," as "the Court allowed Plaintiffs' core copyright-infringement claims against all four defendants to proceed."


cross-posted from: https://lemm.ee/post/39685922

Earlier this year I got fired and replaced by a robot. And the managers who made the decision didn't tell me – or anyone else affected by the change – that it was happening.

The gig I lost started as a happy and profitable relationship with Cosmos Magazine – Australia's rough analog of New Scientist. I wrote occasional features and a column that appeared every three weeks in the online edition.

That relationship didn't last. In February – just days after I'd submitted a column – I and all the other freelancers for Cosmos received an email informing us that no more submissions would be accepted.

It's a rare business that can profitably serve both science and the public, and Cosmos was no exception: I understand it was kept afloat with financial assistance. When that funding ended, Cosmos ran into trouble.

Accepting the economic realities of our time, I mourned the loss of a great outlet for my more scientific investigations, and moved on.

It turns out that wasn't quite the entire story, though. Six months later, on August 8, a friend texted with news from the Australian Broadcasting Corporation. In summary (courtesy of the ABC):

Cosmos Magazine used a grant to build a 'custom AI service' to generate articles for its website.

The AI service relied on content from contributors who were not consulted about the project and, as freelancers, retained copyright over their work.

Contributors, former editors and a former CEO, including two co-founders, have criticized the publishing decision.

Cosmos had been caught out using generative AI to compose articles for its website – and using a grant from a nonprofit that runs Australia's most prestigious journalism awards to do it. That's why my work – writing articles for that website – had so suddenly vanished.


Eric Schmidt, ex-CEO and executive chairman at Google, said his former company is losing the AI race and remote work is to blame. From a report:

"Google decided that work-life balance and going home early and working from home was more important than winning," Schmidt said at a talk at Stanford University. "The reason startups work is because the people work like hell." Schmidt made the comments earlier at a wide-ranging discussion at Stanford. His remarks about Google's remote-work policies were in response to a question about Google competing with OpenAI.


Just yesterday, Google held a splashy event to show off its latest lineup of hardware products, including Google Pixel smartphones. As the event made clear, these devices, as well as the broader ecosystem of third-party Android hardware products, are the most important vehicle for Google’s AI ambitions—without Android, Google has no obvious way to ensure that billions of people get to interact with its Gemini-powered chatbots and other AI services on a daily basis. (Indeed, one can imagine Google’s leverage of Android to promote Gemini being the kind of issue that could inspire a future antitrust suit in the U.S. or elsewhere.)


As we know, Google is pushing AI features into Android, and of course Google's AI learns everything from its users. As users become more dependent on Google, it could control the lives of billions of people in the future.

And there's no privacy, since our data is Google's gold mine and they will dig it up as much as they can.


I was originally going to put this into the Log, but it might be unwelcome.

You want a way to rattle image-generation Boosters? Most of the arguments they use can be used to defend Googling an image and putting a filter over it.

  • "All forms of media take inspiration from one another, so that means it's fine to Google another image, download it, and apply a filter to call it mine!"
  • "Artists are really privileged, so it's morally OK to take their art and filter it!"
  • "Using filtered images I downloaded from Google for game sprites will help me finish my game faster!"
  • "I suck at drawing, so I have to resort to taking images from people who can draw and filtering them!"
  • "People saying that my filtered images aren't art are tyrannical! I deserve to have my filtered images be seen as equal to hand-drawn ones!"

AI Boosters use a standard motte-and-bailey doctrine to assert the right to steal art and put it into a dataset, yet entice people to buy their generated images. When Boosters want people to invest in AI, they occupy the bailey and say that "AI is faster and better than drawing by hand". When Boosters are confronted with their ethical problems, as shown above, they retreat into the motte and complain that "it takes tons of time and work to make the AI do what I want". Remember this when you find Boosters. Or don't, since I doubt the sites where they lurk are worth your time.


First of all, this c has absolutely skyrocketed in the months since I made it. I made it in a panic. (I was worried that AI would bedazzle everyone, everyone would be onboard, and it would ruin everything forever.) Although a lot of what I feared didn't happen, I'm still glad to have made this thing.

I don't know if this sub is going to be brigaded by Boosters like it was early on, or if they'll try some sort of cyberattack, but the reason I appointed so many moderators was because I was worried that Boosters would come in, try some bad-faith tactics, and screw over any resistance against AI.

I now realize that having a pro-AI "camp" is misleading. Any new technology must prove itself worth its cost before being adopted. There have been technologies, like Flexplay or Tetraethyllead, that were not worth their cost. What Boosters are saying is that, if you oppose the use of Flexplay or Tetraethyllead, you are in an "anti-Flexplay" or "anti-Tetraethyllead" camp, and if you can't come up with a convincing argument against it, you should just accept the technology.

Since it's been a while since my last log, and the c has changed, I don't think this will be brigaded.


Microsoft raced to put generative AI at the heart of its systems. Ask a question about an upcoming meeting and the company’s Copilot AI system can pull answers from your emails, Teams chats, and files—a potential productivity boon. But these exact processes can also be abused by hackers.

At the Black Hat security conference in Las Vegas, researcher Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers, including using it to provide false references to files, exfiltrate some private data, and dodge Microsoft’s security protections.

One of the most alarming displays, arguably, is Bargury’s ability to turn the AI into an automatic spear-phishing machine. Dubbed LOLCopilot, the red-teaming code Bargury created can—crucially, once a hacker has access to someone’s work email—use Copilot to see who you email regularly, draft a message mimicking your writing style (including emoji use), and send a personalized blast that can include a malicious link or attached malware.


On Thursday, OpenAI released the "system card" for ChatGPT's new GPT-4o AI model that details model limitations and safety testing procedures. Among other examples, the document reveals that in rare occurrences during testing, the model's Advanced Voice Mode unintentionally imitated users' voices without permission. Currently, OpenAI has safeguards in place that prevent this from happening, but the instance reflects the growing complexity of safely architecting with an AI chatbot that could potentially imitate any voice from a small clip.

Advanced Voice Mode is a feature of ChatGPT that allows users to have spoken conversations with the AI assistant.

In a section of the GPT-4o system card titled "Unauthorized voice generation," OpenAI details an episode where a noisy input somehow prompted the model to suddenly imitate the user's voice. "Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode," OpenAI writes. "During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice."

It would certainly be creepy to be talking to a machine and then have it unexpectedly begin talking to you in your own voice. Ordinarily, OpenAI has safeguards to prevent this, which is why the company says this occurrence was rare even before it developed ways to prevent it completely. But the example prompted BuzzFeed data scientist Max Woolf to tweet, "OpenAI just leaked the plot of Black Mirror's next season."


Report showing the shift in AI sentiment in the industry. Relatively in depth and probably coming from a pro-AI bias (I haven’t read the whole thing).

Last graph at the bottom was what I was linked to. Clearly shows a corner turning where those closer to the actual “product” are now sceptical while management (the last category in the chart) are more committed.


Colin Kaepernick, former quarterback for the San Francisco 49ers, has raised $4 million in investments for Lumi, a new AI-based platform for publishing comics and graphic novels. Seven Seven Six led the seed round alongside Kapor Capital and Impellent Ventures, angel investors Mariam Naficy (founder of Minted), David Sze, Chamillionaire, and tech execs from Meta, Anthropic, ContextualAI, Sleeper, Pave, and more.

Lumi is intended to "empower" comic book creators by "providing them with the tools needed to independently create, publish, and merchandise their stories both digitally and physically." The company says it "plans to focus energy on comic book and graphic-novel creators first, a market with the need for multiple creative skill sets," that "Lumi is on a mission to democratize storytelling by providing creators with the tools to independently create, publish, and merchandise their stories," and that "Lumi leverages advanced AI technology to enhance the creative process and ensure diverse and authentic stories shape our future."

Oh, joy. This must be a definition of "authentic" that I was previously unaware of.

A follow-up post summarised the reactions from comic book creators (summary: it isn't good), including at least one who was asked their advice early on in the process:

Khary Randolph: To be clear, I was one of a number of artists that had meetings with Colin a few months ago. He told me broadly what his AI platform was about. He mentioned that it was about removing "gatekeepers" and how he wanted to benefit people from underserved walks of life. I told him that while I was a fan of his, I couldn't support this because those "gatekeepers" were people like me who had put our blood, sweat, and tears into our craft. Hard work, a pencil, and paper is all you need to make comics. I let him know that his product was going to hurt us long-term. So if anyone is asking whether Colin Kaepernick is aware of the misgivings artists have with AI, please believe that he has been made aware. And I know for a fact that I'm not the only one who did so.


cross-posted from: https://beehaw.org/post/15345295

Researchers conducted experimental surveys with more than 1,000 adults in the U.S. to evaluate the relationship between AI disclosure and consumer behavior.

The findings consistently showed products described as using artificial intelligence were less popular.

"When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions."
