this post was submitted on 01 Aug 2023
188 points (83.3% liked)

An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

[–] [email protected] 12 points 1 year ago (15 children)

These biases have always existed in the training data used for ML models (society and all that influences the data we collect, and the biases latent within it), but it's definitely interesting that generative models now make these biases much more visible (figuratively, and with image models literally) to the lay person.

[–] [email protected] 1 points 1 year ago (14 children)

But they know the AIs have these biases, at least now. Shouldn't they be able to code them out, or at least lessen them? Or would that just create more problems?

Sorry, I'm no programmer, so I have no idea if that's even possible. It just sounds possible in my head.

[–] [email protected] 11 points 1 year ago (6 children)

You don't really program them; they learn from the data they're given. Say you want a model that generates faces, and you train it on 500 faces, 470 of which are of black women. When you ask it to generate a face, it'll most likely generate a face of a black woman.

The models are essentially maps of probability: you give one a prompt, and it produces the most likely output given that prompt.
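Here's a toy sketch of that in Python, using the made-up 500-face dataset from above (the numbers and labels are purely illustrative):

import random

# Hypothetical training set from the example above:
# 470 of the 500 faces are of black women.
training_counts = {"black woman": 470, "other": 30}

def generate_face():
    # A generative model effectively samples from the
    # distribution it saw during training.
    faces = list(training_counts)
    weights = list(training_counts.values())
    return random.choices(faces, weights=weights)[0]

# ~94% of generations come back as "black woman".
print(generate_face())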

If she had used a model trained to generate pornography, it would've likely given her something more pornographic, if not outright explicit.


You've also touched on a core problem with large language models: they're not programmed, but rather prompted.

When it comes to Bing Chat, ChatGPT, and others, additional AI agents sit alongside the main model to help filter and flag problematic content, both in what the user provides and in what the LLM itself generates. When a prompt like this one gets marked as problematic, the bot gives you a canned response: "Hi, I'm Bing. Sorry, can't help you with this. Have a nice day. :)"

These filters are very crude, but they're necessary because of problems inherent in the source data the model was trained on. If you crawl the internet for training data, you're bound to find all sorts of good information: Wikipedia articles, Q&A forums, recipe blogs, personal blogs, fanfiction sites, etc. Enough of this data will give you a well-rounded model capable of generating believable content across a wide range of topics.

However, you can't feasibly filter the entire internet. Among all of this you'll find hate speech, blogs run by neo-Nazis and conspiracy theorists, blogs where people talk about their depression, suicide notes, misogyny, racism, and all sorts of depressing, disgusting, evil, and dark aspects of humanity.

Thus there's no line of code you can change to fix racism:

if (bot.response == racist) 
{
    dont();
}

Instead, there are simpler measures that read the user/agent interaction and filter it for possible bad words, or more likely pass it through another AI model that gauges the probability of the interaction being negative:

if (interaction.weightedResult < negative)
{
    return "I'm sorry, but I can't help you with this at the moment. I'm still learning though. Try asking me something else instead! 😊";
}
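Here's a slightly more fleshed-out sketch of that second approach in Python. score_toxicity(), the word list, and the threshold are all stand-ins I made up; real products use trained moderation models, not word lists, but the control flow is the same:

# Toy moderation pass. score_toxicity() is a stand-in for
# whatever moderation model the product actually runs.
BAD_WORDS = {"badword1", "badword2"}  # placeholder list
TOXICITY_THRESHOLD = 0.5

def score_toxicity(text: str) -> float:
    words = text.lower().split()
    hits = sum(word in BAD_WORDS for word in words)
    return hits / max(len(words), 1)

def respond(user_prompt: str, generate) -> str:
    canned = "I'm sorry, but I can't help you with this at the moment. 😊"
    # Screen what the user sends in...
    if score_toxicity(user_prompt) > TOXICITY_THRESHOLD:
        return canned
    draft = generate(user_prompt)
    # ...and screen what the LLM itself generates.
    if score_toxicity(draft) > TOXICITY_THRESHOLD:
        return canned
    return draft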

As an aside, if she'd prompted "professional Asian woman" it likely would've done a better job. Depending on how much "creative license" she gives the model, though, it still won't give her her own face back. I get the idea of what she's trying to do, and there are certainly ways of achieving it, but she likely wasn't using a product/model weighted to do the specific thing she was asking for.
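For instance, with an open img2img pipeline like Hugging Face's diffusers (just an example; I have no idea what Playground AI runs under the hood), the "creative license" knob is the strength parameter, and keeping it low stays close to the input photo:

# Sketch using the diffusers library; model choice, file names,
# and parameter values here are illustrative.
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
)

selfie = Image.open("selfie.jpg").convert("RGB")  # hypothetical input photo

result = pipe(
    prompt="professional LinkedIn headshot photo of an Asian woman",
    image=selfie,
    strength=0.3,  # low "creative license": stay close to the original face
).images[0]
result.save("headshot.png")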

[–] [email protected] 4 points 1 year ago (1 children)

The Hands/digits...the horror....

[–] [email protected] 1 points 1 year ago

It's clearly biased against fingered folks!
