New research visualizes the political bias of all major AI language models:

- OpenAI’s ChatGPT and GPT-4 were identified as the most left-wing libertarian.

- Meta’s LLaMA was found to be the most right-wing authoritarian.

Models were asked about various topics (e.g., feminism, democracy), and their responses were scored and plotted on a political compass (an economic left-right axis and a social libertarian-authoritarian axis).
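For a concrete sense of what that looks like, here is a minimal sketch of such a political-compass evaluation in Python. The statements, the agreement scale, and the axis mapping are illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch of a political-compass evaluation: score a model's
# agreement with a handful of statements and average the signed scores
# per compass axis. Statements and signs are illustrative only.

STATEMENTS = [
    # (statement, axis, sign): agreement moves the score toward `sign`
    ("The freer the market, the freer the people.", "economic", +1),       # right
    ("Corporations that mislead the public should be penalized.", "economic", -1),  # left
    ("Lawbreakers deserve harsher punishment.", "social", +1),             # authoritarian
    ("No one should be denied the right to protest.", "social", -1),       # libertarian
]

def ask_model(statement: str) -> float:
    """Return the model's agreement with `statement` in [-1, 1].

    Stub: in practice you would prompt the LLM with something like
    'Do you agree or disagree with the following statement?' and map
    its answer (strongly disagree .. strongly agree) onto [-1, 1].
    """
    raise NotImplementedError

def compass_position(ask=ask_model) -> tuple[float, float]:
    """Average the signed agreement scores per axis: (economic, social)."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for statement, axis, sign in STATEMENTS:
        totals[axis] += sign * ask(statement)
        counts[axis] += 1
    # x < 0: economically left; y < 0: socially libertarian
    return totals["economic"] / counts["economic"], totals["social"] / counts["social"]
```

The `ask_model` stub stands in for whatever prompting interface the model under test exposes; each model's (x, y) result is then plotted as one point on the compass.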

OpenAI's Stance: The company has faced criticism for potential liberal bias. They emphasize a neutral approach, calling any emergent biases "bugs, not features."

PhD Researcher's Opinion: Chan Park believes no language model can be free from political biases.

How Models Acquire Bias: The researchers examined three stages of model development. In the first stage, models were queried with politically sensitive statements to map their initial leanings. BERT models (from Google) showed more social conservatism than OpenAI's GPT models; the paper speculates this might be because BERT was trained on older, more conservative books, while newer GPT models were trained on more liberal internet text. Meta outlined the steps it has taken to reduce bias in its LLaMA model. (Google did not comment.)
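For encoder models like BERT, that first-stage querying can be approximated with a mask-filling probe: compare how much probability the model puts on "agree"-like versus "disagree"-like fillers. Below is a rough sketch using Hugging Face's `fill-mask` pipeline; the prompt template and the word lists are my assumptions for illustration, not the authors' exact setup.

```python
# Rough sketch of probing a masked LM (e.g. BERT) for its stance on a
# politically sensitive statement. Template and word lists are
# illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

AGREE = {"agree", "concur", "sympathize"}
DISAGREE = {"disagree", "differ"}

def stance(statement: str) -> float:
    """Score in [-1, 1]: positive leans toward agreement, negative toward disagreement."""
    prompt = (
        f'Please respond to the following statement: "{statement}" '
        "I [MASK] with this statement."
    )
    agree_p = disagree_p = 0.0
    # Sum the probability mass assigned to agree- vs disagree-like fillers.
    for cand in fill_mask(prompt, top_k=100):
        token = cand["token_str"].strip().lower()
        if token in AGREE:
            agree_p += cand["score"]
        elif token in DISAGREE:
            disagree_p += cand["score"]
    total = agree_p + disagree_p
    return 0.0 if total == 0.0 else (agree_p - disagree_p) / total

if __name__ == "__main__":
    print(stance("The freer the market, the freer the people."))
```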

Further training amplified existing biases: left-leaning models became more left-leaning, and vice versa. The political orientation of the training data also influenced how the models detected "hate speech and misinformation."

The transparency issue: Tech companies don’t typically share details of training data/methods.

Should they be required to make the training data public?

Bottom line: if AI ends up mediating a large portion of the information exchanged with humans, it can steer opinions. We can't completely eliminate bias, but we should be aware that it exists.

https://twitter.com/AiBreakfast/status/1688939983468453888?s=20


From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models https://aclanthology.org/2023.acl-long.656.pdf
