1
59
submitted 2 hours ago by [email protected] to c/[email protected]
2
7
submitted 3 hours ago by [email protected] to c/[email protected]

cross-posted from: https://feddit.org/post/390314

"We encourage you to consider, beyond the state subsidies, other reasons leading Chinese EVs to be sold at prices below market in the EU," Philippe Dam, EU Advocacy Director at Human Rights Watch (HRW), writes in an open letter to the European Commission.

Referring to the EU's ongoing consultations with Beijing regarding tariffs on Electric Vehicles (EVs), HRW asks the Commission to "urge the Chinese government to end crimes against humanity against Uyghurs and Turkic Muslims in Xinjiang and elsewhere and implement the recommendations of the August 2022 OHCHR report on Xinjiang".

HRW demands three points:

  • Release everyone who remains arbitrarily detained or imprisoned

  • Investigate and appropriately prosecute government officials implicated in serious violations of human rights and crimes against humanity

  • Grant free and unfettered access to Xinjiang to independent monitors, as requested by the UN High Commissioner for Human Rights and several UN Special Procedures

The rights group also calls on the Commission to ensure coherence with the pending Forced Labor Regulation, which enables the European Commission and EU member states to take steps to block entry into the EU market for products made with forced labor.

3
47
submitted 14 hours ago by [email protected] to c/[email protected]

cross-posted from: https://feddit.org/post/375357

cross-posted from: https://feddit.org/post/373442

Archived link

Here is the report (pdf).

Serbian authorities have adopted invasive surveillance practices and facial recognition technology to monitor political opponents, civic activists and critical journalists, says a BIRN report entitled ‘Digital Surveillance in Serbia – A Threat to Human Rights?’, published on Friday.

Equipment from Chinese manufacturers, such as Dahua and Hikvision, predominates.

Serbia’s aspirations for EU membership mean that it faces pressure to adhere to EU standards on data protection and privacy as well as cybersecurity. However, Serbia has simultaneously strengthened ties with authoritarian countries, especially China and Russia.

4
43
submitted 1 day ago by [email protected] to c/[email protected]

cross-posted from: https://feddit.org/post/352534

- China implemented new regulations on Monday under its toughened counterespionage law, which enables authorities to inspect smartphones, personal computers and other electronic devices, raising fears among expatriates and foreign businesspeople about possible arbitrary enforcement.

- A Japanese travel agency official said the new regulations could further prevent tourists from coming to China. Some Japanese companies have told their employees not to bring smartphones from Japan when they make business trips to the neighboring country, according to officials from the companies.

The new rules, which came into effect one year after the revised anti-espionage law expanded the definition of espionage activities, empower Chinese national security authorities to inspect data, including emails, pictures, and videos stored on electronic devices.

Such inspections can be conducted without warrants in emergencies. If officers are unable to examine electronic devices on-site, they are authorized to have those items brought to designated places, according to the regulations.

It remains unclear what qualifies as emergencies under the new rules. Foreign individuals and businesses are now expected to face increased surveillance by Chinese authorities as a result of these regulations.

A 33-year-old British teacher told Kyodo News at a Beijing airport Monday that she refrains from using smartphones for communications. A Japanese man in his 40s who visited the Chinese capital for a business trip said he will "try to avoid attracting attention" from security authorities in the country.

In June, China's State Security Ministry said the new regulations will target "individuals and organizations related to spy groups," and ordinary passengers will not have their smartphones inspected at airports. However, a diplomatic source in Beijing noted that authorities' explanations have not sufficiently clarified what qualifies as spying activities.

Last week, Taiwan's Mainland Affairs Council upgraded its travel warning for mainland China, advising against unnecessary trips due to Beijing's recent tightening of regulations aimed at safeguarding national security.

In May, China implemented a revised law on safeguarding state secrets, which includes measures to enhance the management of secrets at military facilities.

5
59
submitted 1 day ago by [email protected] to c/[email protected]

cross-posted from: https://feddit.org/post/341702

Once upon a time, newly minted graduates dreamt of creating online social media that would bring people closer together.

That dream is now all but a distant memory. In 2024, there aren’t many ills social networks don’t stand accused of: the platforms are singled out for spreading “fake news”, for serving as Russian and Chinese vehicles to destabilise democracies, and for capturing our attention and selling it to shadowy merchants through micro-targeting. The popular success of documentaries and essays on the allegedly huge social costs of social media illustrates this.

  • Studies suggest that if individuals regularly clash over political issues online, this is partly due to psychological and socioeconomic factors independent of digital platforms.

  • In economically unequal and less democratic countries, individuals are most often victims of online hostility on social media (e.g., insults, threats, harassment), a phenomenon that seems to derive from frustrations generated by more repressive social environments and political regimes.

  • Individuals who indulge most in online hostility are also those who are higher in status-driven risk taking. This personality trait corresponds to an orientation towards dominance, i.e., a propensity to seek to bend others to one’s will, for instance through intimidation. According to our cross-cultural data, individuals with this type of dominant personality are more numerous in unequal and non-democratic countries.

  • Similarly, independent analyses show that dominance is a key element in the psychology of political conflict: it also predicts more sharing of “fake news” mocking or insulting political opponents, and greater attraction to offline political conflict.

  • In summary, online political hostility appears to be largely the product of the interplay between particular personalities and social contexts repressing individual aspirations. It is the frustrations associated with social inequality that have made these people more aggressive, activating tendencies to see the world in terms of “us” vs “them”.

  • On a policy level, if we are to bring about a more harmonious Internet (and civil society), we will likely have to tackle wealth inequality and make our political institutions more democratic.

  • Recent analyses also remind us that social networks operate less as a mirror than as a distorting prism for the diversity of opinions in society. Outraged and potentially insulting political posts are generally written by people who are more committed to expressing themselves and more radical than the average person, whether to signal their commitments, express anger, or mobilise others to join political causes.

  • Even when they represent a relatively small proportion of the written output on the networks, moralistic and hostile posts tend to be promoted by algorithms programmed to push forward content capable of attracting attention and triggering responses, of which divisive political messages are an important part.

  • On the other hand, the majority of users, who are more moderate and less dogmatic, are more reluctant to get involved in political discussions that rarely reward good faith in argumentation and often escalate into outbursts of hatred.

  • Social media use seems to contribute to increasing political hostility and polarisation through at least one mechanism: exposure to caricatural versions of the political convictions of one’s rivals.

  • The way in which most people express their political convictions – both on social media and at the coffee machine – is rather lacking in nuance and tactfulness. It tends to reduce opposing positions to demonised caricatures, and is less concerned with persuading the other side than with signaling devotion to particular groups or causes, galvanising people who already agree with you, and maintaining connections with like-minded friends.

6
50
submitted 2 days ago by [email protected] to c/[email protected]

cross-posted from: https://feddit.org/post/317047

In February 2024, the EU Parliament adopted the eIDAS regulation, creating the framework for a "European Digital Identity Wallet". This digital wallet will enable citizens to identify themselves in a legally binding manner, both online and offline, sign documents, log in to websites and share personal data with others. Recently, the European Commission published the Architecture and Reference Framework (ARF) 1.4 for the technical implementation of the Wallet.

The success of the EU Digital Identity Wallet depends on its ability to gain citizens' trust and establish a resilient infrastructure in our current data-driven economy.

"However, after our analysis, we believe that this goal has been missed," says the digital rights group Epicenter Works.

"We see severe shortcomings in the ARF that either contradict the regulation or ignore important elements of it. These issues, if left unaddressed, could significantly undermine user rights and privacy."

7
62
submitted 3 days ago by [email protected] to c/[email protected]

Archived link

Original article behind paywall

China has long sought to discredit Chinese critics abroad, but targeting the 16-year-old daughter of a Chinese dissident in the United States by falsely portraying her as a drug user, an arsonist and a prostitute is a new escalation, one security expert says.

  • U.S. Federal law prohibits severe online harassment or threats, but that appears to be no deterrent to China’s efforts.

  • "They’re exporting their repression efforts and human rights abuses — targeting, threatening and harassing those who dare question their legitimacy or authority even outside China, including right here in the U.S.,” Christopher A. Wray, the director of the Federal Bureau of Investigation, told the American Bar Association in Washington in April.

  • Mr. Wray said China was exerting “intense, almost Mafia-style pressure” to try to silence dissidents now living legally in the United States, including activities online and off, like posting fliers near their homes.

  • Deng Yuwen, a prominent Chinese writer who now lives in exile in the suburbs of Philadelphia in the U.S., has regularly criticized China and its authoritarian leader, Xi Jinping. China’s reaction of late has been severe, with crude and ominously personal attacks online.

  • A covert propaganda network linked to the country’s security services has barraged not just Mr. Deng but also his teenage daughter with sexually suggestive and threatening posts on popular social media platforms, according to researchers at both Clemson University and Meta, which owns Facebook and Instagram.

  • The content, posted by users with fake identities, has appeared in replies to Mr. Deng’s posts on X, the social platform, as well as the accounts of public schools in their community, where the daughter, who is 16, has been falsely portrayed as a drug user, an arsonist and a prostitute.

  • Vulgar comments targeting the girl have also shown up on community pages on Facebook and even sites like TripAdvisor; Patch, a community news platform; and Niche, a website that helps parents choose schools, according to the researchers. As soon as these posts are deleted, Chinese trolls switch to new accounts and post the attacks again.

  • The harassment fits a pattern of online intimidation that has raised alarms in many countries where China’s attacks have become increasingly brazen. The campaign has included thousands of posts the researchers have linked to a network of social media accounts known as Spamouflage or Dragonbridge, an arm of the country’s vast propaganda apparatus.

  • China has long sought to discredit Chinese critics, but targeting a teenager in the United States is an escalation, said Darren Linvill, a founder of the Media Forensics Hub at Clemson, whose researchers documented the campaign against Mr. Deng.


8
55
submitted 3 days ago by [email protected] to c/[email protected]

Archived version

Emails sent to a Chinese dissident living in the Netherlands over his petition for asylum for his family members detained last year in Thailand were apparently fake, Dutch authorities said Friday.

The announcement was the first public statement from officials in the Netherlands in the unusual case of Gao Zhi, whose family members were stranded for months at a Thai immigration center while en route to the Netherlands and reportedly accused of sending bomb threats.

Based on emails he said he received, Gao at the time alleged that the Dutch Immigration and Naturalization Service had revoked his family’s visas, which would have allowed them to travel to the Netherlands.

He showed purported screenshots of the emails to the media, including one that ultimately said visas for his family members were revoked as they were being investigated for bomb threats made in Thailand. It remains unclear who sent the emails.

Gao declined to forward the emails to The Associated Press at the time, saying he feared this could jeopardize his family’s asylum case. The AP could not verify the authenticity of his claims.

On Friday, Britt Enthoven, a spokesperson for the Dutch Immigration and Naturalization Service, said the “message indeed doesn’t seem to be from” the service.

“I cannot give you any further information about the message,” Enthoven said.

Gao, though critical of the Chinese government online, had never been an activist back home. But his story at the time raised concerns that Chinese authorities may have made the bomb threats in the name of Gao’s family to try and control his political activities abroad.

Gao’s wife and two children were traveling to the Netherlands to join him in June and July last year, transiting through Thailand. His wife, Liu Fengling, and daughter Gao Han were detained by Thai police for overstaying their visitor visas. His son was not detained.

A spokesperson for the Royal Thai police at the time did not respond to AP queries about the case.

Gao turned to public advocacy to try and get his family out, and was helped by Wang Jingyu, another Chinese dissident living in the Netherlands who had gained prominence after being detained in Dubai for questioning the Chinese death toll figures in the 2020 border clashes with Indian soldiers in the Karakoram mountains.

Gao’s family was released last October, but only managed to travel to the Netherlands with a proper visa earlier this month, he said.

Separately, Gao has since claimed that Wang defrauded him of thousands of dollars while allegedly trying to help him during this process — claims that Wang dismissed as “nonsense” in a message to the AP.

Bob Fu, a U.S.-based activist who runs ChinaAid, a Christian rights organization, and who helped Wang when he was detained in Dubai, said that the group was forced to pay thousands of dollars of phone bills Wang allegedly made while in the Netherlands.

9
101
submitted 3 days ago by [email protected] to c/[email protected]

Temu—the Chinese shopping app that has rapidly grown so popular in the US that even Amazon is reportedly trying to copy it—is "dangerous malware" that's secretly monetizing a broad swath of unauthorized user data, Arkansas Attorney General Tim Griffin alleged in a lawsuit filed Tuesday.

Griffin cited research and media reports exposing Temu's allegedly nefarious design, which "purposely" allows Temu to "gain unrestricted access to a user's phone operating system, including, but not limited to, a user's camera, specific location, contacts, text messages, documents, and other applications."

"Temu is designed to make this expansive access undetected, even by sophisticated users," Griffin's complaint said. "Once installed, Temu can recompile itself and change properties, including overriding the data privacy settings users believe they have in place."

10
68
submitted 3 days ago by [email protected] to c/[email protected]
11
54
submitted 3 days ago by [email protected] to c/[email protected]
  • Bing’s translation and search engine services in China censor more extensively than Chinese competitors’ services do, according to new research.
  • Microsoft has maintained its heavy censorship of China-based services despite growing scrutiny from U.S. lawmakers.
  • Chinese tech firms are motivated to censor less severely, experts say.

Bing’s censorship rules in China are so stringent that even mentioning President Xi Jinping leads to a complete block of translation results, according to new research by the University of Toronto’s Citizen Lab that has been shared exclusively with Rest of World.

The institute found that Microsoft censors its Bing translation results more than top Chinese services, including Baidu Translate and Tencent Machine Translation. Bing became the only major foreign translation and search engine service available in China after Google withdrew from the Chinese market in 2010.

“If you try to translate five paragraphs of text, and two sentences contain a mention of Xi, Bing’s competitors in China would delete those two sentences and translate the rest. In our testing, Bing always censors the entire output. You get a blank. It is more extreme,” Jeffrey Knockel, senior research associate at Citizen Lab, told Rest of World.
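The distinction Knockel draws — dropping only the flagged sentences versus blanking the whole output — can be expressed as a small classifier. A minimal sketch, assuming we already have the source sentences, a parallel list marking which ones mention a sensitive name, and the sentences the service returned; the function name and interface are hypothetical, not Citizen Lab's actual tooling:

```python
def classify_censorship(sentences, flagged, output_sentences):
    """Classify how a translation service handled sensitive input.

    sentences:        source sentences submitted for translation
    flagged:          parallel list of bools, True where a sentence
                      mentions a politically sensitive name
    output_sentences: translated sentences the service returned
    """
    n_flagged = sum(flagged)
    # Whole translation blanked even though only some sentences were
    # sensitive (the behavior the report attributes to Bing).
    if len(output_sentences) == 0 and n_flagged > 0:
        return "whole-output"
    # Everything came back: no visible censorship.
    if len(output_sentences) == len(sentences):
        return "uncensored"
    # Exactly the flagged sentences are missing (the behavior the report
    # attributes to Baidu Translate and Tencent Machine Translation).
    if n_flagged > 0 and len(output_sentences) == len(sentences) - n_flagged:
        return "sentence-level"
    return "other"
```

For the five-paragraph example in the quote (two sensitive sentences), an empty result classifies as "whole-output", while a three-sentence result classifies as "sentence-level".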

12
40
submitted 3 days ago by [email protected] to c/[email protected]

Archived version

This is the result of an investigation by DoubleThink Lab, a research organization in Taiwan.

Beijing assaults Taiwan with a nonstop barrage of conspiracy theories and lies to undermine people’s faith in democracy — and China’s efforts are getting more sophisticated. Taiwan must do even more to fight back, DoubleThink Lab concludes.

Undermining the electoral chances of the ruling Democratic Progressive Party (DPP) is only one of the objectives of the CCP’s information-manipulation efforts. There are at least three others:

  1. “Selling” the CCP’s governance model to make the prospect of unification more attractive.
  2. Inducing anxiety about Taiwan’s strategic situation and making resistance seem futile by flexing the asymmetry in military power with China and eroding faith that Taiwan’s allies will come to its aid.
  3. Unraveling the fabric of Taiwan’s democracy by undermining people’s attachment to the status quo, driving polarization, and chipping away at trust in institutions and government.

[...] the Chinese Communist Party (CCP) has been making significant progress on two of its main objectives. But the news isn’t all bad.

The CCP has utterly failed to sell Taiwanese voters on its governance model. [...] the majority of people in Taiwan identify as Taiwanese, as opposed to Chinese or both Taiwanese and Chinese, and that they overwhelmingly prefer Taiwan’s independent status quo. Furthermore, less than 10 percent view China as trustworthy. Even the Kuomintang (KMT, or Chinese Nationalist Party), Taiwan’s former authoritarian ruling party, took steps to distance itself from Beijing before the election.

Why did the CCP fail in attracting Taiwanese voters to its governance model? No doubt certain hard realities of recent years were simply too difficult to overcome: The CCP government locked covid patients in their apartments and left them to die in building fires; it committed horrific human-rights violations against China’s Uyghur minority and crushed civil society in Hong Kong; and it persistently lobs military threats and engages in diplomatic bullying against Taiwan and others, all while the Chinese economy continues to slide. This makes for a tough sell.

But that is no cause for complacency. CCP-controlled social-media platforms may offer a new avenue for appealing to Taiwanese citizens. Research has correlated TikTok use with increased pro-China views among apolitical audiences and Taiwan People’s Party (TPP) supporters, who had previously been independent or swing voters. [...] Additionally, Doublethink’s research on WeChat influencers found that instructions provided to apolitical-content creators in Taiwan trying to sell products to China advise that 10 percent of their feeds should consist of pro-unification content for algorithmic optimization. We believe that the CCP is using its control of lucrative social-media algorithms to encourage influencers to slip what is essentially propaganda into otherwise apolitical content.

[...] The CCP’s information-manipulation narratives strike at Taiwan’s democracy in numerous ways: Narratives about government corruption undermine faith in democracy as a system that delivers for society; narratives about election fraud cast doubt on electoral processes and democracy’s legitimacy; and emotionally manipulative content helps to polarize society. Polarization, in turn, can undermine the legislative process — encouraging lawmakers to grandstand for partisan audiences, close space for discussion and concessions, and ride roughshod over democratic processes.

[...] In the aggregate, Taiwan’s voters report satisfaction with democracy and trust in electoral processes. [...] The polarization that Taiwan is now seeing has been driven in part by long-running CCP information-manipulation campaigns pushing disinformation and conspiracy theories about government corruption and antidemocratic behavior.

[...] [Taiwan's] resilience has rightly been credited to a tireless and dynamic whole-of-society response. Doublethink Lab commissioned an international-elections expert to develop a model capturing the key components of this approach. The result is expressed with the acronym “POWER”: Taiwan’s response is purpose driven, with a diverse range of citizens rallying around an existential threat; organic, driven from the bottom up and decentralized [...]

13
37
submitted 4 days ago by [email protected] to c/[email protected]
14
146
submitted 5 days ago by [email protected] to c/[email protected]
15
37
submitted 5 days ago by [email protected] to c/[email protected]
16
69
submitted 6 days ago by [email protected] to c/[email protected]

A company that verifies the identities of TikTok, Uber, and X users, sometimes by processing photographs of their faces and pictures of their drivers’ licenses, exposed a set of administrative credentials online for more than a year, potentially allowing hackers to access that sensitive data, according to screenshots and data obtained by 404 Media.

The Israel-based company, called AU10TIX, offers what it describes on its website as “full-service identity verification solutions.” This includes verifying peoples’ identity documents, conducting “liveness detection” in a real-time video stream with the user, and performing age verification, where a service will predict how old someone is based on their uploaded photo. AU10TIX also includes the logos of other companies on its site, such as Fiverr, PayPal, Coinbase, LinkedIn, and Upwork, some of which confirmed to 404 Media they are active or former AU10TIX clients.

The news comes as more social networks and pornography sites move towards an identity or age verification model, in which users are required to upload their real identity documents in order to access certain services. The breach highlights that identity services could themselves become a target for hackers. The cybersecurity researcher did not distribute the data beyond providing screenshots and some data to 404 Media for verification purposes.

“My personal reading of this situation is that an ID Verification service provider was entrusted with people's identities and it failed to implement simple measures to protect people's identities and sensitive ID documents,” Mossab Hussein, chief security officer at cybersecurity firm spiderSilk, and who alerted 404 Media to the exposed credentials, said.

17
24
submitted 6 days ago by [email protected] to c/[email protected]
18
10
submitted 6 days ago by [email protected] to c/[email protected]
  • Threat actors in the cyberespionage ecosystem are engaging in an increasingly disturbing trend of using ransomware as a final stage in their operations for the purposes of financial gain, disruption, distraction, misattribution, or removal of evidence.
  • This report introduces new findings about notable intrusions in the past three years, some of which were carried out by a Chinese cyberespionage actor but remain publicly unattributed.
  • Our findings indicate that ChamelGang, a suspected Chinese APT group, targeted the major Indian healthcare institution AIIMS and the Presidency of Brazil in 2022 using the CatB ransomware. Attribution information on these attacks has not been publicly released to date.
  • ChamelGang also targeted a government organization in East Asia and critical infrastructure sectors, including an aviation organization in the Indian subcontinent.
  • In addition, a separate cluster of intrusions involving off-the-shelf tools BestCrypt and BitLocker have affected a variety of industries in North America, South America, and Europe, primarily the US manufacturing sector.
  • While attribution for this secondary cluster remains unclear, overlaps exist with past intrusions that involve artifacts associated with suspected Chinese and North Korean APT clusters.
19
1
submitted 6 days ago by [email protected] to c/[email protected]

cross-posted from: https://feddit.org/post/176888

GitCode is a git-hosting website operated by Chongqing Open-Source Co-Creation Technology Co Ltd with technical support from CSDN and Huawei Cloud.

It is being reported that many users' repositories are being cloned and re-hosted on GitCode without explicit authorization.

There is also a thread on Ycombinator (archived link)

20
31
submitted 1 week ago by [email protected] to c/[email protected]

Archived link

  • A previously undocumented Chinese-speaking threat actor codenamed SneakyChef has been linked to an espionage campaign primarily targeting government entities across Asia and EMEA (Europe, Middle East, and Africa) with SugarGh0st malware since at least August 2023.

  • SneakyChef uses lures that are scanned documents of government agencies, most of which are related to various countries' Ministries of Foreign Affairs or embassies, according to security analysts.

21
62
submitted 1 week ago by [email protected] to c/[email protected]

Archived link

"The Stanford Internet Observatory continues its important work following the departure of founding director Alex Stamos under the leadership of faculty director Jeff Hancock, whose research program focuses on areas of trust, deception and online harms; social media and well-being; and AI in human communication," the organization says on its website.

"Stanford has not shut down or dismantled SIO as a result of outside pressure. SIO does, however, face funding challenges as its founding grants will soon be exhausted."

As a result, SIO continues to actively seek support for its research and teaching programs under new leadership.

22
25
submitted 1 week ago by [email protected] to c/[email protected]

Why is this still so funny to me?

23
244
submitted 1 week ago by [email protected] to c/[email protected]
24
33
submitted 1 week ago by [email protected] to c/[email protected]
25
41
submitted 1 week ago by [email protected] to c/[email protected]

In spring, 2018, Mark Zuckerberg invited more than a dozen professors and academics to a series of dinners at his home to discuss how Facebook could better keep its platforms safe from election disinformation, violent content, child sexual abuse material, and hate speech. Alongside these secret meetings, Facebook was regularly making pronouncements that it was spending hundreds of millions of dollars and hiring thousands of human content moderators to make its platforms safer. After Facebook was widely blamed for the rise of “fake news” that supposedly helped Trump win the 2016 election, Facebook repeatedly brought in reporters to examine its election “war room” and explained what it was doing to police its platform, which famously included a new “Oversight Board,” a sort of Supreme Court for hard Facebook decisions.

At the time, Joseph and I published a deep dive into how Facebook does content moderation, an astoundingly difficult task considering the scale of Facebook’s userbase, the differing countries and legal regimes it operates under, and the dizzying array of borderline cases it would need to make policies for and litigate against. As part of that article, I went to Facebook’s Menlo Park headquarters and had a series of on-the-record interviews with policymakers and executives about how important content moderation is and how seriously the company takes it. In 2018, Zuckerberg published a manifesto stating that “the most important thing we at Facebook can do is develop the social infrastructure to build a global community,” and that one of the most important aspects of this would be to “build a safe community that prevents harm [and] helps during crisis” and to build an “informed community” and an “inclusive community.”

Several years later, Facebook has been overrun by AI-generated spam and outright scams. Many of the “people” engaging with this content are bots who themselves spam the platform. Porn and nonconsensual imagery is easy to find on Facebook and Instagram. We have reported endlessly on the proliferation of paid advertisements for drugs, stolen credit cards, hacked accounts, and ads for electricians and roofers who appear to be soliciting potential customers with sex work. Its own verified influencers have their bodies regularly stolen by “AI influencers” in the service of promoting OnlyFans pages also full of stolen content.

Meta still regularly publishes updates that explain what it is doing to keep its platforms safe. In April, it launched “new tools to help protect against extortion and intimate image abuse” and in February it explained how it was “helping teens avoid sextortion scams” and that it would begin “labeling AI-generated images on Facebook, Instagram, and Threads,” though the overwhelming majority of AI-generated images on the platform are still not labeled. Meta also still publishes a “Community Standards Enforcement Report,” where it explains things like “in August 2023 alone, we disabled more than 500,000 accounts for violating our child sexual exploitation policies.” There are still people working on content moderation at Meta. But experts I spoke to who once had great insight into how Facebook makes its decisions say that they no longer know what is happening at the platform, and I’ve repeatedly found entire communities dedicated to posting porn, grotesque AI, spam, and scams operating openly on the platform.

Meta now at best inconsistently responds to our questions about these problems, and has declined repeated requests for on-the-record interviews for this and other investigations. Several of the professors who used to consult directly or indirectly with the company say they have not engaged with Meta in years. Some of the people I spoke to said that they are unsure whether their previous contacts still work at the company or, if they do, what they are doing there. Others have switched their academic focus after years of feeling ignored or harassed by right-wing activists who have accused them of being people who just want to censor the internet.

Meanwhile, several groups that have done very important research on content moderation are falling apart or being actively targeted by critics. Last week, Platformer reported that the Stanford Internet Observatory, which runs the Journal of Online Trust & Safety, is “being dismantled” and that several key researchers, including Renee DiResta, who did critical work on Facebook’s AI spam problem, have left. In a statement, the Stanford Internet Observatory said “Stanford has not shut down or dismantled SIO as a result of outside pressure. SIO does, however, face funding challenges as its founding grants will soon be exhausted.” (Stanford has an endowment of $36 billion.)

Following her departure, DiResta wrote for The Atlantic that conspiracy theorists regularly claim she is a CIA shill and one of the leaders of a “Censorship Industrial Complex.” Media Matters is being sued by Elon Musk for pointing out that ads for major brands were appearing next to antisemitic and pro-Nazi content on Twitter and recently had to do mass layoffs.

“You go from having dinner at Zuckerberg’s house to them being like, yeah, we don’t need you anymore,” Danielle Citron, a professor at the University of Virginia’s School of Law who previously consulted with Facebook on trust and safety issues, told me. “So yeah, it’s disheartening.”

It is not a good time to be in the content moderation industry. Republicans and the right wing of American politics more broadly see this as a deserved reckoning for liberal-leaning, California-based social media companies that have taken away their free speech. Elon Musk bought an entire social media platform in part to dismantle its content moderation team and its rules. And yet, what we are seeing on Facebook is not a free-speech haven. It is a zombified platform full of bots, scammers, malware, bloated features, horrific AI-generated images, abandoned accounts, and dead people that has become a laughingstock on other platforms. Meta has fucked around with Facebook, and now it is finding out.

“I believe we're in a time of experimentation where platforms are willing to gamble and roll the dice and say, ‘How little content moderation can we get away with?,'” Sarah T. Roberts, a UCLA professor and author of Behind the Screen: Content Moderation in the Shadows of Social Media, told me.

In November, Elon Musk sat on stage with a New York Times reporter, and was asked about the Media Matters report that caused several major companies to pull advertising from X: “I hope they stop. Don’t advertise,” Musk said. “If somebody is going to try to blackmail me with advertising, blackmail me with money, go fuck yourself. Go fuck yourself. Is that clear? I hope it is.”

There was a brief moment last year where many large companies pulled advertising from X, ostensibly because they did not want their brands associated with antisemitic or white nationalist content and did not want to be associated with Musk, who has not only allowed this type of content but has often espoused it himself. But X has told employees that 65 percent of advertisers have returned to the platform, and the death of X has thus far been greatly exaggerated. Musk spent much of last week doing damage control, and X’s revenue is down significantly, according to Bloomberg. But the comments did not fully tank the platform, and Musk continues to float it with his enormous wealth.

This was an important moment not just for X, but for other social media companies, too. In order for Meta’s platforms to be seen as a safer alternative for advertisers, Zuckerberg had to meet the extremely low bar of “not overtly platforming Nazis” and “didn’t tell advertisers to ‘go fuck yourself.’”

UCLA’s Roberts has always argued that content moderation is about keeping platforms that make almost all of their money on advertising “brand safe” for those advertisers, not about keeping their users “safe” or censoring content. Musk’s apology tour has highlighted Roberts’s point that content moderation is for advertisers, not users.

“After he said ‘Go fuck yourself,’ Meta can just kind of sit back and let the ball roll downhill toward Musk,” Roberts said. “And any backlash there has been to those brands or to X has been very fleeting. Companies keep coming back and are advertising on all of these sites, so there have been no consequences.”

Meta’s content moderation workforce, which it once talked endlessly about, is now rarely discussed publicly by the company (Accenture was at one point making $500 million a year from its Meta content moderation contract). Meta did not answer a series of detailed questions for this piece, including ones about its relationship with academia, its philosophical approach to content moderation, and what it thinks of AI spam and scams, or if there has been a shift in its overall content moderation strategy. It also declined a request to make anyone on its trust and safety teams available for an on-the-record interview. It did say, however, that it has many more human content moderators today than it did in 2018.

“The truth is we have only invested more in the content moderation and trust and safety spaces,” a Meta spokesperson said. “We have around 40,000 people globally working on safety and security today, compared to 20,000 in 2018.”

Roberts said content moderation is expensive, and that, after years of speaking about the topic openly, perhaps Meta now believes it is better to operate primarily under the radar.

“Content moderation, from the perspective of the C-suite, is considered to be a cost center, and they see no financial upside in providing that service. They’re not compelled by the obvious and true argument that, over the long term, having a hospitable platform is going to engender users who come on and stay for a longer period of time in aggregate,” Roberts said. “And so I think [Meta] has reverted to secrecy around these matters because it suits them to be able to do whatever they want, including ramping back up if there’s a need, or, you know, abdicating their responsibilities by diminishing the teams they may have once had. The whole point of having offshore, third-party contractors is they can spin these teams up and spin them down pretty much with a phone call.”

Roberts added “I personally haven’t heard from Facebook in probably four years.”

Citron, who worked directly with Facebook on nonconsensual imagery being shared on the platform and on a system that automatically flags nonconsensual intimate imagery and CSAM based on a hash database of abusive images, which was adopted by Facebook and then YouTube, said that what happened to Facebook is “definitely devastating.”

“There was a period where they understood the issue, and it was very rewarding to see the hash database adopted, like, ‘We have this possible technological way to address a very serious social problem,’” she said. “And now I have not worked with Facebook in any meaningful way since 2018. We’ve seen the dismantling of content moderation teams [not just at Meta] but at Twitch, too. I worked with Twitch and then I didn’t work with Twitch. My people got fired in April.”
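The hash-database approach Citron describes can be illustrated in a few lines. This is a simplified sketch, not Meta's actual system: real deployments use robust perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas this example uses an exact cryptographic digest, and all names and inputs here are hypothetical.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Exact-match digest of an image file's bytes (real systems use
    perceptual hashes that tolerate small modifications)."""
    return hashlib.sha256(image_bytes).hexdigest()

# The shared database holds only hashes of known abusive images,
# so platforms can match uploads without exchanging the images themselves.
known_hashes = {image_hash(b"previously-flagged-image-bytes")}

def should_flag(upload: bytes) -> bool:
    """Flag an upload if its hash appears in the shared database."""
    return image_hash(upload) in known_hashes
```

The key design point is that participating platforms exchange opaque hashes rather than the abusive material itself, which is why the same database could be adopted first by Facebook and then by YouTube.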

“There was a period of time where companies were quite concerned that their content moderation decisions would have consequences. But those consequences have not materialized. X shows that the PR loss leading to advertisers fleeing is temporary,” Citron added. “It’s an experiment. It’s like ‘What happens when you don’t have content moderation?’ If the answer is, ‘You have a little bit of a backlash, but it’s temporary and it all comes back,’ well, you know what the answer is? You don’t have to do anything. 100 percent.”

I told everyone I spoke to that, anecdotally, it felt to me like Facebook has become a disastrous, zombified cesspool. All of the researchers I spoke to said that this is not just a vibe.

“It’s not anecdotal, it’s a fact,” Citron said. In November, she published a paper in the Yale Law Journal about women who have faced gendered abuse and sexual harassment in Meta’s Horizon Worlds virtual reality platform, which found that the company is ignoring user reports and expects the targets of this abuse to simply use a “personal boundary” feature to ignore it. The paper notes that “Meta is following the nonrecognition playbook in refusing to address sexual harassment on its VR platforms in a meaningful manner.”

“The response from leadership was like ‘Well, we can’t do anything,’” Citron said. “But having worked with them since 2010, it’s like ‘You know you can do something!’ The idea that they think that this is a hard problem given that people are actually reporting this to them, it’s gobsmacking to me.”

Another researcher I spoke to, who I am not naming because they have been subjected to harassment for their work, said “I also have very little visibility into what’s happening at Facebook around content moderation these days. I’m honestly not sure who does have that visibility at the moment. And perhaps both of these are at least partially explained by the political backlash against moderation and researchers in this space.” Another researcher said “it’s a shitshow seeing what’s happening to Facebook. I don’t know if my contacts on the moderation teams are even still there at this point.” A third said Facebook did not respond to their emails anymore.

Not all of this can be explained by Elon Musk or by direct political backlash from the right. Section 230 of the Communications Decency Act gives social media platforms wide latitude to do nothing. And, perhaps more importantly, two state-level cases alleging social media censorship have made their way to the Supreme Court, which means Meta and other social media platforms may be calculating that they would put themselves at greater legal risk by doing content moderation. The Supreme Court’s decision on these cases is expected later this week.

The reason I have been so interested in what is happening on Facebook right now is not because I am particularly offended by the content I see there. It’s because Facebook’s present—a dying, decaying colossus taken over by AI content and more or less left to rot by its owner—feels like the future, or the inevitable outcome, of other social platforms and of an AI-dominated internet. I have been likening zombie Facebook to a dead mall. There are people there, but they don’t know why, and most of what’s being shown to them is scammy or weird.

“It’s important to note that Facebook is Meta now, but the metaverse play has really fizzled. They don’t know what the future is, but they do know that ‘Facebook’ is absolutely not the future,” Roberts said. “So there’s a level of disinvestment in Facebook because they don’t know what the next thing exactly is going to be, but they know it’s not going to be this. So you might liken it to the deindustrialization of a manufacturing city that loses its base. There’s not a lot of financial gain to be had in propping up Facebook with new stuff, but it’s not like it disappears or its footprint shrinks. It just gets filled with crypto scams, phishing, hacking, romance scams.”

“And then poor content moderation begets scammers begets this useless crap content, AI-generated stuff, uncanny valley stuff that people don’t enjoy and it just gets worse and worse,” Roberts said. “So more of that will proliferate in lieu of anything that you actually want to spend time on.”
