this post was submitted on 05 Apr 2024

Improve The News


Improve The News is a free news aggregator and news analysis site developed by a group of researchers at MIT and elsewhere to improve your access to trustworthy news. Many website algorithms push you (for ad revenue) into a filter bubble by reinforcing the narratives you impulse-click on. By understanding other people’s arguments, you understand why they do what they do – and have a better chance of persuading them.

**What's establishment bias?** The establishment view is what all big parties and powers agree on, which varies between countries and over time. For example, the old establishment view that women shouldn’t be allowed to vote was successfully challenged. ITN makes it easy for you to compare the perspectives of the pro-establishment mainstream media with those of smaller establishment-critical news outlets that you won’t find in most other news aggregators.

This Magazine/Community is not affiliated with Improve The News and is an unofficial repository of the information posted there.


**LR (left/right): 1 = left leaning, 3 = neutral, 5 = right leaning**

**CP (critical/pro-establishment): 1 = critical, 3 = neutral, 5 = pro**

founded 1 year ago
 
  • As the war in Gaza enters its seventh month, the media have increasingly reported on Israel's use of AI-based technologies in several areas of the war. On Friday, The Intercept reported that Israel's use of Google Photos violated the company's rules. Intercept
  • Israel has reportedly used Google Photos for its facial recognition program in Gaza, with an Israeli official saying it worked better than any alternative facial recognition tech and assisted in making a "hit list" of alleged Hamas fighters who participated in the Oct. 7 attack. New York Times (LR: 2 CP: 5)
  • The Intercept argued that Israel's use of Google Photos breached the company's terms for "dangerous and illegal activities" when used to "cause serious and immediate harm to people." Google reportedly did not comment on the matter. Intercept
  • Earlier this week, Israeli outlet +972 Magazine reported that Israel has been using an AI-based program named "Lavender" to mark "all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ)." +972 Magazine
  • The report claimed that 37K Palestinians were flagged as suspected militants and their homes were marked for possible air strikes. The magazine added that the military purposefully targeted militants at night, as it was "easier to locate the individuals in their private houses," killing thousands of civilians as a result. +972 Magazine
  • The army also allegedly decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians — a departure from past rules intended to limit "collateral damage" in assassinations of low-ranking militants. +972 Magazine

Pro-establishment narrative:

  • AI-based systems in war don't have to be perfect; they just need to be better than humans. Of course, there's always a danger that policymakers will go too far in using AI in war, but that doesn't mean AI-based intelligence gathering and weapons systems must be limited altogether. The AI arms race is here, and countries must adapt to this new frontier.
    FOREIGN POLICY

Establishment-critical narrative:

  • This could very well be a war crime. Israel is using systems that are largely untested and known to make errors. Still, it has put together "kill lists" with as many as 37K names on them, with humans monitoring the AI as little more than a rubber stamp rather than a check on the technology's accuracy. This is why there are so many civilian deaths, and Israel must be held accountable.
    GUARDIAN (LR: 2 CP: 5)

Technoskeptic narrative:

  • Israel's use of AI in its brutal war in Gaza demonstrates the necessity of approaching technological development with caution. These dystopian programs acquire and kill targets with ruthless efficiency and little oversight. There must be a moratorium on AI-based technologies in war, as they're rapidly being used to commit unspeakable crimes.
    AL JAZEERA (LR: 2 CP: 1)

Nerd narrative:

  • There is a 65% chance that global-catastrophic-risk-focused evaluation of certain AI systems by accredited bodies will become mandatory in the U.S. before 2035, according to the Metaculus prediction community.
    METACULUS (LR: 3 CP: 3)
no comments (yet)