- As the war in Gaza enters its seventh month, media outlets have increasingly reported on Israel's use of AI-based technologies across several areas of the war. On Friday, The Intercept reported that Israel's use of Google Photos violated the company's rules. Intercept
- Israel has reportedly used Google Photos for its facial recognition program in Gaza, with an Israeli official saying it worked better than any alternative facial recognition technology and helped compile a "hit list" of alleged Hamas fighters who participated in the Oct. 7 attack. New York Times (LR: 2 CP: 5)
- The Intercept argued that Israel's use of Google Photos breached the company's terms of service, which prohibit "dangerous and illegal activities" that "cause serious and immediate harm to people." Google reportedly declined to comment on the matter. Intercept
- Earlier this week, Israeli outlet +972 Magazine reported that Israel has been using an AI-based program named "Lavender" to mark "all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ)." +972 Magazine
- The report claimed that 37K Palestinians were flagged as suspected militants and that their homes were marked for possible air strikes. The magazine added that the military purposefully targeted suspected militants at night because it was "easier to locate the individuals in their private houses," killing thousands of civilians as a result. +972 Magazine
- The army also allegedly decided during the first weeks of the war that, for every junior Hamas operative Lavender marked, it was permissible to kill up to 15 or 20 civilians, a departure from past rules aimed at avoiding "collateral damage" in assassinations of low-ranking militants. +972 Magazine
Pro-establishment narrative:
- AI-based systems in war don't have to be perfect; they just need to be better than humans. Of course, there's always a danger that policymakers will go too far in using AI in war, but that doesn't mean AI-based intelligence gathering and weapons systems should be limited altogether. The AI arms race is here, and countries must adapt to this new frontier.
FOREIGN POLICY
Establishment-critical narrative:
- This could very well be a war crime. Israel is using systems that are largely untested and known to make errors. Still, it has put together "kill lists" with as many as 37K names on them, with human oversight amounting to little more than a rubber stamp rather than a genuine check on the technology's accuracy. This is why there are so many civilian deaths, and Israel must be held accountable.
GUARDIAN (LR: 2 CP: 5)
Technoskeptic narrative:
- Israel's use of AI in its brutal war in Gaza demonstrates the necessity of approaching technological development with caution. These dystopian programs select targets and direct killings with ruthless efficiency and little oversight. There must be a moratorium on AI-based technologies in war, as they're rapidly being used to commit unspeakable crimes.
AL JAZEERA (LR: 2 CP: 1)
Nerd narrative:
- There is a 65% chance that global-catastrophic-risk-focused evaluation of certain AI systems by accredited bodies will become mandatory in the U.S. before 2035, according to the Metaculus prediction community.
METACULUS (LR: 3 CP: 3)