How Israel is testing artificial intelligence in the war against the Palestinians

Last year, the Israeli military launched a new strategy to integrate weapons and AI technology into all military branches – the most radical strategic transformation in decades. Last month, Israel’s defense ministry boasted that the military intends to become an AI “superpower” in the field of autonomous warfare.

“There are those who see AI as the next revolution to change the face of warfare on the battlefield,” retired army general Eyal Zamir said at the Herzliya Conference, an annual security forum. Military applications could include “the ability for platforms to strike in swarms, or for combat systems to operate independently … and to assist in rapid decision-making, on a greater scale than we have ever seen.”

Israel’s defense industry is producing a wide range of autonomous military vessels and vehicles, including a “weaponized robotic vehicle” described as a “robust” and “lethal” platform featuring “automatic target recognition”. An autonomous “secret intelligence-gathering” submarine, dubbed the BlueWhale, has undergone tests.

If all of this scares the shit out of you, it should. Israel is creating not just a Frankenstein monster but entire swarms of them, capable of wreaking havoc not only on Palestinian targets but on anyone, anywhere in the world.

The Palestinians are the testing ground for such technologies, which serve as a “proof of concept” for global buyers. Israel’s most likely customers are countries at war. While such weapons may offer an advantage on the battlefield, they will surely increase the overall suffering and bloodshed among all participants. They will make it possible to kill in greater numbers, with greater lethality. Because of this, they are monstrous.

Another new Israeli AI technology, Knowledge Well, not only monitors where Palestinian militants are firing rockets but can also predict the locations of future attacks.

While such systems can offer Israelis protection from Palestinian weapons, they also allow an undaunted Israel to operate as a virtual killing machine, unleashing terrifying assaults on military and civilian targets while facing minimal resistance from its enemies.

Search and destroy

Such technologies offer a warning to the world about how pervasive and intrusive AI has become. Nor is it reassuring when the Israeli military’s chief AI expert says he competes with private-sector salaries for AI specialists by offering “significance”. As if that were any reassurance, he adds that Israel’s AI weapons will, for “the foreseeable future … always [have] one person around”.

I leave you to think about how “significant” killing Palestinians can be. Nor is it likely that a human will always control these battlefield weapons. The future holds robots that can think, judge and fight on their own, with little or no human intervention beyond initial programming. They have been described as the “third revolution in warfare after gunpowder and nuclear weapons”.

While they may be programmed to seek out and destroy the enemy, who determines who the enemy is, and who makes life-and-death decisions on the battlefield? We already know that in war, humans make mistakes, sometimes terrible ones. Military programmers, for all their experience modeling what armed robots will think and do, are no less prone to error. Their creations will carry enormous behavioral unknowns, which could cost countless lives.

Palestine is one of the most heavily surveilled places in the world. CCTV cameras are ever-present in a Palestinian landscape dominated by Israeli watchtowers, some armed with remote-controlled robotic guns. Drones fly overhead, capable of firing tear gas, shooting directly at Palestinians below, or directing fire from ground personnel. In Gaza, constant surveillance instills trauma and fear in residents.

Additionally, Israel now deploys facial recognition apps, such as Blue Wolf, that aim to capture images of every Palestinian. These images are fed into a vast database that can be mined for any purpose. Software from companies like AnyVision, which can identify huge numbers of individuals, is integrated with systems containing personal information, including social media posts.

It is a network of control that instills fear, paranoia and a sense of hopelessness. As former Israeli army chief of staff Rafael Eitan once said, the goal is to “make the Palestinians run like drugged cockroaches in a bottle.”

Frankenstein’s monster

Many data researchers and privacy advocates have warned of the dangers of AI, both in the public sphere and on the battlefield. AI-powered military robots are just one of many examples, and Israel is at the forefront of such developments. Israel is Dr. Frankenstein, and this technology is its monster.

Human Rights Watch has called for a ban on such military technology, warning: “Machines cannot understand the value of human life.”

Israeli AI technology may be intended, at least in the eyes of its creators, for the protection and defense of Israelis. But the damage it inflicts fuels a vicious cycle of never-ending violence. The Israeli military, and the media promoting such wizardry, only create more victims: first Palestinians, and later the victims of every dictatorship or genocidal state that buys these weapons.

Another “achievement” of AI was the Mossad assassination of the father of Iran’s nuclear program, Mohsen Fakhrizadeh, in 2020. The New York Times offered this breathless account: “Iranian agents working for the Mossad had parked a blue Nissan Zamyad pickup truck by the side of the road… In the bed of the truck was a 7.62mm sniper machine gun… The assassin, a skilled sniper, took up a position, calibrated his sights, cocked the weapon and lightly pulled the trigger.

“He was nowhere near Absard [in Iran], however. He was peering at a computer screen at an undisclosed location more than 1,000 miles away … [This operation was] the debut test of a high-tech computerized marksman equipped with artificial intelligence and multi-camera eyes, operated by satellite and capable of firing 600 rounds per minute.

“The upgraded, remote-controlled machine gun now joins the combat drone in the arsenal of high-tech weapons for remote targeted killing. But unlike a drone, the robotic machine gun draws no attention in the sky, where a drone could be shot down, and it can be placed anywhere, qualities that could reshape the worlds of security and espionage.”

We know the dangers posed by autonomous weapons. An Afghan family was brutally killed in a US drone strike in 2021 because one of its members was wrongly identified as a wanted terrorist. We know that the Israeli military has repeatedly killed Palestinian civilians in what it called “blunders” on the battlefield. If humans fighting on a battlefield can go so egregiously wrong, how can we expect AI-powered weapons and robots to do a better job?

This should sound an alarm about the devastating impact AI is sure to have in the military realm, and about Israel’s leading role in developing such lethal, unregulated weapons.

Opinions expressed in this article are those of the author and do not necessarily reflect the editorial policy of Middle East Eye.
