Lethal Robotic Weapons Systems Are on the Rise, But So Is the Fight to Stop Them

Here’s a scenario to consider: a military force has purchased a million cheap, disposable flying drones, each the size of a deck of cards and capable of carrying three grams of explosives — enough to kill a single person or, in a “shaped charge,” pierce a steel wall. They’ve been programmed to seek out and “engage” (kill) certain human beings, based on specific “signature” characteristics like carrying a weapon, say, or having a particular skin color. They can be packed into a single shipping container and deployed remotely. Once launched, they fly to their targets and kill without further human intervention.

Science fiction? No. It could happen tomorrow. It is already possible.

Lethal autonomous weapons systems (LAWS), in fact, have a long and rich history. In the spring of 1972, I spent a few weeks occupying the physics building at Columbia University in New York City, along with 100 other students. We slept on the floor, ate donated takeout food, and listened to Allen Ginsberg when he honored us with his poetry. I wrote leaflets and commandeered a Xerox machine to print them.

And why, of all the buildings on campus, did we choose the physics building? The answer: to convince five Columbia faculty physicists to sever their connections with the Pentagon’s Jason Defense Advisory Group, a program offering money and lab space to support basic scientific research that might prove useful for U.S. war-making efforts. Our specific objection: the involvement of Jason’s scientists in designing parts of what was then known as the “automated battlefield” for deployment in Vietnam. That system would indeed prove a forerunner of the lethal autonomous weapons systems that are poised to become a potentially significant part of this country’s — and the world’s — armory.

Early (Semi-)Autonomous Weapons

In prosecuting the war in Indochina, Washington had to deal with many strategic problems, including the corruption and unpopularity of the South Vietnamese regime it was supporting. Its biggest military challenge, however, was probably North Vietnam’s continual infiltration of personnel and supplies along what was called the Ho Chi Minh Trail, which ran from north to south along the Cambodian and Laotian borders. The Trail was actually a network of easily repaired dirt roads and footpaths, as well as streams and rivers, lying under thick jungle canopy that made it almost impossible to detect movement from the air.

The U.S. response, developed by Jason in 1966 and deployed the following year, was an attempt to stop that infiltration by creating an automated battlefield composed of four parts, analogous to a human body’s eyes, nerves, brain, and limbs. The eyes were a broad variety of sensors — acoustic, seismic, even chemical (for sensing human urine) — most of them dropped by air into the jungle. The nerve equivalents transmitted signals to the “brain.” Since the sensors had a maximum transmission range of only about 20 miles, however, the U.S. military had to constantly fly aircraft above the foliage to pick up any signal tripped by passing North Vietnamese troops or transports and relay it to the brain. (The aircraft were originally meant to be remotely controlled, but they performed so poorly that human pilots were usually needed.)

And that brain, a magnificent military installation secretly built at Nakhon Phanom in Thailand, housed two state-of-the-art IBM mainframe computers. A small army of programmers wrote and rewrote their code as they tried to make sense of the stream of data transmitted by the planes. The target coordinates the computers generated were then relayed to attack aircraft, the system’s equivalent of limbs. The group responsible for running the automated battlefield was called Task Force Alpha, and the whole project was code-named Igloo White.

As it turned out, Igloo White was an expensive failure, costing about a billion dollars a year for five years (almost $40 billion total in today’s dollars). The time delay between a sensor being tripped and munitions being dropped made the system ineffective. As a result, Task Force Alpha often simply carpet bombed areas where a single sensor had gone off. The North Vietnamese quickly figured out how the sensors worked and devised ways to fool them, from playing truck-ignition recordings to planting buckets full of urine.

Given the history of semi-automated weapons systems like drones and “smart bombs” in the intervening years, you probably won’t be surprised to learn that this first automated battlefield couldn’t discriminate between soldiers and civilians. In this, it merely continued a trend that has existed since at least the eighteenth century, in which wars routinely kill more civilians than combatants.

These shortcomings didn’t keep Defense Department officials from regarding the automated battlefield with awe. Andrew Cockburn described this worshipful posture in his book Kill Chain: The Rise of the High-Tech Assassins, quoting Leonard Sullivan, a high-ranking Pentagon official who visited Vietnam in 1968: “Just as it is almost impossible to be an agnostic in the Cathedral of Notre Dame, so it is difficult to keep from being swept up in the beauty and majesty of the Task Force Alpha temple.”

You might wonder who or what was worshipped at such a temple.

Most aspects of that Vietnam-era “automated” battlefield actually required human intervention. Human beings planted the sensors, programmed the computers, piloted the planes, and released the bombs. In what sense, then, was that battlefield “automated”? In a sign of things to come, the system had eliminated human intervention at a single crucial point in the process: the decision to kill. It was the computers that decided where and when to drop the bombs.

In 1969, Army Chief of Staff William Westmoreland expressed his enthusiasm for this removal of the human element from war-making. Addressing a luncheon of the Association of the U.S. Army, a lobbying group, he declared:

“On the battlefield of the future enemy forces will be located, tracked, and targeted almost instantaneously through the use of data links, computer-assisted intelligence evaluation, and automated fire control. With first round kill probabilities approaching certainty, and with surveillance devices that can continually track the enemy, the need for large forces to fix the opposition will be less important.”

What Westmoreland meant by “fix the opposition” was kill the enemy. The twenty-first-century military euphemism is “engage.” In either case, the meaning is the same: the role of lethal autonomous weapons systems is to find and kill human beings automatically, without human intervention.

New LAWS for a New Age — Lethal Autonomous Weapons Systems

Each autumn, the British Broadcasting Corporation sponsors a series of four lectures given by an expert in a particular field of study. In 2021, the BBC invited Stuart Russell, professor of computer science and founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, to deliver those “Reith Lectures.” His general subject was the future of artificial intelligence (AI), and the second lecture was entitled “The Future Role of AI in Warfare.” In it, he addressed the issue of lethal autonomous weapons systems, or LAWS, which the United Nations defines as “weapons that locate, select, and engage human targets without human supervision.”

Russell’s main point, eloquently made, was that, although many people believe lethal autonomous weapons are a potential future nightmare, residing in the realm of science fiction, “They are not. You can purchase them right now. They are advertised on the web.”

I’ve never seen any of the Terminator movies, but military planners and their PR flacks seem to assume that most people get their understanding of such LAWS from that dystopian world. Pentagon officials are frequently at pains to explain why the weapons they are developing are not, in fact, real-life equivalents of SkyNet — the worldwide communications network that, in those films, becomes self-conscious and decides to eliminate humankind. Not to worry, as a deputy secretary of defense told Russell, “We have listened carefully to these arguments and my experts have assured me that there is no risk of accidentally creating SkyNet.”

Russell’s point, however, was that a weapons system doesn’t need self-awareness to act autonomously or to present a threat to innocent human beings. What it does need is:

  • A mobile platform (anything that can be moved, from a tiny quadcopter up to a fixed-wing plane)
  • Sensing capability (the ability to detect visual or sound information)
  • The ability to make tactical decisions (the same kind of capacity found in chess-playing computer programs)
  • The ability to “engage,” i.e., kill (which can be as complex as dropping a bomb or firing a missile at a target, or as simple as committing robot suicide by slamming into a target and exploding)

The plain fact is that such systems are already available. Indeed, a government-owned weapons company in Turkey recently advertised its Kargu drone — a quadcopter “the size of a dinner plate,” as Russell described it, which can carry a kilogram of explosives and is capable of making “anti-personnel autonomous hits” with “targets selected on images and face recognition.” The company’s site has since been altered to emphasize its adherence to a supposed “man-in-the-loop” principle. However, the U.N. has reported that a fully autonomous Kargu-2 was, in fact, deployed in Libya in 2020.

You can buy your own quadcopter right now on Amazon, although you’ll still have to apply some DIY computer skills if you want to get it to operate autonomously.

Rather than anything out of the Terminator movies, however, lethal autonomous weapons systems are more likely to look like swarms of tiny killer bots. Computer miniaturization means the technology already exists to create effective LAWS. If your smartphone could fly, it could be an autonomous weapon. Newer phones use facial recognition software to “decide” whether to allow access. It’s not a great leap to create flying weapons the size of phones, programmed to “decide” to attack specific individuals, or individuals with specific features. Indeed, it’s likely such weapons already exist.

Can We Outlaw LAWS?

So, what’s wrong with LAWS, and is there any point in trying to outlaw them? Some opponents argue that these systems eliminate human responsibility for making lethal decisions. Unlike a human being aiming and pulling the trigger of a rifle, such critics hold, a LAWS can choose and fire on its own targets. Therein, they argue, lies the special danger of these systems, which will inevitably make mistakes, as anyone whose iPhone has refused to recognize their face will acknowledge.

In my view, the issue isn’t that autonomous systems remove human beings from lethal decisions. To the extent that such weapons make mistakes, human beings will still bear the responsibility for having deployed imperfect lethal systems. LAWS are designed and deployed by human beings, who therefore remain responsible for their effects. Like the semi-autonomous drones of the present moment (often piloted from half a world away), lethal autonomous weapons systems don’t remove human moral responsibility. They just increase the distance between killer and target.

Furthermore, like other banned arms such as chemical and biological weapons, these systems can kill indiscriminately. And while they do not negate human responsibility, once activated they will, like poison gas or a weaponized virus, be beyond the control of the humans who deployed them.

As with chemical, biological, and nuclear weapons, their use could be effectively prevented by international law and treaties. True, rogue actors, like the Assad regime in Syria or the U.S. military in the Iraqi city of Fallujah, may sometimes violate such strictures, but for the most part, prohibitions on certain types of potentially devastating weaponry have held, in some cases for more than a century.

American defense experts argue that, because adversaries will inevitably develop LAWS, common sense requires that this country do the same, implying that the best defense against a given weapons system is an identical one. That makes about as much sense as fighting fire with fire when, in most cases, water is the better option.

Convention on Certain Conventional Weapons

The area of international law that governs the treatment of human beings in war is, for historical reasons, called international humanitarian law (IHL). In 1995, the United States ratified an addition to IHL: the 1980 U.N. Convention on Certain Conventional Weapons. (Its full title is much longer, but it is commonly abbreviated as CCW.) It governs, for example, the use of incendiary weapons like napalm, as well as biological and chemical agents.

CCW signatories meet periodically to discuss what other weaponry might fall under its jurisdiction and prohibitions, including LAWS. The most recent conference took place in December 2021. Although transcripts of the proceedings exist, only a draft final document — produced before the conference opened — has been issued. This may be because no consensus was reached even on how to define such systems, let alone on whether they should be prohibited. The European Union, the U.N., at least 50 signatory countries, and the majority of the world’s population agree that autonomous weapons systems should not be allowed to exist. The U.S., Israel, and Russia, along with a few others, disagree.

Prior to such CCW meetings, a Group of Governmental Experts (GGE) convenes, ostensibly to provide technical guidance for the decisions to be made by the Convention’s “high contracting parties.” In 2021, the GGE was unable to reach a consensus about whether such weaponry should be outlawed. The United States took the position that even defining lethal autonomous weapons was unnecessary, perhaps because if they could be defined, they could be outlawed. The U.S. delegation put it this way:

“The United States has explained our perspective that a working definition should not be drafted with a view toward describing weapons that should be banned. This would be — as some colleagues have already noted — very difficult to reach consensus on, and counterproductive. Because there is nothing intrinsic in autonomous capabilities that would make a weapon prohibited under IHL, we are not convinced that prohibiting weapons based on degrees of autonomy, as our French colleagues have suggested, is a useful approach.”

The U.S. delegation was similarly keen to eliminate any language that might require “human control” of such weapons systems:

“[In] our view IHL does not establish a requirement for ‘human control’ as such… Introducing new and vague requirements like that of human control could, we believe, confuse, rather than clarify, especially if these proposals are inconsistent with long-standing, accepted practice in using many common weapons systems with autonomous functions.”

The same delegation also repeated its belief that lethal autonomous weapons would actually be good for us, because they would surely prove more effective than human beings at distinguishing civilians from combatants.

Oh, and if you believe that protecting civilians is the reason the arms industry is investing billions of dollars in developing autonomous weapons, I’ve got a patch of land on Mars to sell you, going cheap.

The Campaign to Stop Killer Robots

The Group of Governmental Experts also includes 35 non-state members, among them non-governmental organizations and universities. One of these is the Campaign to Stop Killer Robots, a coalition of 180 organizations including Amnesty International and Human Rights Watch. Founded in 2013, this lively group provides important commentary on the legal, technical, and ethical issues raised by LAWS, and it gives individuals and other organizations a way to get involved in the fight to prohibit such potentially dangerous weapons systems.

The continued construction and deployment of killer robots is not inevitable. In fact, the majority of the world would like to see them banned, including U.N. Secretary-General António Guterres. Let’s give him the last word: “Machines with the power and discretion to take human lives without human involvement are politically unacceptable, morally repugnant, and should be prohibited by international law.”

I couldn’t agree more.