IDF/USA AI Combat Automation Ethics

Aaron Linder
4 min read · Sep 21, 2021


Early in my introduction to Artificial Intelligence, in the early 1990s, I encountered the popular author and computer scientist Isaac Asimov. Asimov propagated his "Three Laws of Robotics", which state:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws were effectively broken once 6–8 MHz microcontrollers from Raytheon were able to drive the ailerons and rocket-engine controls of the "Tomahawk" cruise missile (primitive by today's standards): the onboard computer could compute simple flight trajectories along its designated path, in a continuous environment and in real time, to deliver a payload to a target. In PEAS terms, the missile was either launched with a fixed set of instructions based on launch trajectories or, in some systems, controlled by radio. The environment is continuous and is sensed through range finders and trigonometric vector-space calculations.
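To make the trajectory claim concrete, here is a minimal sketch of the kind of continuous, real-time calculation involved. It uses textbook drag-free projectile motion; the function names and numbers are mine for illustration, not Raytheon's, and a real guidance computer solves a far harder problem.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_angle(speed: float, distance: float) -> float:
    """Low launch angle (radians) that lands a drag-free projectile
    at `distance` meters, from the range equation R = v^2 sin(2*theta) / g."""
    arg = distance * G / speed ** 2
    if arg > 1.0:
        raise ValueError("target beyond maximum range")
    return 0.5 * math.asin(arg)

def position(speed: float, theta: float, t: float) -> tuple[float, float]:
    """(x, y) of the projectile at time t: the continuous state a
    guidance computer re-evaluates in real time."""
    return (speed * math.cos(theta) * t,
            speed * math.sin(theta) * t - 0.5 * G * t ** 2)

# Example: a 300 m/s projectile aimed at a target 4 km downrange.
theta = launch_angle(300.0, 4_000.0)
print(f"launch angle: {math.degrees(theta):.1f} degrees")
```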

This technology was further improved by Israel, and this paper is in general about IDF automation. Right now Israel fields the Iron Dome, the Trophy active protection system, and various drones that are less significant; later I will discuss arguments the Israeli engineers are bringing forward. I am aware academics dislike Wikipedia, but it is the #1 source for many subjects and is accurate in these particular encyclopedic descriptions:

“Iron Dome is a mobile all-weather defense system developed by Rafael Advanced Defense Systems and Israel Aerospace Industries. The system is designed to intercept and destroy short-range rockets and artillery shells fired from distances of 4 kilometers to 70 kilometers away and whose trajectory would take them to an Israeli populated area.”

Second is the Trophy active protection system:

“Trophy is a military active protection system designed to protect vehicles from ATGMs, RPGs, anti-tank rockets, and tank HEAT rounds. A small number of explosively formed projectiles destroy incoming threats before they hit the vehicle. Its principal purpose is to supplement the armor of light and heavy armored fighting vehicles…Trophy protects against a wide variety of anti-tank threats, while also maximizing the vehicle’s ability to identify enemy locations to crews and combat formations, thereby providing greater survivability and maneuverability in all combat theatres.”

[Image: Iron Dome launching an interceptor missile]

[Image: Trophy active protection system defeating an RPG fired at a tank]

These defensive systems share the same PEAS model: P (Performance measure): intercept high-velocity incoming threats; E (Environment): a continuous, real-time battle space; A (Actuators): interceptor missile launch; S (Sensors): radar and thermal cameras. A minimal sketch of this spec in code follows below.

The premise of this paper comes from social media, where I have seen people saying, "Well, since we have automated defenses now, why don't we just automate return fire every time we are attacked?" Theoretically the technology exists, but do we want an automated retaliation that shoots a missile back for every single missile fired at us? Who would be to blame for such a system? During the war-crimes trials after WWII, people stated that they were just following orders; that generation did not accept it as an excuse, and many of the defendants were executed. If such a system is built in my lifetime, I have not yet decided whether to support it. Let me describe in more detail the system being specified.
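Here is that PEAS mapping written out as a data structure, following Russell and Norvig's agent-design checklist. The field values are my paraphrase of the systems described above, not official specifications.

```python
from dataclasses import dataclass

@dataclass
class PEASSpec:
    """PEAS: Performance measure, Environment, Actuators, Sensors."""
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

iron_dome = PEASSpec(
    performance=["intercept high-velocity threats headed for populated areas"],
    environment=["continuous, real-time, all-weather airspace"],
    actuators=["interceptor missile launch"],
    sensors=["radar", "thermal cameras"],
)
print(iron_dome)
```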

What is being suggested is to engineer a projectile system that detects an attack and instantly returns fire on that target, without any identification or verification steps: basically auto-killing any object that launches a projectile above a certain velocity. (I am sure it could even be programmed to kill people throwing rocks, but more likely it would trigger only on bullet and rocket launches.) Such actuators need to be carefully reviewed by someone. The USA and Israel do not really acknowledge any authority higher than their own elections, so I am not sure that international courts apply. I will include a basic figure of the design:

Basically you have 3 steps (a toy sketch in code follows the list):

1. Incoming projectile is detected.

2. Interceptor tracks it and fires.

3. Return fire is automatically directed at the launch point.
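As a toy model of those three steps, assuming a single velocity threshold is the only trigger (all names, thresholds, and coordinates here are invented for illustration):

```python
from dataclasses import dataclass

VELOCITY_THRESHOLD = 70.0  # m/s; anything faster is treated as hostile fire

@dataclass
class Track:
    velocity: float               # m/s, estimated from radar
    origin: tuple[float, float]   # estimated launch coordinates

def detect(tracks: list[Track]) -> list[Track]:
    """Step 1: flag incoming projectiles above the velocity threshold."""
    return [t for t in tracks if t.velocity > VELOCITY_THRESHOLD]

def intercept(threat: Track) -> None:
    """Step 2: the existing defensive behavior, fire an interceptor."""
    print(f"interceptor launched at track moving {threat.velocity} m/s")

def return_fire(threat: Track) -> None:
    """Step 3: the controversial addition, counter-battery fire at the
    launch point with no identification or verification step."""
    print(f"auto return fire at {threat.origin}")  # no human in the loop

for threat in detect([Track(900.0, (31.5, 34.4))]):
    intercept(threat)
    return_fire(threat)  # this is the step whose ethics I am questioning
```

Note that nothing in step 3 asks what, or who, is at the origin coordinates; that is exactly the problem.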

Is this system ethical? I have difficulty answering. No human can think this fast in real time, and no human can identify a target at these speeds; still, such a system is not as bad as shooting blindly at motion, which is not intelligent at all. I argue that target threat detection needs to become a focus of computer-vision machine learning, so that we can quickly and properly train models that identify targets not just through vision but through RFID as well.
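As a sketch of the kind of gate I am arguing for, here is a decision function that fuses a vision model's confidence with an RFID friend-or-foe check; the tag format, threshold, and function name are all hypothetical:

```python
def is_engageable(vision_confidence: float,
                  rfid_tag: str | None,
                  friendly_tags: set[str],
                  threshold: float = 0.95) -> bool:
    """Gate a fire decision on two independent signals: a vision model's
    confidence that the track is a hostile launcher, and an RFID
    friend-or-foe check that can veto the shot."""
    if rfid_tag is not None and rfid_tag in friendly_tags:
        return False                       # friendly transponder: never fire
    return vision_confidence >= threshold  # otherwise require high confidence

# A friendly RFID tag vetoes even a confident vision classification.
print(is_engageable(0.99, "FRIENDLY-4412", {"FRIENDLY-4412"}))  # False
print(is_engageable(0.99, None, {"FRIENDLY-4412"}))             # True
```

Even this is only a veto layer; it does not answer the attribution question raised above.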


Aaron Linder

CS graduate WSSU '16, some automotive and nanoengineering in there somewhere.