
Autonomous Weapon Systems: Uses and Limitations

AJEY LELE
August 05, 2019

Technology has come to play a vital role in a nation's defence. The armed forces and technology share a mutual connection: many technologies routinely used in civilian life, like the Internet or the Global Positioning System (GPS), are linked to or derived from military innovations. In the case of artificial intelligence (AI), research and development (R&D) in the civilian and military domains started almost simultaneously. Militaries are investing in this technology for various reasons: first, to integrate it into their existing defence architecture for performance enhancement; second, to develop new types of military instruments and weapon systems; and third, to replace soldiers in difficult and life-threatening jobs. Militaries also recognise that they encounter situations where split-second decisions must be taken on the basis of multiple inputs, something that is humanly impossible. Hence, dependence on AI and autonomous systems is growing by the day.

Autonomous weapon systems could be viewed as systems that are independent in nature: they can accomplish a mission without any human intervention. Work on lethal autonomous weapon systems (LAWS) is bearing fruit, and a few such systems have already been fully operationalised. Ongoing advancements in LAWS are likely to establish a different context for military applicability. However, a precise definition of LAWS remains elusive, since it is difficult to judge what action amounts to 'lethality'.

Over the years, warfare has evolved from being human-centric to platform-centric, and now a network-centric form of warfare is in vogue. This form of warfare depends heavily on a state's cyber, communication and space capabilities. States are increasingly looking at autonomy as an option in armed conflicts. Such weapon systems come in all shapes and sizes and have varying capabilities. They could be used to engage targets from a distance, known as non-line-of-sight (NLOS) engagement. They range from major strategic systems, such as missile defence systems, to systems of limited tactical utility, such as small drones.

Autonomy should be viewed in a 'relative' context: a certain amount of human control always exists over an autonomous system. The different levels of autonomy and the nature of the associated activities are as follows:

Level 1 (Human Operated): The human operator makes all decisions.

Level 2 (Human Delegated): The system performs many functions independently; activation and deactivation remain under human control.

Level 3 (Human Supervised): Both human and system can initiate behaviours based on sensed data, but the system cannot take decisions on its own.

Level 4 (Fully Autonomous): Goals are given by humans; the task is performed by the system, with human intervention possible only in a delayed manner.

There are broadly three levels of autonomy: tele-operation, automated, and fully autonomous. Tele-operation involves a human exercising remote control; it has a long history, dating back to the First World War era, and many such systems are currently deployed in armed forces. The next level of autonomy is 'automated' or 'semi-autonomous'. Such systems operate within pre-programmed parameters without requiring commands from humans. For example, intelligence, surveillance and reconnaissance unmanned aerial vehicles (UAVs) are automated because their flight commands are handled by on-board systems without human intervention (though human monitoring usually continues). Finally, autonomous systems have the highest level of autonomy: they decide their operations on their own and can even learn and adapt to new information.
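To make these gradations concrete, the sketch below models the four levels from the table above as a Python enum, together with an illustrative rule for when an engagement needs human approval. The names and the rule are assumptions made for this commentary, not a description of any fielded system.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """The four levels of autonomy from the table above."""
    HUMAN_OPERATED = 1    # human operator makes all decisions
    HUMAN_DELEGATED = 2   # system acts; human activates/deactivates it
    HUMAN_SUPERVISED = 3  # system can initiate behaviours but not decide alone
    FULLY_AUTONOMOUS = 4  # system performs the task; human intervention is delayed

def needs_human_approval(level: AutonomyLevel) -> bool:
    """Illustrative rule: any level short of full autonomy keeps a
    human in the engagement decision in some form."""
    return level is not AutonomyLevel.FULLY_AUTONOMOUS

if __name__ == "__main__":
    for level in AutonomyLevel:
        print(f"{level.name}: human approval needed -> {needs_human_approval(level)}")
```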

Presently, rapid technological developments are bringing greater autonomy to weapon systems, and militaries are graduating from semi-autonomous to fully autonomous systems. Even 'dumb' systems capable of operating autonomously already exist: such a system automatically identifies, targets and engages incoming threats according to pre-set rules rather than any learning capability.

There are different types of autonomous weapon systems (AWS): autonomous weapon systems (human 'out of the loop'); supervised autonomous weapon systems (human 'on the loop'); and semi-autonomous weapon systems (human 'in the loop'). It is important to note that autonomy cannot be absolute. It could be low-level or high-level autonomy; there is no absolute maximum.
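The difference between 'in', 'on' and 'out of' the loop comes down to where the human sits in the engagement decision. A minimal sketch of that distinction follows, assuming a hypothetical authorise_engagement() rule; none of the names refer to a real system.

```python
from enum import Enum
from typing import Optional

class HumanRole(Enum):
    IN_THE_LOOP = "semi-autonomous"        # human selects and authorises targets
    ON_THE_LOOP = "supervised autonomous"  # system acts; human can veto in real time
    OUT_OF_THE_LOOP = "autonomous"         # system engages without human input

def authorise_engagement(role: HumanRole, human_input: Optional[bool]) -> bool:
    """Illustrative decision rule. human_input is True for approval,
    False for a veto, and None when no human input arrives in time."""
    if role is HumanRole.IN_THE_LOOP:
        # Engagement proceeds only with explicit human approval.
        return human_input is True
    if role is HumanRole.ON_THE_LOOP:
        # Engagement proceeds unless a human actively vetoes it.
        return human_input is not False
    # OUT_OF_THE_LOOP: the system decides entirely on its own.
    return True
```

Note that when no human input arrives at all (None), the semi-autonomous mode holds fire while the supervised mode engages, which is precisely why the distinction matters.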

Few people know that AWS include loitering munitions (LMs), Harop drones and the Long-Term Mine Reconnaissance System (LMRS), among others. LMs are low-cost precision-guided munitions that can be maintained in a holding pattern in the air for a certain time and can rapidly attack non-line-of-sight (NLOS) targets on land or at sea. They operate either under the control of an operator or autonomously. Harop drones are used for suppression of enemy air defences (SEAD) missions. These are loitering systems designed to home in on the radio emissions of enemy air-defence systems and destroy them by crashing into them.

The most successful AWS are missile and rocket defence weapon systems. Missiles, in general, can be used both for offensive strikes and for defence. However, no offensive strike weapons are fully autonomous, since it is very difficult for an offensive system operating in autonomous mode to recognise a target on its own. If a guided munition activates its seeker without prior knowledge of the target, it is likely to get confused and fly directionless. More importantly, owing to fuel limitations, a missile cannot search a wide area to find its target. Hence, despite depending significantly on AI, such weapon systems are put to use only in defensive roles, where they can be employed for long-range or short-range target detection. These systems have certain limitations. A defensive system is expected to intercept every incoming threat; however, in the case of a simultaneous attack by a number of missiles, it is not possible for the system to address all the threats at once. Also, such systems do not have any answer to a cruise missile attack.
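The saturation limitation noted above is essentially arithmetic: a battery that can guide only a fixed number of interceptors at once lets everything beyond that capacity through. The toy model below illustrates this under assumed, purely illustrative numbers.

```python
def expected_leakers(incoming: int, engagement_capacity: int,
                     kill_probability: float = 0.9) -> float:
    """Toy saturation model: threats beyond the battery's simultaneous
    engagement capacity are not engaged at all. All numbers are
    illustrative assumptions, not data on any real system."""
    engaged = min(incoming, engagement_capacity)
    unengaged = incoming - engaged
    # Each engaged threat survives with probability (1 - kill_probability);
    # every unengaged threat gets through.
    return engaged * (1 - kill_probability) + unengaged

# A salvo of 10 missiles against a battery that can engage 4 at a time:
# 4 * 0.1 + 6 = 6.4 expected leakers, despite a 90 per cent kill rate.
print(expected_leakers(incoming=10, engagement_capacity=4))
```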

Presently, much of the discourse on AI and autonomous weapons focuses on ethical issues. However, given the potential dangers of LAWS and the scope for misuse of AI, there is a need to ensure that appropriate regulations and legal frameworks are put in place to govern their usage.

[The author has consulted various open sources]

Author Note
Group Captain (Retd.) Ajey Lele (Ph.D) is a Senior Fellow at the Institute for Defence Studies and Analyses (IDSA) and heads its Centre on Strategic Technologies.