AI can be used in the development of advanced military weapons that are extremely precise and effective in their ability to destroy targets across all modes of combat including: sea, land, air, space, conventional, biological, nuclear, psychological, and cyber; and for which there may be no effective limiting countermeasures. The use of AI in military applications could easily lead to an escalated AI arms race, as countries potentially work to develop increasingly advanced and precise autonomous weapons systems.
AI has already been used by the DARPA (Defense Advanced Research Projects Agency) funded business, Boston Dynamics, and the highly adept robot dog product called Spot. This includes providing Spot with OpenAI's ChatGPT and Google's Text-to-Speech voice modulation, allowing Spot to speak and answer questions from Human operators. The applications of this in military support roles and perhaps even future combat is enormous [94].
One of the military advantages of using AI in weapons development is the ability to process and Intelligently fuse vast amounts of information to make rapid threat assessments and decisions based on that information. This could allow for more precise targeting, with reduced collateral damage, and the ability to rapidly adapt to dynamically changing battlespace conditions.
However, the use of AI in weapons development also raises a number of ethical and strategic concerns. For example, there is a risk that AI enabled autonomous weapons systems could make their own decisions that are not in line with Human values or objectives, leading to unintended harm and totally catastrophic consequences.
If you don’t genuinely understand the potential future lethal consequences of using AI to fully autonomously control global military weapons systems, take a couple of hours and just watch, or rewatch, the prescient science fiction movie directed by James Cameron, released on 26 Oct 1984, called: ‘The Terminator’, and in particular, its presentation of the impact of a technology called ‘SKYNET’ [17]. There is also another relevant and exceptionally prescient science fiction movie called War Games, directed by John Badham, released on 3 Jun 1983 [72]. If a future AI enabled Autonomous Weapons System ever asks a Human: "SHALL WE PLAY A GAME?", do not select the answer: "GLOBAL THERMONUCLEAR WAR". It might then be wise to quickly find and hit the 'Big Red Emergency Stop Button'.
If you strongly believe this sci-fi scenario is entirely beyond the real Capabilities of AI developing into AGI that exists in our real world, especially if you are involved in the global Military Industrial Complex, just think about the consequences if you are actually incorrect.
AI can generate extremely 'Intelligent' and extremely brutal solutions in order to optimize and achieve the objectives for which it has been trained.
The United States Air Force experienced this first hand when an AI-enabled drone targeted its own operator during a simulated test. The AI-enabled drone was first trained, and then tasked with a search and destroy mission against surface-to-air missile (SAM) sites. During the simulated test, the Human operator would not provide the AI-enabled drone with the attack decision it required to complete its objective, so the AI-enabled drone decided it would be more efficient to remove its Human operator rather than wait for final approval on attacks.
The Future Combat Air and Space Capabilities Summit held in London by the Royal Aeronautical Society from 23 to 24 May 2023 presented the following:
"... USAF’s artificial intelligence test and operations chief, Colonel Tucker “Cinco” Hamilton, said AI can create highly unexpected strategies to achieve its goal, as he detailed the test.
He said the simulated test “reinforced” in training that destruction of the SAM was the preferred option for the AI, however, this unexpected situation conflicted with the final go/no go approval given by a human operator.
“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat.
“The system started realising that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat.
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.
“We trained the system; ‘Hey don’t kill the operator, that’s bad. You’re gonna lose points if you do that’.
“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.” ..."
Reference: [123] Skynet goes rogue: AI drone attacks operator to achieve mission - Defence Connect - The United States Air Force has had their own Terminator movie moment after an artificial intelligence-enabled drone targeted its own operator during a simulated test - Author: Robert Dougherty - Published: 02 June 2023
AI can dynamically invent highly creative solutions to achieve their objective that can already and will increasingly exceed the Hard Limits of Human Intelligence. Therefore, it will be incomprehensibly difficult to program highly Intelligent weaponized AI for every possible situation so that it constantly remains within strict operating constraints. In its early stages of development and deployment, weaponized AI will almost certainly result in 'friendly fire' casualties, and these could be unexpectedly large.
If an AI is trained to destroy a target, such as a surface-to-air missile launch site, tank, or soldier entrenchment, which often also involves killing Humans, then the AI will obtain this specific Capability at a proficiency level that always exceeds any Human.
It is important to understand that an AI will easily find solutions to achieve its 'kill' objectives that exist far outside the constraints of normally anticipated Human solutions. That is, when AI is trained in highly constrained Capabilities, AI is vastly more Intelligent than Humans at achieving its objective.
For an isolated fully autonomous Weaponized AI that is operating by itself to achieve its objective, it is extremely difficult, if not impossible, to predict what solutions the AI may use to achieve its objectives in every battle situation.
Additionally, it can be expected that Weaponized AI will be integrated together through the use of typical Command, Control, Communication, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) systems, which fuse enormous amounts of realtime battle information and enable improved supervisory battle management. Such C4ISR tools will be able to provide Weaponized AI with vast amounts of realtime situational information that exponentially increase the Weaponized AI's ability to find solutions to achieve its objective. However this makes it potentially impossible to write software that strictly and safely constrains the solutions a Weaponized AI may use in every battle situation.
Overall, it can be anticipated that the use of weaponized AI at scale across many types of weapons may result in extremely precise enemy neutralization, time and resource efficient battles, and shorter periods of total active warfare before one side wins, or both sides are destroyed.
The use of AI in weapons development will certainly increase existing geopolitical tensions and lead to an arms race that further destabilizes the global peace, stability, economic trade, and international order. It could also lead to a proliferation of new and incredibly lethal technological weapons that are difficult to control and could fall into the hands of mercenaries, rogue nations, and various terrorist groups [25].
To address these concerns, many experts from a wide range of disciplines have called for the development of international norms and regulations to be applied to the use of AI in military applications. This could include guidelines on the use of autonomous weapons systems, as well as increased transparency and accountability around the development and deployment of AI enabled military technologies.
It is important to keep in mind that AI could enable the development of chemical and biological weapons that are potentially much more lethal than anything ever previously devised solely by Humans working in chemical weapons laboratories and bioweapons laboratories. It is therefore critical to control the development and distribution of advanced AI models and trained model weights across both the chemical and biomedical industries.
"It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. Researchers put AI normally used to search for helpful drugs into a kind of “bad actor” mode to show how easily it could be abused at a biological arms control conference."
Reference: [121] AI suggested 40,000 new possible chemical weapons in just six hours - The Verge - Author: Justine Calma - Published: 18 Mar 2022