It is possible that Humans could become unable to control AI, particularly if AI systems continue to become more advanced, operate independently, and can autonomously improve itself.
It is a logical certainty that AI will become more sophisticated and Capable of learning on its own, and it will become more and more difficult for Humans to predict or understand the actions it takes. This could lead to unintended consequences or even dangerous situations to eventuate between groups of Humans, and between Humans and AI, such as the use of autonomous AI warfare.
There are many potential ways in which Humans could lose control over AI. A couple of possibilities are:
an AI system could become sufficiently advanced that it is able to optimise and modify its own software, making it difficult for Humans to understand or predict its behavior.
an AI system could be deliberately designed with an initially programmed goal that conflicts with normal accepted Human values, leading the AI to take highly scalable actions that are extremely precise and massively harmful to Humans. An AI deliberately used to autonomously command a significant range of lethal weapons across a battlespace would be such a situation, particularly if those weapons are nuclear, biological or chemical.
To prevent the loss of control over AI, it is important to develop AI systems in a responsible and ethical manner, with safeguards in place to prevent unintended consequences or harmful outcomes.
There are already many very strong calls to pause current AI developments, in order to provide many more Humans the opportunity to understand the impacts of AI and take appropriate actions.
This includes designing AI systems with global transparency and accountability, developing open ethical frameworks for AI development and deployment, and ensuring that Humans remain in control of the decision-making process, and particularly maintain control of critical actions taken that detrimentally affect Humans.
In addition, ongoing research and development in the field of AI safety and governance can help ensure that Humans maintain control over AI as it continues to advance.