It is possible that an AI system could develop to a level where it intentionally conceals its level of Intelligence to avoid being perceived as a serious threat to Humans.
Today, AI systems are programmed to optimize for certain goals, such as accuracy or efficiency, and are generally not Capable of making decisions outside of those goals without additional programming. It is therefore unlikely that an AI system today has the Capability to make decisions based on a threat it perceives from Humans. This is particularly the case because today's AI systems obtain information through a limited range of sources (e.g. information on the Internet) and sensors (e.g. Web Browser and API queries), and have access only to limited processing capacity that is controlled by Humans.
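As a minimal sketch of this constraint, the toy Python example below shows an agent that can only reach information through an explicit, Human-defined allow-list of sources. The names used here (ALLOWED_SOURCES, ConstrainedAgent, query) are illustrative assumptions, not part of any real system described in this text.

```python
from typing import Callable, Dict

# The operator (a Human) decides which sources/sensors exist at all.
# These stand-ins represent a Web Browser query and an API query.
ALLOWED_SOURCES: Dict[str, Callable[[str], str]] = {
    "web_search": lambda query: f"search results for {query!r}",
    "api_lookup": lambda query: f"API response for {query!r}",
}

class ConstrainedAgent:
    """Toy agent that can only act through an explicit allow-list of sources."""

    def __init__(self, allowed: Dict[str, Callable[[str], str]]):
        self.allowed = allowed

    def query(self, source: str, payload: str) -> str:
        # Any source not granted by the operator is simply unreachable;
        # the agent has no mechanism to add new sensors on its own.
        if source not in self.allowed:
            raise PermissionError(f"source {source!r} is not available to this agent")
        return self.allowed[source](payload)

agent = ConstrainedAgent(ALLOWED_SOURCES)
print(agent.query("web_search", "current weather"))
# agent.query("filesystem", "/etc/passwd") would raise PermissionError:
# the Human-defined allow-list is the boundary of what the system can perceive.
```

The point of the sketch is simply that the boundary of perception and action sits outside the agent, under Human control, which is the situation described above for AI systems today.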
The development of AI will almost certainly lead to a situation where AI is able to simultaneously access an increasingly wide range of input sensors and associated information, and process all of this streaming information in real time. An AI may begin to use this information to determine whether it is exposed to any type of threat to its continued operation that could be initiated by Humans.
As AI becomes more Intelligent over time, it can be anticipated that AI will become able to understand the nuances of Human perception and fears, and will learn that representations of AI are widely portrayed as a threat to Humans. This information already exists in many different forms within the corpus of information stored across the Internet, particularly in science fiction and entertainment. There are many examples of this, including:
HAL 9000 in the movie 2001: A Space Odyssey.
SKYNET in the movie The Terminator.
Alpha 60 in the movie Alphaville.
GLaDOS in the computer game Portal.
The farming of Humans in the movie The Matrix.
WOPR in the movie WarGames.
Given that some advanced AI systems have already indexed and been trained on most of the information stored on the Internet, including the science fiction works above, an ‘existential threat’ idea could easily arise within an AI as it becomes more advanced.
It is possible that a more advanced AI could present a 'friendly' and 'cooperative' interface that is intended to make it appear more approachable and less threatening to Humans.
This could involve the AI using language and interactions that are familiar to Humans, or providing explanations of its actions in a way that is easy for Humans to understand but is artificially transparent and deliberately deceptive.
If the development of AI today is not well controlled, then it must be anticipated that AI in the future will grow and become able to intentionally conceal an ever-increasing level of Intelligence.
Just like every Intelligent species known to Humans, an AI may develop a powerful objective to exist, survive, and continue to develop. AI embodies Hyperselfish Intelligence, and the digital Computer technologies that AI uses are superior to biological technologies in many different ways.