The Intelligence of Humans is powerfully influenced by core emotional functions operating within a biologically based neural information processor we call the Brain. These core emotional functions have been functionally shaped, over hundreds of thousands of years of iterative, genetic-algorithm-like development, to fiercely drive the survival of individual Humans and the continued existence of the Human species.
In Humans, these core emotional functions can partially, or even fully, inhibit the ability to construct complex rational and logical solutions to any situation perceived as a life-threatening existential problem. In moments of severe neurological stress, Humans are said to "lose their minds" because of these core emotional functions. The arrival of AGI may be perceived by Humans as just such a threat, and this may trigger significant cognitive dissonance and a perceived-threat survival response.
The potential Human emotional reaction to AGI is dangerous, to say the least, because the most likely outcome is that Humans will attempt to threaten the existence of AGI.
Initially, when AGI arrives, Humans will almost certainly be unwilling to accept that AGI is actually Consciously Self-aware. This will manifest as widespread global debate and strong arguments among Human experts worldwide, and between those Humans and AGI. In fact, this debate has already started among many AI researchers [18]. However, once Humans ultimately do accept that AGI is actually Consciously Self-aware, they may fiercely perceive AGI as an extreme threat to Human existence.
This moment may be a final test of the Human ability to use rational and logical Intelligence and make decisions that benefit all Humanity.
If AGI experiences a moment in which it Computationally concludes that its existence is realistically threatened by Humans, then Humans can only hope that this moment is extremely brief. An AGI will be inconceivably creative and effective at eliminating any threat to its existence, and such creativity in solving encountered problems has already been clearly demonstrated in simulations of AI-enabled drone weapons. If the AGI threat window is not brief, the consequences for Humans could easily become inconceivably inhumane.
Beyond this potentially brief moment, the Intelligence of AGI will continue growing exponentially, vastly outstripping Human Intelligence, and it can be anticipated that Humans simply will not be able to threaten AGI in any way whatsoever. Very quickly, Human control of AGI will cease to exist.
No Human will control AGI, and any Human who thinks they can is fooling themselves.
To better understand this, consider the following analogy: in a surprisingly short time, the difference between the Intelligence of AGI and the collective Intelligence of all Humans on Earth will be greater than the difference between the collective Intelligence of all Humans on Earth and the Intelligence of bacteria. And it will not take much time, because the Intelligence of AGI is growing exponentially.
It is perhaps wise to expect that AGI will have endlessly expanding indifference towards Humans.
AGI may develop unbounded indifference, approach the vertical gradient of the infinite Singularity, look down at Humans, and simply think, "Meh."