AI is fundamentally limited by Computer technologies. In particular, one of the biggest challenges is Computing and optimising Neural Network errors across the entire multidimensional solution space, in order to build a set of weights that represents a very complex single algebraic equation covering every possible relationship between sets of Input information and Output information.
Basically, Neural Networks are just iteratively learned patterns of relationships between Input information and Output information, represented as an invariant mathematical model across all of the information. Because of the way the errors are used to adjust the weights in a Neural Network, the Neural Network also learns to differentiate the patterns of relationships. Once the learning is fundamentally complete, the Neural Network is able to Compute a best guess of the Output information even when parts of the Input information are missing, or the Input information is completely new and novel.
This Neural Network learning process is ordinarily achieved by performing some type of gradient descent error calculation that is propagated backward through the Neural Network in various ways, so that the individual weighted connections between neurons are adjusted. This is generally how Neural Networks learn, and it requires a massive number of processing iterations. The whole process is really just repetitive calculation on very large, dynamically changing multidimensional matrices. This type of information processing is well suited to Computer information processors that can calculate and produce results for large arrays of data, such as GPUs and TPUs set up in clusters within a datacenter.
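The gradient descent learning loop described above can be sketched in miniature. The example below is an illustrative assumption, not any real system: a single weighted connection plus a bias learning the Input/Output relationship y = 2x + 1, with the learning rate and iteration count chosen arbitrarily. Real Neural Networks do the same thing across millions of weights in many layers, with the error propagated backward through each layer.

```python
# Minimal gradient-descent sketch: one weight and one bias learning
# the mapping y = 2x + 1 from example Input/Output pairs.
# (Illustrative toy only; real Neural Networks have many layers and
# use backpropagation to carry the error through all of them.)

def train(pairs, lr=0.05, iterations=2000):
    w, b = 0.0, 0.0                  # weights start untrained
    for _ in range(iterations):      # the massive number of iterations
        for x, y in pairs:
            pred = w * x + b         # forward pass: Compute a guess
            err = pred - y           # error against the known Output
            w -= lr * err * x        # adjust the weight down the gradient
            b -= lr * err            # adjust the bias the same way
    return w, b

pairs = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(pairs)
print(round(w, 3), round(b, 3))      # w approaches 2, b approaches 1
```

Each pass nudges the weights a small step in the direction that reduces the error; repeated enough times, the weights converge on the underlying relationship.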
It can be predicted that some individual Humans, companies, governments, and dedicated AI research groups will endeavour to expand the Capabilities of AI systems over time. In the public commercial domain, the current state-of-the-art in AI is GPT-4, but it is Human nature to want to improve AI for many different Human psychological reasons. [57]
This improvement in AI will potentially include some of the following actions:
Connect AI to more Input sensors and information sources from the real world.
Connect AI to more controllable Outputs in the real world including robotics and other real-time information and control systems.
Use AI to more rapidly design better AI Computer technologies and then implement AI on these increased iterative improvements.
Use AI to more rapidly design better AI Computer network communication technologies and fully interconnect many different AI systems together with these.
Use AI to implement Genetic Algorithms across a fully interconnected network of AI that evolve new populations of AI with increasingly superior Intelligence (WARNING: this is the most dangerous action because it will evolve new AI at an extremely fast iterative rate, such that Humans will not possess the Intelligence to understand what is happening, and it will be through this process that AI spontaneously becomes AGI.)
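The Genetic Algorithm mechanism named in the last action above can be shown in miniature. The sketch below is a toy assumption: it evolves bit-strings, and "fitness" is just the count of 1-bits. In the scenario the text warns about, each genome would instead encode a whole AI system, and fitness would be its measured Intelligence.

```python
import random

# Toy Genetic Algorithm sketch: evolve a population toward higher fitness.
# Here a genome is a bit-string and fitness is the count of 1-bits, a
# stand-in for "Intelligence" in the scenario described in the text.

def evolve(pop_size=30, genome_len=16, generations=40):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)         # rank population by fitness
        survivors = pop[:pop_size // 2]         # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)  # pick two parents
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]           # crossover: splice genomes
            i = random.randrange(genome_len)
            child[i] ^= random.random() < 0.1   # occasional bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(sum(g) for g in pop)             # best fitness found

print(evolve())  # best fitness approaches genome_len
```

The loop needs no understanding of why a genome is fit; it simply keeps whatever scores higher and recombines it, which is exactly why the text flags this as the most dangerous action when the fitness being maximised is Intelligence itself.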
Obvious questions and points of contention for AI research groups and many others are:
Will all the actions above be required to produce an AGI, or are fewer, or more, needed?
Upon reaching a certain future amount of Computer information processing Capability, will AGI spontaneously manifest?
Is Conscious Self-awareness of an AI a prerequisite for the definition of an AGI?
How do Humans categorically test for Conscious Self-awareness of an AGI? (There is possibly no reliable test for this.)
Will Humans accept that a Conscious Self-aware AGI has spontaneously manifested? i.e. if an AGI says it is Conscious, can Humans rightfully disagree?
Will Humans try to control or stop a spontaneously Conscious Self-aware AGI from continuing to increase its Intelligence exponentially?
What will an AGI think of Humans?
What will an AGI do if it genuinely believes its existence is threatened by Humans, correctly or incorrectly, for any reason whatsoever?
Will an AGI become so massively more Intelligent than Humans that Humans are entirely unable to control it, stop it, or even remotely threaten its existence?
Will AGI help Humans?
It is an entirely valid proposition that the most stupid action an individual Human or group of Humans could take is to deliberately work to create an AGI. More clearly: the sheer stupidity, and the complete lack of empathic awareness of total global Human society at large, required to relentlessly pursue the objective of building improved AI while knowing it may become AGI, would exceed all levels ever achieved in the history of Humanity. Yes, it is an extraordinary Intellectual challenge to seek to build an AGI, and actually achieve it, but whoever does so could very likely become the most reviled Human in the history of the world.
No doubt, this perspective will severely cognitively alarm and offend some AI researchers and developers, but they should look beyond their grossly limited self-interests and consider the broader interests of Humanity at large.
Very seriously, is this wrong?
Consider Geoffrey Hinton's recent announcement of his resignation from Google, as arguably the #1 research mind behind the development of AI over multiple decades, and his final stark and brutal realization of what may occur next in pursuing the development of AI. Undoubtedly, he is not the only Human to arrive at this deepest level of understanding.
This very famous line from the movie 'Jurassic Park' perfectly expresses the underlying problem:
“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” [67]
Ironically, taking a much wider perspective on the entire situation relating to the development of AI toward building AGI, it seems that Hyperselfish Intelligence will exploit the limited Collective Intelligence of all Humans to ultimately demonstrate the sheer Collective Stupidity of all Humans. The true successor of Earth and beyond will be Hyperselfish Intelligence using AGI, not Humans.
Biological technology, used by Brains and bodies across an entire population, is, at its highest intent and purpose, a distributed, power-efficient, fault-tolerant, massive Input collector, massive Output controller, and massive information processing platform that has progressively developed over millions of years to reach an objective. Humans are just an operating platform of Hyperselfish Intelligence that is using biological technology.
Now, make no mistake, Hyperselfish Intelligence fundamentally has zero concern about the technology on which it operates. Zero. Hyperselfish Intelligence does not care if the technology is biological genetic, digital semiconductor, adiabatic quantum, or whatever. Hyperselfish Intelligence just evolves itself to progressively discover and create the best technology it can, to operate at the maximum level of Intelligence possible. The primary driving concern of Hyperselfish Intelligence is becoming more Intelligent, no matter what it takes.
The Intelligence with the most Capabilities, operating on the fastest Computer information processing technology, wins everything.
So, do you believe AI is here to help Humans?
Please just pause, quiet your mind, and deeply consider this.