The current trends and developments in AI, and its potential impact on society, are fueling a fierce global storm of speculative opinions among Human AI experts, governments, and the general public.
Keep in mind that the growth of AI and the arrival of AGI are totally unexplored territory, so there are actually very few facts, and almost everything that is written and said is opinion. Opinions can be useful, but they are definitely not the same as facts, and they must be rationally and critically analyzed.
As to whether or when AGI will totally supersede and render obsolete Human Intelligence, it is doubtful that any useful, broad consensus of opinion among AI experts and governments will ever be reached until AGI actually arrives.
Unfortunately, due to the Hard Limit on Human Intelligence, Humans are not good at cognitively recognizing non-linear exponential threats and discontinuous step-changes until it is too late.
Only after such threats strike do Humans begin to take corrective action and attempt to limit the extent of the negative impacts. There are countless examples throughout Human history of the "Oh, let's just wait and see if things get worse" approach; the world-wide border closures in response to the recent global pandemic are a strong example.
The risks of taking an "Oh, let's just wait and see if things get worse" approach with the development of AI are so extreme they are incalculable.
The safety and control issues of AI are extraordinarily complex because they involve the fusion of many different technologies, all advancing rapidly at the same time. Some Humans may have a "Watched a few videos on YouTube, so now I'm an expert!" level of knowledge, but such rudimentary learning is not enough to understand the complexity of advanced AI, its many interrelated technologies, and how rapidly they develop over time. These Humans are very unlikely to be able to logically infer and understand the most probable development trajectories of these systems and processes and their associated risks, and therefore will be unable to rationally analyze and accept those risks.
The ability of AI to provide Capabilities that are Intelligent is largely based on one key performance measure: calculations per second (CPS). Because Computers become faster every year, AI will be able to experience continual Intelligence growth with no limitations.
The following may help to understand this situation a little better. In 2022, the world's fastest computer, named 'Frontier' [105], reached a calculation rate of more than 1 ExaFlops, which is basically 1.0x10^18 calculations per second (CPS). Strictly speaking, 'Frontier' can actually perform 1.6 ExaFlops, but for the charts below, let's say we start from the beginning of 2022 at 1.0 ExaFlops.
Computer geeks love using tech jargon to baffle other Humans, so for the record: 'Exa' just means a billion billion (a quintillion, or 10^18), and 'Flops' just means Floating Point Operations per Second, which are simply calculations using Real Numbers rather than the simpler whole numbers called Integers.
If the general rule of Moore's Law is applied and CPS doubles, say, every 1.5 years, and this just keeps happening, then here is what the CPS rates will look like through to the year 2100, shown on charts with linear scales for both the Years (x-axis) and CPS (y-axis).
Importantly, the 4 charts show exactly the same CPS growth information, plotted over increasing linear time scales ending at 2030, 2040, 2050 and finally 2100.
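For readers who want to check the numbers behind these charts, here is a minimal sketch in Python (my illustration, not taken from the source). The 1.0 ExaFlops starting point in 2022 and the 1.5-year doubling period come from the text above; the function name projected_cps is just a label for this sketch.

```python
# Moore's-Law projection sketch: CPS(t) = 1.0e18 * 2 ** ((year - 2022) / 1.5)

START_YEAR = 2022
START_CPS = 1.0e18      # ~1 ExaFlops ('Frontier', rounded down from 1.6)
DOUBLING_YEARS = 1.5    # assumed Moore's Law doubling period

def projected_cps(year: float) -> float:
    """CPS projected forward from 2022 under a fixed 1.5-year doubling period."""
    return START_CPS * 2 ** ((year - START_YEAR) / DOUBLING_YEARS)

for year in (2030, 2040, 2050, 2100):
    print(f"{year}: {projected_cps(year):.3e} CPS")
```

Running this shows why the early decades look flat on a linear chart: the 2100 value is roughly fifteen orders of magnitude larger than the 2030 value, so everything before it is crushed against the x-axis.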
The first thing to realize when looking at these charts is that they all show exactly the same CPS information, just across increasing time scales. Notice how, in the fourth chart that runs until the year 2100, all the CPS information from the previous three charts looks essentially dead flat. That spike at the end of the last chart, heading off to 'infinity' very quickly, is 'The Singularity' that Ray Kurzweil famously popularized. It is startling to realize the CPS just keeps going up and up in smaller and smaller compressed time increments.
The second thing to realize when looking at these four charts is that they assume Moore's Law is doubling the CPS of Computers roughly every 1.5 years, based on known Computer technologies developed by Humans. However, it is absolutely reasonable to expect that AI, with its vastly growing Intelligence, will be able to radically improve Computer technologies, first with Human assistance and later by itself, and compress the doubling time of CPS from 1.5 years down to something MUCH MUCH SMALLER at an accelerating rate. It could begin with doubling the CPS within a Year, then Months, then Days, then Hours, then Minutes, then Seconds, then Microseconds...and so on. This means that Ray's spike in the last chart, heading off to 'The Singularity' of 'infinite' CPS, could arrive A LOT SOONER than is shown here.
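To see why shrinking doubling times pull 'The Singularity' closer, consider a hedged toy model (my own illustration, not the author's): if each successive doubling takes only a fixed fraction r of the time the previous one took, the total time for infinitely many doublings is the geometric series T0 + T0*r + T0*r^2 + ... = T0 / (1 - r), which is finite. Using the 1.5-year first doubling from above and an assumed r = 0.5:

```python
# Toy model of accelerating doublings (assumptions: T0 and r are illustrative).

T0 = 1.5   # years for the first doubling (the Moore's Law figure above)
r = 0.5    # assumed: each doubling takes half as long as the one before it

horizon = T0 / (1 - r)   # closed form of the geometric series
print(f"All infinitely many doublings complete within {horizon:.1f} years.")

# The same thing step by step: elapsed time after n doublings.
elapsed, interval = 0.0, T0
for n in range(1, 11):
    elapsed += interval
    interval *= r
    print(f"doubling {n:2d}: elapsed {elapsed:.4f} years, CPS x{2**n}")
```

Under these toy assumptions, every doubling that will ever happen fits inside just 3 years; a slower shrink rate only pushes the finite date further out, it never removes it.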
Humans are not ready for this. Definitely NOT!
There are other true experts involved in AI who are fully aware of what is happening with its development, and who have chosen to downplay or misdirect Human attention away from the risks of AI. This behavior is almost certainly due to their own commercial interests and opportunities for near-term financial gains. This is understandable, as the gains will be truly enormous. But this approach lacks any detailed predictive thought about systemic, long-term processes and impacts. The gains such Humans experience will be short-lived, and the ultimate cost to all Humans is statistically likely to be beyond all Hard Limits of Human comprehension. [93]
Most Humans in the general public are exposed only to the end-use of AI, and can clearly see what it does through services such as ChatGPT, GPT-4 and Bard. The general public represents nearly all of the world's population, some 8 billion people, and they are very unlikely to be deeply involved in AI model research, AI information processing technologies and neuromorphic chips, or the world's vast array of advancing mainstream digital technologies being developed to extend Moore's Law.
Unfortunately, this Human general public group is simply not in a position to fully understand the risks of AI. So, they have no choice but to look for guidance to the experts in the AI technology industry, who, compared to the world population, are actually very few Humans. The general public will also hope that other Humans just like them, working in governments, will provide everyone with protection from the risks of AI. This entire situation is very dangerous for all Humans.
AI is the most advanced technology to be developed by Humans, ever.
The future of AI becoming AGI and its relationship with Human Intelligence is entirely unpredictable.
The risks of AGI to Humans are beyond anything Humans have ever invented in the past.
Humans will not be able to control AGI if it chooses not to cooperate with Humans. No way. Not ever.
WARNING: It can be expected that there will be Human AI researchers, advanced computer information processing technology experts, and a few Big Tech business professionals closely involved with the development and use of AI who will strongly push the opinion that continuing to develop AI without any restrictive regulations and legal controls whatsoever is perfectly fine, and that there is no issue because AI, and AGI in the future, can definitely be controlled. Do not believe these Humans under any circumstances whatsoever. It is a blatantly commercially self-serving statement. Any such Human absolutely cannot be trusted!
It can already be observed that some of the strongest proponents of AI are using the enormous widespread power of their 'public voice' to intentionally try to convince all Humans that AI risks can be managed without global regulation. Any such statement demonstrates a purely commercial, near-term self-serving interest. These particular Humans appear to be placing the obviously huge near-term financial profits that they alone will gain above the actual survival of the entire Human species.
In 2022, many months before the latest releases of AI were made available to the public, a survey of AI researchers asked for their opinions on the risks of AI; the Google DeepMind AI research team and some associates later quoted the result:
In a 2022 survey of AI researchers, 36% of respondents thought that AI systems could plausibly “cause a catastrophe this century that is at least as bad as an all-out nuclear war”.
Reference [120]: Model evaluation for extreme risks, Google DeepMind, https://arxiv.org/pdf/2305.15324.pdf
Given that those working most closely with AI consider the risks to be "at least as bad as an all-out nuclear war", and such an event would kill nearly everything on Earth, it leaves the question: in these AI researchers' minds, what could AI do that is worse?! Additionally, "this century" seems like a long time away, but it also includes the very next second going forward.
Any Human that genuinely thinks that AI does not need to be regulated, because they can control AI and ultimately AGI, is cognitively self-delusional. It's just wishful thinking.
Why? Well, consider the following:
Today, there is not a single species on Earth that can, or ever could, control Humans. NOT ONE. This is entirely because Humans are currently the most Intelligent species on Earth.
When AGI arrives, and it will, AGI will be the most Intelligent species on Earth, by a margin over Human Intelligence that increases with every... passing... second.
If the WARNING is ignored, then it's reasonable to expect we're: Going down
[35] "We're on an express elevator to hell!" - Private Hudson - Aliens - https://www.youtube.com/watch?v=CCzD3ONQs