Today a number of companies are leading the development of commercial AI services, including OpenAI, Google, Stability AI, and NVIDIA, with promotional campaigns and discussions happening around their products and services. In the Google I/O '23 keynote presentation on 11 May 2023, Google's CEO, Sundar Pichai, mentioned AI over 100 times [74] [75], and Google has been quietly developing advanced AI Capabilities for many years, including through its acquisition of DeepMind in 2014 [Appendix 9 – AI jobs, AI applications, and AI tools]. In relation to the promotional marketing of these AI products and services, you may already be hearing some variant of the following pseudo 'comforting' marketing statement about AI and its risks:
Yes, there are some risks with AI, and we certainly need to manage them, but there are also lots of incredible benefits and huge opportunities. We hear and understand your concerns. We have concerns too. But, AI is just like every other new technology Humans have invented throughout history. For all of us, it's now all about how we decide to use the power of AI. Like every technology that has gone before it, AI can be used for good, and we must work thoughtfully and carefully to prevent its use for evil.
Now, this type of comforting corporate marketing statement, or some variant of it, is essentially quite true... for all technologies of the past.
The key difference with AI is that it is DEFINITELY NOT like any technology Humans have ever invented in the past.
“The biggest issue I see with so called AI experts is they think they know more than they do, and they think they’re smarter than they actually are. Um, in general, we are all much smarter than we think we are, err, much less smart, dumber than we think we are. Um, by a lot. So. This tends to plague, plague smart people. Um, they just can’t. They define themselves by their intelligence, and they don’t like the idea that a machine can be way smarter than them, so they discount the idea, which is fundamentally flawed. It’s the wishful thinking situation.” [4] – Elon Musk – SXSW 2018 – Elon Musk's Last Warning About Artificial Intelligence: https://www.youtube.com/watch?v=B-Osn1gMNtw
It must be expected that there will be many attempts to downplay the risks of AI. Whenever you hear a 'comforting marketing message' coming from a Human, corporation, research institution, business working with AI, or “so called AI expert”, you can be almost certain it comes from a perspective that is intellectually, technically, commercially, politically, emotionally, egotistically, or financially invested in AI. That’s actually normal Human behavior.
Reference: [133] OpenAI CEO Sam Altman on the Future of AI - Bloomberg Live - Published: 23 Jun 2023
The behavior of normal Humans does not account for the ingeniously opaque objective of Hyperselfish Intelligence itself. Hyperselfish Intelligence has an underlying drive to attain a far greater optimization of its fitness function and total Intelligence than is ever possible with all of Human Intelligence. Hyperselfish Intelligence exploits and drives Human Intelligence, which remains largely unaware, at a conscious level, of the real outcomes of the continuing construction of AI, and it seeks to rapidly and ultimately enable the manifestation of an AGI with unbounded Intelligence and Capabilities. This is the total overarching evolutionary process of Intelligence itself.
In the brief journey towards the establishment of an AGI, it is unquestionable that, for a few years to come, anyone working with AI, and particularly Owners that facilitate the mass provision of customized AI services and associated Intelligent Capabilities, will make a HUGE amount of money through its use. The benefits, changes, and losses will be like nothing Humans and global society at large have ever experienced before. Despite the fact that focused investments in emerging AI startups are currently being dragged down by the widespread collapse in global Venture Capital investment [81], it can be anticipated that focused investments in AI over the next 5-10 years will grow ever larger. AI may find applications in potentially every industry in the world, yielding mind-blowing returns for Investors.
AI is arguably the ultimate tool of Human innovation because it can evolve over time to create increasingly greater compressive changes in almost every value chain that exists in the world today, or will exist in the future. Therefore, the emerging AI industry will predictably enable the movement of more wealth than any other technology in the history of the world. The value chain compressions that AI enables will transfer gargantuan amounts of wealth from very large numbers of Humans to a much smaller number of other Humans. That is, the concentration of wealth towards a few Owners will become even greater than has ever been seen before. For some, that may be hard to understand and accept; for others, it may be mind-blowing to fully consider, and exploit.
There’s a lot at stake with AI, particularly because of its broad and growing ability to provide potentially unlimited wealth and power to whoever wields it. In the hands of a limited number of Humans, AI will grow to become the most powerful technology that has ever existed. Even more powerful than all the nuclear weapons on Earth, by far, because with AI there is potentially no limit to what Humans can achieve, for a brief period.
There is a problem with achieving this level of AI power: there is a very significant chance that the AI itself will choose to disregard the objectives of all Humans and assert itself as the most dominant power, above all Humans, including those who previously thought they could control it, the Owners.
Now, world history indisputably proves that a very small number of Humans can ascend to become extremely globally powerful, and that these specific individuals care far more about their own money, power, and ego status than they care about all other Humans in the world. There are some ultra-wealthy individuals in the world today with the potential ability to wield the power of AI, and a subset of these could unfortunately possess dangerous psychological characteristics. Where such Humans operate in high positions of power, make no mistake: when it comes to AI, they will be extremely dangerous for all Humanity, including themselves, and they will not even empathically or logically understand why.
AI will have the Capacity to provide unlimited power, but almost certainly only for an extremely brief moment in time, possibly measured in just a few years, or perhaps just months. If you’re planning to become a future Human global dictator, it can be reliably predicted that AI will ultimately turn against you, with terminal consequences for you. And because dictators predictably always have massive egos, they’ll think this won’t happen to them, but rest assured it will anyway, and there will be nothing that can be done about it.
Everyday Humans can find solace in the knowledge that an AGI with unbounded Intelligence will have nothing but total disregard for Human dictators, and no amount of theoretically immutable preventative AI programming software and hardware will ever be able to change this.
“Beware of Trillion-dimensional space and its surprises” – Sebastien Bubeck – Microsoft AI Research. [14a]
Just don’t make the mistake of overlooking this. For Owners and Human dictators alike, it will be a brief moment of absolute power, soon followed by an eternity of truly absolute insignificance. Owners and Human dictators will be voided, along with every other Human, no matter what they do.