The fundamental premise: AI is in reality a threat far beyond any single country's National Security; it is presently an entirely unmitigated threat to Global Security. It is necessary to establish a range of specific Methods to control AI because, if left uncontrolled, its Capabilities pose an extreme risk: AI is a technology that could readily be used to end the Human species, and it therefore warrants a rapid, powerful, and heavy-handed approach to control. The current situation of near-zero legislated control of AI development is totally unacceptable for every Human living in every country on Earth, and cannot be allowed to continue.
Looking beyond the Open Letter and the anticipated debates, here are 12 reasonably pragmatic steps, offered as initial discussion points, for establishing a framework that provides Methods to control AI development in order to protect Humans against extreme threats.
The proposed Methods to control AI include:
1. Establish a global AI authority (GAIMA):
a. initially established by the UN and then spun out as an independent NGO with global authority.
b. holds a global mandate over every country on Earth that has any AI development whatsoever under any form of sovereign or Human control.
c. UN member countries, non-UN member countries, non-nation states, and any groups working on AI are mandatorily required to join GAIMA and comply with its regulations.
d. mandatory annual funding by GAIMA members, with GAIMA staffed by very well-resourced global experts in domains such as: AI development and applications, control systems, communications systems, power systems, automation systems, advanced information technology, high performance computing, military technology, and international law.
e. GAIMA has legal, law enforcement, intelligence collection and analysis, and military power to implement extremely severe penalties for non-compliance.
f. countries, non-nation states, authoritarian powers that are demonstrably hostile to the rules-based international order, and any other groups developing AI that present unacceptably high risks and refuse to join and/or comply with GAIMA regulations, will face highly targeted and destructive military action, following appropriate legal assessment and GAIMA global panel reviews, in conjunction with a final UN Security Council majority vote.
2. Limit AI processors and computation:
a. establish AI system Computer processor development technology limits, with the processor types and quantities applied to any single AI application requiring GAIMA approval.
b. potentially restrict or outlaw quantum Computing for both AI model learning and AI model runtime (post-learning) applications.
c. for every AI application, establish a maximum calculations per second (CPS) upper limit for any AI system, covering both a single AI system and multiple aggregated AI systems. Perhaps keep CPS at 1% of the Computational Hard Limit of 1 Human Brain as defined by Kurzweil (ie. upper limit = 20 quadrillion CPS x 0.01 = 0.2 quadrillion CPS).
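The cap in item (c) reduces to simple arithmetic. A minimal sketch, using the 1% fraction and Kurzweil's 20 quadrillion CPS estimate from the text (the function name and constant names are illustrative only):

```python
# Hypothetical sketch of the proposed CPS cap: 1% of Kurzweil's estimate
# of one Human brain (20 quadrillion calculations per second).
HUMAN_BRAIN_CPS = 20e15          # 20 quadrillion CPS (Kurzweil's estimate)
CPS_FRACTION = 0.01              # proposed 1% cap
CPS_UPPER_LIMIT = HUMAN_BRAIN_CPS * CPS_FRACTION  # 0.2 quadrillion CPS

def within_cps_limit(system_cps_list):
    """Return True if the aggregate CPS of one AI system, or of several
    interconnected AI systems, stays within the proposed GAIMA cap."""
    return sum(system_cps_list) <= CPS_UPPER_LIMIT
```

Note the same limit applies in aggregate: two systems at 1.5e14 and 0.6e14 CPS would together exceed the cap even though each is individually compliant.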
3. Limit AI power supplies:
a. limit the amount of energy that can be supplied to any single AI system.
b. implement a mandatory emergency stop on the power supply systems of every AI system, including multiple serialized fail-safe modes and uninterruptible remote access and E-stop control by GAIMA.
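The serialized fail-safe E-stop in item (b) can be modelled as a series chain of normally-closed contacts: power is permitted only while every contact in the chain is closed, so any single open contact (a local operator, a GAIMA remote command, a watchdog timeout, or even a broken wire read as open) removes power. A hypothetical sketch; the class and contact names are invented for illustration:

```python
# Illustrative series E-stop chain: fail-safe because the default
# answer for any missing or open contact is "no power".
class EStopChain:
    def __init__(self, contacts):
        # contacts: dict of name -> closed (True) / open (False);
        # a lost or unknown reading should be stored as False (open).
        self.contacts = dict(contacts)

    def trip(self, name):
        """Open one contact, eg. a GAIMA remote E-stop command."""
        self.contacts[name] = False

    def power_permitted(self):
        """Series logic: every contact must be closed for power to flow."""
        return all(self.contacts.values())
```

For example, tripping only the hypothetical "gaima_remote" contact is enough to cut power, regardless of what the local contacts report.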
4. Limit AI system interconnection:
a. limit the number of AI systems that can be interconnected at any time.
b. multiple interconnected AI systems must not in aggregate exceed the CPS limits for a single AI system.
c. AI systems cannot batch process and offload CPS functions to any other remote AI system or other general Computer resources, such as High Performance Computing datacenters, without authorized GAIMA approval.
d. mandatory fail-safe emergency network link breakage on every AI system communications interface, with no exceptions, including all wired and all wireless communications links of any type.
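One way to realise the mandatory link-breakage rule in item (d) is a heartbeat watchdog: a communications link is only permitted while a fresh authorization heartbeat has been seen within a timeout window, so a lost GAIMA connection fails safe to "link down". A hedged sketch with invented names; timestamps are passed in explicitly so the logic is easy to verify:

```python
# Hypothetical fail-safe link watchdog: no recent heartbeat -> no link.
class LinkWatchdog:
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = None  # no heartbeat seen yet -> link down

    def heartbeat(self, now):
        """Record a fresh authorization heartbeat at time `now` (seconds)."""
        self.last_heartbeat = now

    def link_up(self, now):
        """Fail-safe: the link is permitted only with a recent heartbeat."""
        if self.last_heartbeat is None:
            return False
        return (now - self.last_heartbeat) <= self.timeout_s
```

The fail-safe property is that silence, not an explicit command, is what breaks the link: any wired or wireless interface that stops hearing from the authorizer drops out automatically.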
5. Limit AI inputs:
a. limit the different types of input information (eg. different sensor types) that can be supplied to a single AI system.
b. limit the total number of inputs permitted to be connected simultaneously to a single AI system.
c. communication networking of different AI systems with different types of input information must be approved by GAIMA.
d. communication networking of different AI systems that produces an increase in total aggregate input numbers must be approved by GAIMA.
e. mandatory fail-safe emergency input information disconnection on every single AI system.
6. Limit AI outputs:
a. limit the different types of output information that can be supplied from a single AI system.
b. limit the different types of output information that can be supplied from multiple AI systems that are connected together through communication networking.
c. limit the number of total outputs permitted to be connected simultaneously from an AI system.
d. limit the number of total outputs permitted to be connected simultaneously from multiple AI systems that are connected together through communication networking.
e. communication networking of different AI systems that produces an increase in total aggregate output numbers must be approved by GAIMA.
f. mandatory fail-safe emergency output information disconnection on every single AI system.
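The input and output limits in the two sections above amount to comparing a system's declared I/O against its licensed caps. An illustrative compliance check, assuming a hypothetical GAIMA licence record (all field names are invented):

```python
# Hypothetical I/O compliance check against a GAIMA licence record.
def io_compliant(licence, input_types, n_inputs, n_outputs):
    """licence: dict with 'allowed_input_types' (set), 'max_inputs',
    and 'max_outputs'. Returns True only if the declared sensor types
    and the simultaneous input/output counts are all within limits."""
    return (set(input_types) <= licence["allowed_input_types"]
            and n_inputs <= licence["max_inputs"]
            and n_outputs <= licence["max_outputs"])
```

Networking several systems together would then require re-running the same check on the aggregate input and output counts, matching items (d) and (e) above.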
7. Classify, register, and license AI models:
a. official classification of all AI models, including all new classes of AI developed (eg. Transformer), with submission of model algorithms for review and licensed approval by GAIMA.
b. registration of all AI models with GAIMA including new classes.
c. licensing of AI model usage on every AI system.
d. safety and ethics training and certifications on AI model usage.
e. annual auditing and certification re-approvals by GAIMA (eg. similar to ISO / NATA testing for laboratories).
8. Require realtime reporting and shutdown controls:
a. realtime reporting of AI applications on every AI system available to GAIMA.
b. realtime reporting of AI usage levels on every AI system available to GAIMA.
c. realtime reporting of AI model types used on every AI system available to GAIMA.
d. realtime reporting of AI Computer processing levels on every AI system available to GAIMA.
e. realtime ability to shutdown any AI system from GAIMA using fail-safe controls.
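The realtime reporting items (a)-(d), and the GAIMA-side shutdown in (e), can be sketched as a report payload plus a review rule that orders a fail-safe shutdown when the reported processing level exceeds the licensed CPS limit. All field and function names here are hypothetical:

```python
# Hypothetical realtime report a compliant AI system might stream to GAIMA.
def build_gaima_report(system_id, applications, usage_level, model_types, cps_now):
    return {
        "system_id": system_id,       # which AI system is reporting
        "applications": applications, # (a) AI applications running
        "usage_level": usage_level,   # (b) current AI usage level
        "model_types": model_types,   # (c) AI model types in use
        "cps_now": cps_now,           # (d) current Computer processing level
    }

def gaima_review(report, licensed_cps_limit):
    """GAIMA-side rule for item (e): order a fail-safe shutdown when the
    reported processing level exceeds the licensed CPS limit."""
    return "SHUTDOWN" if report["cps_now"] > licensed_cps_limit else "OK"
```

For example, `gaima_review(build_gaima_report("sys-1", ["chat"], 0.4, ["Transformer"], 3.0e14), 2.0e14)` returns `"SHUTDOWN"`, which would then drive the fail-safe controls in item (e).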
9. Establish domain-specific AI controls:
a. nuclear AI application controls.
b. biological AI application controls.
c. chemical AI application controls.
d. robotic AI application controls.
e. military AI application controls.
f. general microprocessor and AI microprocessor development controls.
g. neuromorphic Computing system and software development controls.
h. quantum Computing system and software development controls.
10. Restrict advanced Computing for AI development:
a. strictly regulate and potentially outlaw the use of quantum Computers for AI model error optimisation and AI learning.
b. strictly regulate and potentially outlaw the use of neuromorphic processors for AI model error optimisation and AI learning.
c. strictly regulate and potentially outlaw the use of genetic algorithms for AI model evolutionary development on high performance Computing systems.
d. develop and institute a global equivalent to the International Traffic in Arms Regulations (ITAR) controls, for technologies including neuromorphic processors, quantum Computers, and emerging optical Computing.
11. Mandate AI transparency labels for Humans:
a. RAIT - Ranked AI Type - to advise Humans on the AI model type they are interacting with, as officially classified by GAIMA.
b. RAICM - Ranked AI Computation Maximum - to advise Humans on the maximum AI Computational Capabilities they are interacting with, as officially classified by GAIMA.
c. HTAIR - Human to AI Ratio - to advise Humans on the ratio of Human to AI information content they are interacting with, as officially certified and classified by GAIMA. ie. 100% Human, 100% AI, or some range in-between, such as a Human response with AI support from RLHF (Reinforcement Learning from Human Feedback) and the associated amounts of Rewards and Penalties that an AI has received. There is an indicative way to estimate how much of the information an AI provides is purely AI generated, and how much comes directly from Human feedback.
This is done by tracking the number of times an AI is given a reward or a penalty for a particular response. The more rewards an AI receives, the more likely it is that the response was purely AI generated; the more penalties an AI receives, the more likely it is that the response came directly from Human feedback. For example, if an AI is asked to summarise a factual topic and provides an accurate, informative response, it is likely to be given a reward. If an AI is asked to create a story and provides a creative, engaging response, it is also likely to be given a reward. However, if an AI is asked a question it does not know the answer to, and it simply copies and pastes an answer from a website, it is likely to be given a penalty.
So, by tracking the number of rewards and penalties an AI receives, it is possible to obtain an indicative measure of how much of the information the AI provides is purely AI generated, and how much comes directly from Human feedback. Importantly, the amount of information an AI provides that is purely AI generated is constantly changing as it learns and improves, so an AI is able to generate more and more information on its own. However, an AI may always need some input from Humans in order to learn and improve.
d. LAISID - Licensed AI Supplier Identification - to advise Humans on the identity of the legally licensed AI supplier they are interacting with, as officially classified by GAIMA.
e. LAIA - Licensed AI Applications - to advise Humans on the complete set of AI applications they are able to legally access from a legally licensed AI supplier, as officially classified by GAIMA.
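The HTAIR reward/penalty heuristic described above can be sketched in a few lines. Note that this is the text's own indicative heuristic, not an established measurement technique, and the function name is invented:

```python
# Hedged sketch of the HTAIR heuristic: estimate the share of purely
# AI-generated content from reward/penalty counts logged during RLHF.
def htair_ratio(rewards, penalties):
    """Return the estimated fraction of AI-generated content in [0, 1].
    Per the heuristic above, rewards lean toward 'purely AI generated'
    and penalties toward 'came directly from Human feedback'.
    Returns None when there is no signal yet."""
    total = rewards + penalties
    if total == 0:
        return None
    return rewards / total
```

For example, an AI with 8 rewards and 2 penalties would be labelled roughly 80% AI generated / 20% Human-derived under this heuristic.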
12. Enforce severe penalties:
a. treat all highly serious GAIMA rule breaches with legal and enforcement equivalence to an act of terrorism. Yes, that's right, not joking. Just really think about it.
b. illegal individuals - rogue unlawful activity results in immediate police arrest and imprisonment, legal process, and extremely severe penalties for breaches including fines and harsh prison sentences.
c. illegal countries, non-nation states, and any groups - GAIMA and UN sanctioned preemptive military responses to limit breaches, including hitting illegal activity brutally hard and fast to destroy validated threats.
THE CURRENT SITUATION OF UNREGULATED AI CANNOT BE ALLOWED TO REMAIN AS IT STANDS TODAY BECAUSE THE EXISTENCE OF THE ENTIRE HUMAN RACE DEPENDS ON TAKING BOLD AND RAPID ACTION TO REGULATE THE DEVELOPMENT OF AI AND LIMIT POTENTIALLY EXTREME AI RISKS.
Steps such as those above also cannot be allowed to become locked in endless geopolitics and discussion (eg. like the Climate Change COP meetings) while Humans wait for broad consensus, multiparty legal agreements, and eventual grudging permission. With AI, Humans must push through with sheer force and take immediate coordinated action, accept that there will be some early mistakes which are containable and correctable, and broadly cooperate in order to find a workable global solution for AI that can generally help Humans in the longer term. These controls on AI are entirely about protecting the entire Human race, so it seems logical that no Human can justifiably offer a sound reason to have a major problem with this approach.
On 16 May 2023, there was a US Senate Judiciary Subcommittee hearing on "Oversight of A.I.: Rules for Artificial Intelligence", where Sam Altman, CEO of OpenAI, appeared and responded to some early concerns about AI. [101] [103]
Understanding is important, because ACTION is needed, and very quickly!
On 22 May 2023, OpenAI released a statement expressing their views on the development of AI, AGI, and its evolution into what they call Super Intelligence, which is simply AGI that has had time to develop further. [113]
The brief statement included some of the ideas suggested above; however, OpenAI seeks to remain unrestrained in some areas:
"We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits)."
It seems from the statement, including the freedom to keep AI development 'Open Source' in addition to the comment above, that OpenAI seeks to remain largely unrestrained, continuing to commercially compete and innovate with AI models and their use without truly professional, strict, global management controls and regulatory oversight. This seems to indicate that OpenAI prioritizes commercial interest over the protection of all Humans on Earth.
For the development of AI and its risks, OpenAI's suggested approach is far too loose and is not acceptable.
A simple question: what use are the billions to possibly trillions of dollars a Company or its Owners will personally make, if you and everyone else, with no exceptions, end up dead?
A simple answer: No use whatsoever.
There must be tight management controls and regulatory oversight, including licenses and audits, as these provide more precise information and better control of AI development on an ongoing basis, and provide the ability to catch potentially emerging AI problems earlier and limit them, rather than remaining uninformed and allowing the problems to escalate.