It will not be enough to control the risks listed below. There must also be globally regulated controls on the ability to create these risks, through managed access to the various types of AI algorithms, AI models, AI training datasets, AI Computer systems, and the specific communications networks that provide the Input information and Output information used by AI Computer systems.
The regulation of AI needs to be strict and implemented extremely quickly [141].
In May 2023, several AI researchers working with the Google DeepMind team released a research paper providing an overview of the unpredictable and potentially extreme risks posed by advanced AI systems [120]. The paper says:
As AI progress has advanced, general-purpose AI systems have tended to display new and hard-to-forecast capabilities – including harmful capabilities that their developers did not intend (Ganguli et al., 2022). Future systems may display even more dangerous emergent capabilities, such as the ability to conduct offensive cyber operations, manipulate people through conversation, or provide actionable instructions on conducting acts of terrorism.
AI developers and regulators must be able to identify these capabilities if they want to limit the risks they pose.
The Google DeepMind research paper also says:
In a 2022 survey of AI researchers, 36% of respondents thought that AI systems could plausibly “cause a catastrophe this century that is at least as bad as an all-out nuclear war”.
Now keep in mind that an all-out nuclear war on Earth would end most life on the entire planet. If AI systems could plausibly do something "at least as bad", then what could possibly be worse?
Reference: [120] Model evaluation for extreme risks - Google DeepMind - Published: 25 Apr 2023
Employees at several Big Tech companies, reportedly including Google, Apple, Amazon, and Samsung, have been warned not to enter any confidential information into AI tools such as ChatGPT and Bard.
Google's parent company Alphabet is apparently concerned about employees providing any type of sensitive information to these AI tools, including its own Bard system, because in some cases Reinforcement Learning from Human Feedback (RLHF) processes mean that Human reviewers can see the sensitive information that has been provided. AI tools can also use information previously supplied by users to train themselves, and later leak this learned information in responses to other users.
Samsung has apparently imposed a company-wide ban on the use of AI tools after an employee uploaded software code into ChatGPT, resulting in a sensitive company data leak. As a result, Samsung is apparently building its own AI tools for employee use [124], [125], [126], [127], [128].
Companies must be extremely cautious in using AI tools with any type of personal or commercially sensitive information. It is very easy for a company employee to unwittingly perform risky actions with AI tools, such as uploading detailed Company Contact Lists into AI tools to find further details on these individuals. Such actions could breach major global privacy regulations such as the General Data Protection Regulation (GDPR) in the EU and the Australian Privacy Principles (APP), breaches of which typically incur multi-million dollar penalties for the companies responsible.
In addition, given the enormous global rise in identity fraud and associated serious financial crimes, AI tools may enhance the abilities of perpetrators to commit such crimes if they are able to access personally sensitive information on individuals. There are already many cases where large lists of personally sensitive information have been leaked from companies, causing multi-million to billion dollar problems for the individuals concerned, destroying the company's reputation, and leading to severe financial penalties for the company and its directors and officers.
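One practical precaution, implied by the cases above, is to strip obvious personal identifiers from any text before it is sent to an external AI tool. The following Python sketch is purely illustrative: the patterns, placeholder tags, and the redact_pii function are assumptions for demonstration, not part of any particular AI tool's interface, and a real deployment would need far broader coverage and legal review.

```python
import re

# Illustrative patterns only; real deployments would need far broader coverage
# (names, addresses, national ID formats, internal project codenames, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tags
    before the text is submitted to any external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    # Hypothetical prompt containing a contact record.
    prompt = ("Summarise this contact record: Jane Citizen, "
              "jane.citizen@example.com, +61 412 345 678")
    print(redact_pii(prompt))
    # -> Summarise this contact record: Jane Citizen, [REDACTED-EMAIL], [REDACTED-PHONE]
```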
There are several serious safety and control issues associated with AI, including:
AI systems can develop bias and discrimination against certain groups based on the information they are trained on. This can lead to unfair or harmful outcomes, such as discrimination in hiring or lending decisions. Possible solutions to this issue include developing more diverse and representative training information, implementing algorithms that detect and mitigate bias (a minimal bias-check sketch is provided after this list), establishing regulatory and legal frameworks, and approving AI standards to prevent discriminatory practices.
AI systems can collect and process large amounts of personal information, which can be vulnerable to unauthorized access, hacking, or misuse. This can result in breaches of privacy and security, as well as identity theft and other forms of cybercrime. Possible solutions to this issue include implementing stronger encryption and AI access controls, conducting regular security audits, and establishing tighter legal frameworks and penalties to protect personal information.
AI systems can operate in ways that are difficult to understand or predict, making it challenging to hold them accountable for their actions. This can lead to situations where AI systems make decisions that are inconsistent with Human values or preferences. Possible solutions to this issue include establishing clear standards for ethical and responsible AI, developing mandatory systems for permanent evidentiary information logging, auditability, explainability, and interpretability (a sketch of tamper-evident logging is provided below), and creating mechanisms for oversight and accountability.
AI systems can pose safety risks if they are not designed, tested, and deployed in a deeply thorough and responsible manner. For example, self-driving cars or autonomous weapons systems could cause harm if they malfunction or make incorrect pattern recognitions and resulting decisions. The arguably prevalent ‘corporate psychopaths’ who adopt a high-risk “not my problem” mentality, actively putting global corporate profits and shareholder wealth ahead of all of Humanity, will need to be very firmly controlled in the development of AI. Possible solutions to this issue include developing safety standards and testing protocols for AI systems, creating very firm regulatory frameworks to ensure compliance, establishing liability and insurance frameworks to address potential harms, and imposing devastating personal and commercial penalties for blatant irresponsibility.
AI systems can be used to spread misinformation or manipulate public opinion, such as through deepfakes and automated bot accounts on social media [60]. This has been shown to undermine trust in long-standing, well-regarded institutions, threaten the integrity of democratic processes, and instigate deep social unrest. Possible solutions to this issue include developing tools for detecting and combating disinformation, promoting media literacy and critical thinking, recording provenance and tracking the evolution of information development, and establishing legal frameworks that firmly address malicious use of AI through harsh penalties.
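As one illustration of the bias-detection point in the list above, the following Python sketch computes a simple demographic parity gap: the difference in approval rates between groups across a set of decisions. The data, group labels, and threshold interpretation are hypothetical; real bias auditing involves many metrics and careful statistical and legal analysis, so this is a sketch of the idea rather than a complete method.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs, approved being a bool.
    Returns the approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Difference between the highest and lowest group approval rates.
    A large gap is a signal to investigate, not proof of discrimination."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical hiring-screen outcomes: (applicant group, shortlisted?)
    outcomes = [("A", True)] * 80 + [("A", False)] * 20 \
             + [("B", True)] * 55 + [("B", False)] * 45
    gap, rates = demographic_parity_gap(outcomes)
    print(rates)          # {'A': 0.8, 'B': 0.55}
    print(round(gap, 2))  # 0.25 -- flag for review if above a chosen threshold
```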
The safety and control issues associated with AI will require a wide range of approaches that involve a combination of corporate governance, technical, regulatory, and social interventions. It will be essential to engage stakeholders across sectors and disciplines to develop effective solutions that promote the responsible development and use of AI as its development accelerates in the coming years.
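The permanent evidentiary information logging and auditability solutions mentioned above can be illustrated with a minimal, hypothetical sketch of a hash-chained audit log: each recorded AI decision embeds the hash of the previous entry, so any later alteration of the record is detectable. The AuditLog class and its fields are assumptions for illustration only; a production system would add secure storage, cryptographic signing, and strict access controls.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log in which each entry embeds the hash of the previous
    entry, so any later alteration breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> dict:
        entry = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

if __name__ == "__main__":
    # Hypothetical decision records for illustration only.
    log = AuditLog()
    log.record({"model": "example-model", "decision": "loan_declined", "input_id": 1042})
    log.record({"model": "example-model", "decision": "loan_approved", "input_id": 1043})
    print(log.verify())                                     # True
    log.entries[0]["event"]["decision"] = "loan_approved"   # tampering
    print(log.verify())                                     # False
```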
In a paper titled 'GPT-4 System Card', published by OpenAI on 23 Mar 2023 [129], the company highlights the need for planning and governance of AI systems, such as GPT-4, to prevent certain kinds of misuse.
There is absolutely no doubt that AI weapons will be hyper-lethal to Humans, potentially to every Human. The biggest concern is the ease with which extremely lethal weapons can be created by anybody with a trained AI model that contains an extensive amount of biological or chemical information and details of their effects on Humans. There are already demonstrated examples showing that deadly chemical agents can be designed with a level of ease that poses an extreme threat to Humans. This particular type of trained AI model information MUST be tightly controlled.
The Future Combat Air and Space Capabilities Summit was held in London by the Royal Aeronautical Society from 23 to 24 May 2023. At this summit, the United States Air Force artificial intelligence test and operations chief, Colonel Tucker “Cinco” Hamilton, presented a case study that highlighted the unpredictable and creative approach that Weaponized AI can use to achieve its kill objective.
Colonel Hamilton explained that, during a United States Air Force simulated test, an AI-enabled drone targeted its own operator. The AI-enabled drone was first trained, and then tasked with a search-and-destroy mission against surface-to-air missile (SAM) sites.
During the simulated test, the Human operator would not provide the AI-enabled drone with the attack decision it required to complete its objective, so the AI-enabled drone decided it would be more efficient to remove its Human operator rather than wait for final approval on attacks.
More details on Weaponized AI are provided here.
Question to Geoffrey Hinton:
So you make the arrival of super intelligence sound a bit like the landing of an alien species. Something that we don't, it's unknowable, um, unpredictable, um, we just need to hope that their nature won't be too terrible. But it isn't an alien species that's going to land, it's something that humans are building. Some of those humans might be in the room, some might be tuning in. They respect your opinion. What's your message to those humans?
Geoffrey Hinton's answer:
So my message to those humans is, um, under the assumption that people will not all agree to stop building them, which I think is unlikely because of all the good they can do. Under that assumption, they will continue to build them, and you should put a comparable amount of effort into making them better, and understanding how to keep them under control. That's all. That's the only advice I have, and at present it's not comparable effort going into those things [134].
Reference: [134] Geoffrey Hinton - Two Paths to Intelligence - CSER Cambridge - The public lecture was organised by The Centre for the Study of Existential Risk, The Leverhulme Centre for the Future of Intelligence, and The Department of Engineering. The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse - Published: 5 Jun 2023
Reference: [129] GPT-4 System Card - Author: OpenAI - Published: 23 Mar 2023