Whether AI will be developed in a pragmatically controlled way that avoids dangerous tension between countries is difficult to predict, particularly once human nature is factored in.
Countries have strong incentives to cooperate on AI development: the technology has the potential to transform a wide range of industries and sectors, and it can deliver transformational economic, societal, and environmental benefits worldwide.
Many countries are also coming to recognize the risks of unchecked AI development, such as large-scale job displacement, unpredictable threats to national and geopolitical security, and a wide range of serious ethical concerns.
Achieving pragmatic, globally cooperative AI development faces real challenges. One major obstacle is that countries may hold different priorities and values, making it difficult to reach consensus on issues such as social and ethical standards, data privacy protections, and the regulatory frameworks that AI will require.
Furthermore, the rapid pace of technological advancement and the complexity of AI systems across countries make it difficult to anticipate and mitigate risks, which could lead to unintended tensions, or even military conflict, between nations.
AI development is therefore likely to be shaped by a combination of cooperation and competition between countries. While tensions may never be eliminated entirely, pragmatically cooperative and controlled development of AI remains possible if countries are willing to work together and establish clear ethical and regulatory frameworks to guide its development and deployment.