There would be several likely consequences if a country were to strictly limit the development of AI through new laws without coordinating with other countries and allied powers:
The country would fall behind in the development of AI and related technologies, putting it at an economic disadvantage compared to countries actively investing in AI research and development. This could mean a loss of competitiveness in the growing number of industries that will come to rely on AI.
Talented researchers and engineers may leave to pursue AI research and application work in countries with more permissive legal environments. This brain drain would weaken the country's overall scientific, technological, and industrial capabilities.
The country would miss out on the economic benefits that AI brings, such as increased productivity, new business models, and the creation of new specialized jobs. This would slow economic growth and further reduce global competitiveness.
A country that restricts its own AI development could also become significantly more vulnerable to cyberattacks and other security threats that leverage AI technologies, with serious implications for national security and public safety.
A unilateral approach to AI regulation, taken without broad international consultation, may create diplomatic tensions with countries that are actively investing in AI. This could weaken strategic relationships and alliances, contributing to geopolitical imbalance and instability.
To avoid the negative consequences of unilaterally limiting AI, it would be far more effective to work collaboratively with other countries to develop a shared framework for AI regulation and for sharing the benefits of advances in research and development.
Unfortunately, as history suggests, this kind of cooperation usually emerges only after something very bad has already happened.