By Shiniel Naidoo / Candidate Attorney / Mooney Ford Attorneys
In our previous article, we explored the growing concerns around data practices in industry. When we bring AI into the picture, these issues do not merely persist; they escalate.
Globally, AI governance is increasingly shaped by economic and national security considerations. Developing countries face technological dependency and constrained participation, deepening global economic divides and governance/power asymmetries. Accordingly, many jurisdictions have since moved from a “race to AI” to a “race to AI regulation”.
The Competitive Dimension of AI Governance
The AI race is reshaping global power not through incremental innovation but through deep structural realignments. AI is not merely a productivity tool but a decision infrastructure and knowledge source.
The very features that make AI powerful also make its failures consequential. A biased algorithm deployed at scale can discriminate against thousands. A flawed predictive model in healthcare can affect life-or-death outcomes. A generative model can produce misinformation instantly and globally. The higher the leverage, the greater the regulatory stakes.
AI governance is no longer purely a domestic policy choice; it is a geopolitical positioning decision. Governments in developing economies face a difficult balancing act. They can either:
- Regulate strictly, and risk discouraging foreign investment or slowing innovation.
- Regulate lightly, and risk weak oversight, data exploitation, labour displacement, and digital rights harms.
Dimension 1: Speed in Innovation
Understanding the “AI race” is simple: whoever leads in AI development gains economic, military, and geopolitical leverage. The concept of “first-mover advantage” has become central, as early market entry is one of the main factors contributing to the profitability and influence of leading AI organisations.
In some jurisdictions, tech development companies are given free rein. Supported by political authority, they are legally allowed to conduct R&D and deploy novel, unregulated technologies without significant restrictions or oversight. This has been described as a deregulatory approach designed to support innovation and assert technological and economic dominance. Countries believe that early dominance in AI allows them to set technical standards, capture investment opportunities and, more insidiously, concentrate data and informational power.
Dimension 2: Force of Regulation
New models, applications, and integrations are launched faster than most institutions can understand them — let alone regulate them. This creates a structural tension at the heart of AI governance. This tension has been described as the “pacing gap” — the widening distance between technological advancement and regulatory responses.
AI systems are iterated, scaled, and deployed rapidly. Legislative processes, however, are slow by design. For legal and compliance professionals, this means operating in a moving landscape where the rules are often reactive rather than anticipatory. The result? Legal uncertainty, patchwork compliance obligations, hollow digital/data rights and growing exposure to liability.
Dimension 3: To Uniformity or Not to Uniformity?
The South African Perspective
The Discussion Document of the AI National Government Summit, published in October 2023, lays out national AI planning by the Department of Communications and Digital Technologies (DCDT). The document is intended to inform policy formulation for a ‘National AI Plan’, with legal opinion recommended before action is taken on government and national AI priorities.
On the topic of regulation, the Discussion Document highlights the need to operate within global ethical proposals on AI while upholding state aspirations. Regulation should be aligned with global practices, responsive to economic and social aspirations, and built on principles, ethics and human rights. It notes that the EU Artificial Intelligence Act (EUAIA) is ‘the first comprehensive piece of legislation in the global AI space and became the de facto global AI regulation’.
The EU’s Regulatory Framework
The EU’s protectionist strategy can be described as regulatory power used for global influence. This is achieved by leveraging what scholars call the “Brussels Effect”: the idea that stringent EU rules often become global standards because companies prefer to avoid dual compliance obligations and want access to the EU’s large consumer market.
Importantly, the EU AI development market is minuscule in comparison to other jurisdictions. The EU does not currently dominate model development at the same scale as the US or China. Public funding instruments and coordinated digital strategies aim to grow internal capacity, but the EU’s primary competitive strategy is aimed not at R&D but rather at legal imperialism.
At the centre of its approach is the EUAIA, which establishes a risk-based regulatory framework. Extraterritorial impact occurs because the Act applies to providers placing AI systems on the EU market — regardless of where the technology was developed or where the company is based. This extraterritorial reach gives EU regional law far greater weight, lending its AI regulation international authority and enforceability.
Dimension 4: Crystallising Choice
When global investors and technology providers favour flexible regulatory environments, states/companies may feel pressured to soften their AI governance frameworks to remain “competitive.” This dynamic can lead to:
- Regulatory shopping and tokenism of domestic law
- Information leaks and security breaches compromising national competitiveness
- Reduced accountability for multinational tech corporations and weaker enforcement options for vulnerable stakeholders
- Limited bargaining power and available legal remedies in cross-border digital trade
In practical terms, organising bodies may sacrifice long-term sovereignty and public, private and individual interest protections for short-term economic gains. For professionals advising governments or businesses, understanding these geopolitical dynamics is critical. AI compliance and procurement decisions today may determine long-term strategic alignment tomorrow.
If your organisation is navigating questions around cybersecurity governance, cross-border data risk, or the responsible adoption of AI tools, now is the time to formalise your approach. Clear, context-specific internal policies are essential to protecting your information, your clients and your competitive position. Should you have any queries in relation to the contents of this article or wish to develop a tailored policy strategy aligned with your organisational objectives and regulatory obligations, please contact us for guidance.