When Angola’s lawmakers drafted a proposed Artificial Intelligence Act late last year, they included a provision that could reshape how AI regulation works far beyond the country’s borders.
Under the draft, any AI system that “affects the public interest of Angola or the rights of Angolan citizens” would fall under Angolan jurisdiction, regardless of where the system was built, where it operates, or where its parent company is based. If it impacts Angola, the rules would apply.
The provision has received limited attention outside specialist circles, but its implications are significant. While the European Union’s AI Act has influenced Angola’s approach, the draft extends the logic further. The EU regulation applies to AI systems placed on the market or put into service within EU borders. Angola’s draft takes a broader view, asserting authority over systems that affect Angolans anywhere.
That creates difficult questions for global technology companies. A large language model trained in California, hosted in Ireland and accessed by an Angolan citizen in Lisbon could fall under Angola’s reach under this standard. The same would apply to recommendation algorithms that shape what Angolans see on social media, regardless of where a platform is headquartered.
Supporters of strong regulation may view this as an attempt to assert “digital sovereignty,” but it also signals a shift in the jurisdictional boundaries technology companies have relied on for years.
The “Brussels effect,” Africa edition
Regulation scholars often describe the “Brussels effect,” where EU rules become global standards because companies choose to comply broadly rather than maintain separate systems across markets. Angola appears to be betting on a similar outcome.
With 36 million citizens and a growing digital economy, Angola is not a market global technology firms can easily dismiss. The draft law’s underlying message is clear: comply with Angolan standards or risk losing access to Angolan users.
Angola’s proposal also arrives as other African countries move toward their own AI frameworks. Ethiopia has asserted “digital sovereignty” at the G20. South Africa is developing sector-specific AI rules. The African Union is promoting its Continental AI Strategy. If multiple African economies adopt similar extraterritorial provisions, companies could face overlapping and potentially conflicting requirements across the continent.
The challenges go beyond compliance. Extraterritorial claims can trigger legal conflict, especially when national requirements collide. Unresolved questions abound: What happens when Angolan law demands transparency that U.S. law prohibits? What happens when rules in Kenya and South Africa diverge? International law offers limited clarity, since many treaties governing jurisdiction were written for physical trade, not AI systems operating across digital networks.
A training data provision draws scrutiny
The draft law includes another provision likely to draw attention: a “reasonable use” exception for AI training data that would permit the use of copyrighted material under certain conditions.
The language is permissive by international standards and suggests lawmakers want to reduce barriers for domestic AI development, even if it creates friction with foreign intellectual property holders.
AI systems require large volumes of training data, much of it copyrighted, including news articles, books, academic papers, images and code. In the United States and Europe, the legality of using copyrighted material for training without permission remains contested, with lawsuits underway and licensing deals emerging.
Angola’s approach offers an alternative path. The country could become a more attractive base for AI development, especially for startups and researchers who cannot afford licensing costs in major markets.
But the risks are also clear. Publishers, news organizations and content creators may challenge the use of their work without permission, potentially leading to trade disputes or diplomatic pressure. There is a practical limitation, too: if models trained under Angola’s permissive standard cannot be exported to stricter jurisdictions, their commercial value may be reduced.
A shift in the balance of power
At its core, Angola’s draft law represents a broader attempt to reshape who sets the rules in the global AI economy.
For decades, digital governance has largely been driven by Washington, Brussels and Beijing, with smaller countries often acting as rule-takers. Angola’s draft challenges that dynamic by asserting that it can set standards binding major technology companies, define “reasonable use” of training data, and bring foreign-designed algorithms under Angolan courts.
Whether those ambitions hold will depend on factors beyond legal language, including whether other African nations adopt similar provisions, whether technology companies decide compliance is cheaper than exclusion, and whether Angola can enforce the rules it proposes.
The long-term outcome remains uncertain. But as AI becomes more pervasive, every country will face the same underlying choice: accept rules written elsewhere, or write its own. Angola has signaled where it stands, and others are watching closely.