South Africa’s financial regulator has warned that artificial intelligence presents both significant opportunities and serious risks for the country’s financial system, particularly in the area of information security.
In a report titled Artificial Intelligence in the South African Financial Sector, the Financial Sector Conduct Authority said AI could improve efficiency and innovation but also introduces risks to consumer protection, market conduct, financial stability and institutional integrity.
The report found that AI adoption is uneven across the sector. Banks lead usage at 52%, followed by fintech companies at 50%. Adoption remains relatively low among pension funds (14%), investment firms (11%), insurers (8%) and non-bank lenders (8%).
Banks also accounted for the largest AI investments in 2024. More than 45% of banking institutions surveyed reported spending R30 million or more on AI-related initiatives.
The FSCA said AI can strengthen cyber resilience by detecting threats, identifying vulnerabilities and forecasting attacks through advanced data analysis. At the same time, it warned that cybercriminals are increasingly using AI to launch more sophisticated attacks that are harder to detect and prevent.
A key concern highlighted in the report is the growing reliance on third-party technology providers. The FSCA cautioned that concentrating AI capabilities among a small number of vendors could create systemic risk.
The regulator pointed to last year’s outage at Capitec, South Africa’s largest bank by customer numbers, which disrupted all customer channels after a faulty software update from cybersecurity firm CrowdStrike. The same update affected companies globally, including Delta Air Lines, which grounded about 7,000 flights over four days.
The FSCA said a similar failure at a major AI service provider could trigger cascading disruptions across the financial sector.
Another risk identified is the potential exposure of confidential customer data. AI models may reveal or infer sensitive personal information contained in training datasets, potentially breaching regulations such as South Africa’s Protection of Personal Information Act and the European Union’s General Data Protection Regulation.
The report also highlighted concerns about how AI models are trained. Risks include data poisoning, where training data is deliberately manipulated to distort outcomes, as well as embedded biases in datasets. In financial services, such biases could result in discriminatory outcomes, such as higher loan interest rates or insurance premiums for certain groups.
The FSCA stressed the importance of transparency, urging financial institutions to clearly explain AI-driven decisions. It said customers should be informed when AI is used in decision-making processes that affect them, to build trust and support regulatory oversight.
The regulator also noted the absence of a uniform AI governance framework in South Africa. International frameworks, such as the Organisation for Economic Co-operation and Development’s AI principles and the EU’s AI Act, are not legally binding on South African firms.
“AI systems can introduce new risks, including model risk, operational risk and cybersecurity threats,” the report said. It recommended that financial institutions develop comprehensive risk management frameworks, conduct thorough testing and validation of AI models, and establish robust incident response plans to address potential AI-related failures or breaches.