South Africa’s financial regulators have released the country’s first comprehensive look at how artificial intelligence is being used across the financial sector, outlining early adoption trends, investment plans, risks and emerging regulatory needs.
The Financial Sector Conduct Authority and the Prudential Authority published the joint report after surveying institutions between October and December 2024. The voluntary survey drew about 2,100 responses from banks, insurers, investment firms, pension funds, payment providers, lenders, fintechs and other operators. Of all respondents, 220 — or 10.6 percent — said they currently use AI.
The report focuses on banking, insurance and investment firms, which together hold the largest share of sector assets.
Adoption and investment trends
Banks lead AI adoption, with 52 percent of banking respondents reporting deployed systems. Payment providers follow at 50 percent. Retirement funds reported 14 percent adoption, while insurers and lenders had the lowest uptake at 8 percent each.
Investment levels vary sharply. Nearly 45 percent of banks using AI planned to invest more than R30 million in 2024. By contrast, 62 percent of investment providers and 41 percent of insurers expected to invest less than R1 million. Most nonbank institutions reported modest or exploratory investment plans.
Current AI use
Traditional AI use cases are concentrated in operational efficiency and risk mitigation. Institutions use AI for document analysis, workflow automation, decision-support systems and fraud detection. Banks, lenders and payment firms report significant use of AI in credit scoring and underwriting.
Future plans include real-time fraud detection, cybersecurity analytics and enhanced monitoring for money laundering and terrorism financing. Insurers aim to expand AI for underwriting and claims management, while retirement funds and investment firms plan to explore applications such as portfolio optimization and risk modeling.
Generative AI use
Generative AI adoption is at an earlier stage. Current uses include drafting documents, summarizing reports and generating presentations. Some firms use GenAI for marketing or client communication.
Planned uses include customer-facing chatbots, automated service channels, risk scoring, report generation and expanded workflow automation.
Risks and constraints
Data privacy and protection emerged as the most frequently cited risks, especially under South Africa’s Protection of Personal Information Act. Institutions also flagged cybersecurity vulnerabilities, model bias, poor data quality and the potential for consumer harm from inaccurate or opaque AI outputs. GenAI raises additional concerns about misleading content, intellectual property issues and responsible use of large language models.
Regulatory requirements, talent shortages, limited internal expertise, transparency challenges and legacy systems were named as major barriers to adoption. Many firms said they lack the resources or skills to scale AI responsibly.
Governance gaps
The report found that governance frameworks remain uneven. Many institutions rely on existing risk-management structures and have not developed dedicated AI oversight mechanisms. Respondents highlighted the need for clear accountability rules, human oversight, model validation and continuous monitoring. Explainability tools such as SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) were cited as useful for making model outputs interpretable.
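SHAP-style attribution rests on Shapley values from cooperative game theory: each feature's contribution is its average marginal effect on the model's output across all feature orderings. As a minimal, self-contained sketch (the toy credit-scoring model, feature names and numbers below are hypothetical illustrations, not drawn from the report), the attribution can be computed exactly for a small model by enumerating feature subsets:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy "credit score": weighted sum of three applicant features.
def score(income, debt, history):
    return 0.5 * income - 0.3 * debt + 0.2 * history

def shapley_values(model, x, baseline):
    """Exact Shapley attribution for each feature: the weighted average of
    its marginal contribution over all subsets of the other features,
    with absent features replaced by a baseline value."""
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                phi[i] += weight * (model(*with_i) - model(*without_i))
    return phi

applicant = [60.0, 20.0, 7.0]   # income, debt, history (arbitrary units)
baseline = [0.0, 0.0, 0.0]      # reference applicant
phi = shapley_values(score, applicant, baseline)
# By the efficiency property, the attributions sum to
# score(applicant) - score(baseline).
```

Libraries like `shap` approximate this computation for large models, since exact enumeration grows exponentially with the number of features; the point of such tools in a governance context is that each automated decision can be decomposed into per-feature contributions that a human reviewer can inspect.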
Institutions also stressed the need for clearer regulatory expectations around consumer disclosures, especially when automated decision-making is involved.
Existing regulatory approach
South Africa currently regulates AI-enabled automated advice primarily through existing laws — including POPIA and the Financial Advisory and Intermediary Services Act — rather than AI-specific rules.
POPIA’s provisions on automated decision-making, aligned with the EU’s GDPR, restrict decisions based solely on automation that have legal or significant effects unless specific conditions are met. These include contractual necessity, legal authorization or explicit consent. POPIA also requires mechanisms that allow individuals to make representations or request human review.
Under the FAIS Act, providers offering automated advice must meet standards on human oversight, internal controls, algorithm testing, resource adequacy and governance. The General Code of Conduct applies equally to digital tools, including robo-advisers.
The forthcoming Conduct of Financial Institutions Bill is expected to strengthen oversight of digital innovations and reinforce consumer protections.
The FSCA and PA said the report will form the basis of a discussion paper and further engagement with stakeholders on regulatory and supervisory questions.