Artificial Intelligence (AI) is transforming industries across all sectors, but it is essential to remember, as former IBM CEO Ginni Rometty puts it: "AI is not a substitute for human intelligence, it is a tool to amplify human creativity and capability." However, as the volume of critical decisions and solutions provided (or suggested) by AI systems continues to surge, the question of legal liability becomes more pressing. What happens when an AI system causes harm, and who is ultimately accountable: developers, employers, or platforms?
Historically, liability was based on human negligence or intentional misconduct; in other words, on a human failure, whether through an act, error or omission. An AI system, however, operates without consciousness and is therefore not considered a legal person. This makes it difficult to apply existing law directly and invites an examination of how AI-driven decision-making tools used in business create new layers of liability.
South Africa does not have laws or regulations that specifically govern AI. Instead, AI is currently regulated under general legislation, including the Protection of Personal Information Act (POPIA). Section 71(1) protects data subjects from being subjected to automated decision-making that results in legal consequences for them. One example of automated decision-making is where an AI system profiles credit applicants, leading to discriminatory decisions based on arbitrary or biased selection criteria. Section 71(1) prohibits such decision-making where it is based solely on the profile created by an AI system. The Act allows exceptions, but the baseline discourages exclusive reliance on automated systems.
This creates a risk for firms: if appropriate procedures to prevent sole reliance on automated decision-making are not implemented, firms could face claims for breach of the Act. It could also expose directors to liability for failing in their duty to the company to ensure that such procedures are implemented.
Until AI legislation has been enacted, South African courts will continue to apply traditional legal principles directed at natural and juristic persons. In one matter, Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others (7940/2024P) [2025] ZAKZPHC 2; 2025 (3) SA 534 (KZP) (8 January 2025), the legal firm representing the appellant faced criticism and financial penalties and was referred to the Legal Practice Council for misleading the court. It was found that the firm had used AI as a replacement for its own legal research, and that the legal citations quoted in support of a key issue in dispute were inaccurate: only two of the nine cited cases were found to exist, and even those two did not support the propositions for which they were cited. The court upheld the doctrine that professionals are held to higher standards, remain responsible for their output regardless of the tools used, and have a duty to validate their work. To this end, the Judge indicated that failing to do so is "irresponsible and downright unprofessional".
More recently, in the matter of Northbound Processing (Pty) Ltd v The South African Diamond and Precious Metals Regulator (Case Number: 2025-072038), the court reinforced the zero-tolerance approach to fictitious citations in court papers. In this matter, the Acting Judge discovered that Northbound's papers contained fictitious case citations which appeared to have been generated by AI. Northbound's junior counsel was relatively forthright and apologetic, and sought to distinguish the facts from the Mavundla matter on, among other things, the lack of intent to mislead the court and the fact that senior counsel had not relied on the "hallucinated" cases in oral argument. The outcome nevertheless remained the same: a referral to the Legal Practice Council for investigation. It appears that the judicial intent is to ensure that professional duties and standards remain attached to the person, irrespective of the use of AI.
Regardless of intention or mitigating circumstances, our courts have been clear in their approach: a fundamental breach of professional duty arises when a legal practitioner refers the court to fictitious case law, and the use of AI is not an acceptable excuse. It is a reasonable inference that this principle would be transposed onto other professions and would, simply put, yield the same conclusion. It remains the ultimate responsibility of the professional to ensure the authenticity of an AI-generated response. The professional, and not the AI system itself, would be held liable for harm caused by a flawed AI-generated response.
Despite this key principle, AI technology is rapidly advancing, and we are likely to see such systems integrated into professional decision-making, exposing professionals to claims where a system's incorrect output causes harm to others. It is essential that professionals consider the associated risks and discuss any risk transfer mechanisms (such as Professional Indemnity (PI) cover), subject to policy terms and underwriting criteria, with their intermediary.
Article written by Omar Ismail – Claims Specialist