By Ashton Naidoo / Partner / Mooney Ford Attorneys
Artificial intelligence did not enter the South African legal profession with a well-crafted court directive. There was no legislative trigger and no regulator issuing guidelines. It arrived informally, through everyday use. Lawyers started asking AI tools to reword indemnities, summarise lease agreements, and polish clumsy correspondence. At first, it felt like convenience. It sounded accurate. It looked professional. That illusion lasted until someone asked whether any of it was actually right.
In conversation with a longstanding client in the commercial property sector, they mentioned that they had recently used an AI tool to draft a response to a section 345 notice. They advised that it produced a three-page answer that appeared legally sound, and that they had forwarded it directly to the opposing attorneys. Only after receiving a follow-up demand did they seek legal advice, at which point it became clear that the AI-generated response had incorrectly referenced statutory timelines, relied on an outdated version of the Act, and failed to properly address the actual contents of the original notice. The client admitted that they had not read the full output before sending it. They had relied on the fact that it sounded formal and made legal references.
That experience is not isolated, and it speaks to a growing issue. AI tools are not lawyers. They are not researchers. They are not trained on South African law or tested against our courts’ jurisprudence. What they offer is speed, syntax, and surface-level confidence. They do not offer legal reasoning. They cannot be cross-examined. They do not distinguish between persuasive authority and bad law. Yet they are being used, every day, in ways that carry professional and commercial risk.
The South African courts have already encountered the consequences. In Northbound Processing (Pty) Ltd v SA Diamond Regulator, the Gauteng High Court reviewed submissions that included AI-generated authorities that did not exist. In Mavundla v MEC for Cooperative Governance and Traditional Affairs, KwaZulu-Natal, a similar incident occurred, and the court issued a clear warning that reliance on AI-generated material without verification may amount to professional misconduct. These are not hypotheticals; these are reported judgments that now form part of our case law.
Despite these developments, South Africa has no legislation that regulates artificial intelligence in any specific way. The Artificial Intelligence Policy Framework, released by the Department of Communications and Digital Technologies in 2024, sets out principles for the ethical development and use of AI. It covers issues such as transformation, skills development, digital inclusion, and fairness. However, it is not binding. It does not create offences, confer rights, or establish a regulator with enforcement powers. It is a vision statement, not a statute.
The closest we come to a legal constraint is section 71 of the Protection of Personal Information Act, which prohibits decisions that have legal or similarly significant effects from being made solely on the basis of automated processing. The section provides for limited exceptions and requires that affected individuals be given the right to contest the decision and request human intervention. While the provision appears to address AI-driven decisions in theory, it has not yet been tested in court or enforced by the Information Regulator in any publicised matter. Until it is applied in context, it remains uncertain how far its protection extends.
What is now required is a legislative framework that recognises the growing role of AI in legal and commercial processes and that provides certainty for those who use it, clarity for those affected by it, and accountability for those who rely on it without supervision. The use of AI in practice is not inherently negligent. It becomes problematic when practitioners accept the output as accurate without applying professional judgment. The courts have already made it clear that the responsibility for verification cannot be outsourced.
Until proper regulation is in place, the profession must apply the same principles it always has. If you write it, you must stand by it. If you send it, you must be prepared to explain it. If you rely on it, you must be able to prove that it is legally sound. Artificial intelligence is a tool, not a defence.