By: Shiniel Naidoo / Candidate Attorney / Mooney Ford Attorneys  

Lessons from Mavundla v MEC Department of Co-Operative Government and Traditional Affairs and Others (Case No. 7940/2024P)

 

An unlikely troublemaker in a Pietermaritzburg (PMB) High Court political case – the generative artificial intelligence (AI) model, ChatGPT.

 

In early January 2025, the honourable Judge Bezuidenhout handed down a judgment regarding AI use in legal proceedings. Legal counsel for the applicant, Godfrey Mavundla, had allegedly used generative AI to draft court documents containing inaccurate legal information. The suspected culprit, ChatGPT, had provided the firm with seven fictitious AI-generated cases that could not be found in any legal record.

The result: costs ordered against the firm, referral to the Legal Practice Council for further investigation, and claims of misleading the court. And… an updated apology from ChatGPT in follow-up searches:

“I apologize for any confusion caused by my earlier responses. After thorough research, I have not been able to locate a specific case titled Pieterse v The Public Prosecutor 2014 (3) SA 551 (GP). It appears that such a case may not exist in the South African legal records.”

The court’s decision to focus on the law firm’s conduct instead of addressing the use of unregulated AI allows ChatGPT to continue producing incorrect results without challenge. This poses a risk to the entire South African legal profession, and to members of the public who choose to rely on accessible AI for legal information.

 

SKILLED AI USE

AI is expected to have a major impact on the South African (and global) workforce. However, job displacement – as opposed to outright job replacement – is anticipated. This highlights the need for professional upskilling: improving technical capacity to work with and understand AI technologies.

Both the applicant and the respondent in this case were unaware of the incorrect AI-generated submissions until the errors were pointed out by the court. Automation can alleviate resource and time limitations. But at what cost? A lack of technical awareness may prevent the realisation of AI’s benefits, result in harmful outcomes or cause unnecessary expenditure – as this case demonstrates.

 

COUNSEL’S RELIANCE ON AI

Importantly, ChatGPT use was neither definitively proven nor conceded by the parties (para 50). Given the nature of the technology, the mere likelihood of reliance on AI cannot be accepted as evidence of AI use in itself. An acknowledgement of AI use may cause a break in the chain of causation if it was appropriately justified and due diligence was observed. This would prevent the assignment of complete responsibility to the law firm; instead, a review of the system and its knowledgebase would be warranted.

South African policy documents have traditionally categorised AI as part of the emerging technologies heralding the Fourth Industrial Revolution (4IR). This understanding is beneficial, as AI is primarily a data-driven technology. For example, OpenAI partners with major tech companies, internet and print outlets to access critical data supplies that form the knowledgebase from which ChatGPT generates outputs.

Unless the court meticulously scrapes the entire knowledgebase, there is currently no pathway to definitively prove that ChatGPT is the originator of the incorrect legal information. This is especially true given proprietary claims over the technology, which prevent system review. In addition, we are unable to determine whether the fictitious cases were obtained directly from a single source, or whether generalisation by the algorithm combined different cases to produce new AI-generated case law (some refer to this as AI hallucination). This prevents appropriate assignment of responsibility.

 

COURT’S EVIDENCE OF AI USE

The judgment provides insight into the court’s reasoning and technical evidence of AI use (para 50), and also serves as legal precedent for proving AI-generated evidence in future court proceedings. In the court’s brief experiment, two of the seven fictitious case citations were entered into ChatGPT, which confirmed their existence and provided additional identifying information. A leading question regarding the content of one case was then asked, and ChatGPT answered in the affirmative.

Generative AI mimics human ingenuity, creating new content in response to human prompts. However, the output is not truly novel; it is a derivative of the information found in its knowledgebase. The prompt – i.e. the inputted sequence of words – is the reference/instruction that determines the correlated, data-driven output.

The system’s primary function is to retrieve and summarise – rather than validate – information related to the input sequence. When the court entered the fake citations into the AI system during the initial search, it effectively guided the AI to produce, or find within its system, an output based on those specific inputs.

Thus, regardless of whether the legal firm used AI or not, this method of proving AI-generated content is unconvincing, due to the self-referential nature of the findings and the lack of independent verification.
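The independent verification missing from the court’s experiment could, in principle, be as simple as checking a citation against an authoritative record set rather than asking the model about itself. The sketch below illustrates the idea in Python; the case index and citation strings are hypothetical stand-ins, and a real check would query an authority’s own database (such as SAFLII), not a language model:

```python
import re

# Hypothetical extract of a trusted case-law index (e.g. compiled from an
# authoritative source such as SAFLII). A real implementation would query
# the authority's records directly.
TRUSTED_CASES = {
    "Mavundla v MEC Department of Co-Operative Government "
    "and Traditional Affairs and Others (7940/2024P)",
}

# Crude sanity check on the shape of a citation: "<party> v <party>".
CITATION_PATTERN = re.compile(r".+ v .+")

def verify_citation(citation: str) -> bool:
    """Return True only if the citation appears in the trusted index.

    The key point: a fabricated citation fails this check regardless of
    how confidently a generative model asserts that the case exists.
    """
    citation = citation.strip()
    if not CITATION_PATTERN.fullmatch(citation):
        return False
    return citation in TRUSTED_CASES

# The fictitious citation from the judgment fails independent verification.
print(verify_citation("Pieterse v The Public Prosecutor 2014 (3) SA 551 (GP)"))  # False
# The (hypothetically indexed) real case passes.
print(verify_citation(
    "Mavundla v MEC Department of Co-Operative Government "
    "and Traditional Affairs and Others (7940/2024P)"
))  # True
```

The design point is that verification and generation are separated: the trusted index is the source of truth, and the model’s own confirmation plays no role in the check.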

 

CONTINUED CHATGPT INACCURACY

While an intensive investigation may seem a waste of state resources, the spread of this ‘cautionary tale’ and its possible adverse effect on 4IR progress has a more tangible impact on society. Additionally, opening up critical discourse on the legalities surrounding responsible AI use, and the authority accorded to AI outputs, will ultimately benefit the workforce.

To highlight the ongoing risks AI poses to responsible legal research, the fake citation was re-entered into ChatGPT, and a source was requested.

In its response, ChatGPT included non-existent case facts; incorrectly claimed that Chetty v Perumaul (referenced in the present judgment at para 41) cites the fictitious case; hyperlinked, but did not mention, the present judgment; and stated the following:

“I apologize for any confusion in my previous responses. Upon further research, I have found that the case Pieterse v The Public Protector 2014 (3) SA 551 (GJ) is indeed a recognized legal precedent. This case has been cited in subsequent judgements, such as Chetty v Perumaul. (SAFLII)”.

The lack of oversight, regulation and challenge of foreign AI technologies is deeply concerning. We face a dilemma: either sideline technological advancements and fall behind competitors or use AI with the risk of relying on potentially inaccurate results.

Crucially, we must ask: why is ChatGPT (a foreign-based generative AI) allowed to – incorrectly – advise on high-risk, South Africa-specific matters such as legal precedent?

South Africa is developing regulatory frameworks for AI technologies. We intend to keep you apprised of these developments as they unfold… continue to follow and watch this space!