Legal Ethics, Education and AI: Where do we generate the line?

Kyle Janse

On 30 November 2022, a free-to-use generative artificial intelligence chatbot, ChatGPT, was launched and made available worldwide. The seemingly all-knowing chatbot took the world by storm, and its many possible uses were quickly explored and popularized. However, the bot's inherent dangers were exposed just as quickly, with many noting that it sometimes produces biased or inaccurate responses.

Artificial intelligence (AI) is once again at the centre of controversy after a recent judgment in the Pietermaritzburg High Court, in which Judge Elsje-Marie Bezuidenhout issued a scathing ruling against a law firm that had relied on fictitious case citations generated by an AI chatbot. The ruling, I would argue, also raises serious concerns about education.

In the matter of Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others, counsel for Philani Mavundla, erstwhile mayor of uMvoti Local Municipality and the appellant in the matter, relied on nine cases in appealing a decision made against him in an earlier matter, in which he had sought to interdict a special meeting convened to discuss his removal as mayor. While writing the judgment, Judge Bezuidenhout discovered that several of the citations were incorrect or could not be found in any of South Africa's law reports. Of the nine cases relied on by Mavundla's counsel, the judge and the law researchers employed by the Pietermaritzburg High Court could locate only two.

The judge later discovered that the submissions before the court had been drafted by a candidate attorney who, when questioned, stated that she had obtained the cases from her Unisa portal. The candidate was also specifically asked whether she had used AI and denied doing so. However, it was clear to all concerned that AI had been used in drafting the submissions and/or conducting the legal research. The firm was ordered to pay costs and was referred to the Legal Practice Council for further action.

Similarly, in 2023, a US attorney was fined for using ChatGPT for legal research and submitting fictitious case citations in a court filing. With the growing use of generative AI in professional spaces, it is time to consider both the limitations and the advantages that AI tools afford employees and employers.

There can be no denying that Large Language Models (LLMs) such as ChatGPT are fluent writers and can display apparent competence across a wide array of topics in differing professional environments. Whether it be the definition of res judicata in law or an explanation of string theory, they can confidently handle most questions thrown at them.

LLMs' answers to most questions come across as those of a human being with real competence in the subject under discussion, and all too often, users believe that is what they are getting. This is not the case. LLMs are trained on large datasets and answer the questions put to them by using probability to determine the next word in a text sequence.
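To make that abstract point concrete, consider the deliberately simplified sketch below. It is a toy illustration only: the vocabulary, probabilities, and function name are invented for the example, and a real LLM computes such probabilities over tens of thousands of tokens with a neural network rather than a hand-written table.

```python
import random

# Toy model of next-word prediction (all values invented for illustration).
# Given the context "The judge ruled in favour of the ...", a real LLM would
# assign a probability to every token in its vocabulary; here we hard-code
# a tiny table for just one context.
next_word_probs = {
    "plaintiff": 0.45,
    "respondent": 0.30,
    "applicant": 0.20,
    "banana": 0.05,  # implausible, but never impossible
}

def predict_next_word(probs: dict[str, float]) -> str:
    """Sample one continuation in proportion to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print("The judge ruled in favour of the", predict_next_word(next_word_probs))
```

The crucial point is that nothing in this process checks whether the chosen word, or the citation it forms part of, is true. The model simply selects a statistically plausible continuation, which is how confident-sounding but fictitious case citations come about.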

So while they are very good at giving generic and thorough answers, they lack the technical competency to apply their 'knowledge' in context or to cater for nuance. This makes them ill-suited to highly specialized professions in which general knowledge of a topic is not enough. A common saying in law is that for every rule there exists an exception. Finding these exceptions often requires deep research and thorough engagement with both the law as written and the law as interpreted by judicial officers, commonly known as precedent.

Even where LLMs are able to draw precedent out of cases or convey complex topics in simplified terms, they do so by relying on the interpretations of the authors of the journals, articles, and other texts on which they were trained. A trained professional knows that secondary sources may serve them well in gaining an understanding of a topic; however, they must go the extra mile and test the author's interpretation against the source materials themselves.

An additional concern raised by the case is the impact of generative AI on education. That a candidate attorney was brazen enough to submit AI-generated case citations to a court of law calls into question how educational institutions adopt, embrace, and teach new technologies such as LLMs.

Referencing is perhaps the most tedious exercise for most learners. Its purpose, however, cannot be overstated. It shows the educator that the learner engaged with the literature, developed an understanding, and synthesized the data in order to form an opinion or apply it in context. Utilizing LLMs undermines this mode of learning by outsourcing the researching, reading, and understanding of topics, such that a learner may simply copy and paste the generated text along with its accompanying references.

Scholars and professionals have been warned against the dangers of using such platforms for rigorous research and formal submissions. Failure by an educator to vet references and engage critically with learners' submissions may lead a learner to develop misplaced confidence in the LLM and, consequently, to rely on it continuously throughout their education without developing critical thinking skills.

This ultimately leads me to a preliminary conclusion: AI is a novel and useful tool; however, its application in professional and academic environments should be approached with extreme caution.

Kyle Janse is a researcher at the Centre for Analytics and Behavioural Change
