1. The real-world expansion of AI in the legal sector
The integration of artificial intelligence into legal practice is no longer a future prospect, but an established reality.
A telling example is the recent initiative by the European Patent Office (EPO), which has decided to expand the use of AI tools for the preparation of transcripts in oral proceedings. According to official information, following a pilot programme involving approximately 150 cases in 2025, the system enables:
– The automatic generation of transcripts from recordings
– Assistance in drafting minutes
However, the model adopted by the EPO retains human intervention in the final review and validation of the document.
These tools promise efficiency, cost reduction and faster access to information.
However, alongside these advantages, an increasingly significant risk is emerging: the “hallucinations” of artificial intelligence.
2. The problem of “hallucinations”
“Hallucinations” occur when an AI system generates information that appears plausible but is incorrect, inaccurate or completely fabricated.
In the legal sphere, this may involve:
– Citations of non-existent case law
– References to incorrect legal provisions
– Erroneous interpretations of applicable law
– Inaccurate legal translations
Unlike human error, these responses can appear highly convincing, increasing the risk of relying on them without verification.
AI “hallucinations” have begun to produce real legal consequences.
(1) United States: non-existent precedents and judicial sanctions
One of the most emblematic cases is Mata v. Avianca (2023), in which lawyers submitted pleadings containing AI-generated citations to non-existent case law. The court not only rejected the arguments but also imposed financial penalties on the lawyers.
The use of AI does not exempt lawyers from their duty to verify.
(2) Australia: Deloitte and “hallucinations” in legal reports
In 2025, Deloitte produced a report for the Australian government that contained:
– Non-existent academic citations
– Fabricated judicial citations
The firm had to refund part of its fees after the AI-generated errors were confirmed.
(3) Australia: disciplinary sanction against a lawyer
In another significant case, a lawyer was sanctioned after presenting AI-generated case law without verifying its existence, and was unable to identify the cited cases before the court.
(4) Case linked to Japan: Nippon Life v. OpenAI (2026)
In a recent case of particular relevance to Japanese operators, a US subsidiary of Nippon Life Insurance Company sued OpenAI in the United States.
The lawsuit alleges that ChatGPT provided legal advice to a claimant, encouraged the reopening of a previously closed case, and generated multiple unfounded procedural documents.
This conduct is said to have caused the insurer significant legal costs.
Consequently, the subsidiary is claiming approximately $300,000 in compensation for the damages caused by this conduct, as well as $10 million in punitive damages.
This last case raises a key question: can an AI system engage in the unauthorised practice of law?
3. Japan: Article 72 of the Lawyers Act and legal AI
In Japan, the debate on the use of artificial intelligence in the legal sphere has recently taken on particular significance. On 9 January 2026, the Ministry of Justice published a document entitled “Article 72 of the Lawyers Act and AI-based legaltech services”, which analyses the relationship between AI tools and the prohibition on the unauthorised practice of law.
Article 72 of the Japanese Lawyers Act prohibits unqualified individuals or entities from providing legal services in return for remuneration. In this context, the document does not take a prohibitive stance towards AI, but rather recognises its potential to improve efficiency in tasks such as contract review or the management of legal information.
However, it also stresses the need to clarify the criteria for distinguishing between the mere provision of information and legal advice proper. In particular, it points to elements such as the existence of a remunerated relationship, the presence of a specific legal conflict or issue, the degree of customisation of the analysis, and the involvement of a lawyer in supervising the service.
Furthermore, the document notes that a lack of regulatory clarity may have a deterrent effect on the development of innovative services, underlining the need to strengthen legal predictability in this area.
4. Conclusion: the lawyer as guarantor in the age of AI
AI brings not only efficiency, but also a new type of structural legal risk.
The example of the EPO demonstrates that the key lies not in avoiding AI, but in integrating it correctly with human supervision.
The role of the lawyer does not disappear, but is reinforced as a guarantor that what is legally correct is not replaced by what is merely plausible.
Satoshi Minami
Vilá Abogados
For more information, please contact:
2nd of April 2026