Judiciary, chatbots and hallucinations: As Gujarat High Court draws a red line on AI, why human judgment is non-negotiable
In a notable move, the Gujarat High Court has effectively drawn a thick boundary around the use of artificial intelligence in judicial functioning. On 4th April, the court issued a policy that does not merely regulate AI but confines it with precision.
The core of the policy is the set of restrictions it imposes on the use of AI. According to the policy, AI shall never be used for judicial reasoning, order drafting, judgment preparation, bail or sentencing considerations, or any substantive judicial process. These restrictions are not limited to direct use. Even indirect influence of AI on findings of fact, findings of law, or operative orders has been barred by the court, even if the output is later reviewed by a judge.
In a way, the court has clearly stated that adjudication cannot be assisted, influenced, or shaped by machines in any form. Furthermore, the policy also restricts the use of AI for sorting evidence, classifying documents, organising evidentiary material, assessing credibility, filtering relevance, or even summarising depositions and testimony.
In effect, any task that involves the evaluation or categorisation of proof must remain exclusively within the human domain. This is significant because global trends are moving in the opposite direction, with AI increasingly deployed for precisely such functions.
Furthermore, the policy places strict limits on the data that may be shared with AI tools. According to the court, no confidential or sensitive information, including names of parties, witness details, case records, legal strategies, or personal data such as health, financial, biometric, or caste-related information, can be shared with public AI tools. Even where the High Court approves an enterprise AI tool, the use of such data remains heavily restricted. The court has clearly signalled its concern about privacy protection and data leakage into external systems.
The court has also prohibited reliance on AI-generated citations or legal references without independent verification from authoritative sources. The court has clearly stated that judicial officers must go back to the source and verify all details before using the citations.
Courts have explicitly acknowledged that AI systems can generate plausible but non-existent judgments. In simple terms, no matter how developed or advanced an AI system is, it can always hallucinate and give details that do not exist in the real world. Therefore, no citations given by a chatbot can be trusted unless verified from a recognised legal database.
The policy has also established a strict accountability framework. Any output that is generated by AI, once signed or authenticated by a judge or court officer, becomes the sole responsibility of the signatory authority. The use of AI cannot be cited as a defence in cases of error, misconduct, or professional negligence. Furthermore, the court has given strict instructions to legal assistants and research staff to disclose any use of AI to the concerned judge so that there is transparency in the system.
Notably, the court has left a narrow path open for the use of AI tools. According to the policy document, AI may be used for legal research, including retrieval of judgments, identification of precedents, extraction of ratio decidendi, and preliminary analysis of statutory provisions. However, the policy confines such use to an assistive role, and every output must be verified against primary sources.
AI is also permitted for administrative and non-adjudicatory functions. These include automation of IT-related tasks, preparation of training materials, drafting of circulars and notices, and management of internal workflows. In addition, AI may be used to improve the language, structure, and clarity of draft documents, provided that the substantive legal reasoning remains entirely that of the judge.
Certain operational uses are also allowed, such as anonymised case allocation, scheduling, and statistical reporting, where decisions are based purely on objective metadata rather than subjective inputs.
In short, the court has not rejected the technology but disciplined its usage in the judicial system.
Supreme Court flags a growing ‘menace’
The policy did not appear out of thin air. The Supreme Court of India has already sounded the alarm. In a recent observation, a Bench of Justices Rajesh Bindal and Vijay Bishnoi described the growing use of AI-generated, non-existent judgments as a “menace” that is now rampant across courts, not only in India but globally. The court’s concern arose from a case where submissions appeared to have been generated using AI tools such as ChatGPT, as they included references to judgments that did not exist.
The court noted that such AI-generated judicial documents waste the court’s time and undermine the integrity of proceedings. Furthermore, it reiterated a principle that is now emerging as judicial consensus, that AI may be used to assist research, but there is a corresponding and non-negotiable duty on judicial officers to verify each and every output.
In a similar observation in February, a bench led by Chief Justice of India Surya Kant, along with Justices Joymalya Bagchi and BV Nagarathna, said it had been increasingly noticing pleadings that appear to have been drafted with the help of AI tools. During the hearing, the Chief Justice said the court had been informed that some lawyers have started depending on AI for drafting petitions. He made it clear that such a practice, if not properly verified, can mislead the court.
When machines fabricate, courts pay the price
The consequences of AI-generated judicial documents have already been seen inside courtrooms. One of the earliest warnings came when Michael Cohen, a former attorney for US President Donald Trump, admitted to passing along AI-generated case citations that turned out to be chatbot hallucinations.
In India, a similar case came up before the Delhi High Court, where a dispute among homebuyers became a talking point. One of the petitions was found to be full of fabricated case laws and imaginary citations. The petition referred to specific paragraphs of a judgment that did not exist.
Although the judgment contained only 27 paragraphs, the petition cited paragraphs 73 and 74. The inconsistencies led to the withdrawal of the petition. The court noted that several cited precedents did not exist at all, while others were misquoted.
The illusion of intelligence
There is a misconception that AI “knows” the law. In reality, it predicts text based on patterns. When those patterns are incomplete or ambiguous, it fills the gaps by generating plausible-sounding information, even if that information is incorrect or far removed from reality. This is what is now widely referred to as hallucination.
In the legal world, plausibility is not enough. A single fabricated citation can alter the course of a case, mislead a court, and waste valuable judicial time. Unlike other professions, the cost of error in law is not merely technical. It has the potential to destroy someone’s life completely and irrecoverably.
The courts are now recognising this inherent limitation and prohibiting AI from entering the decision-making chain.
Why human intervention is non-negotiable
The judicial system operates on accountability. Every order passed, every observation made, and every finding included in the proceedings carries the authority of a human judge whose reasoning can be scrutinised, appealed, and held to constitutional standards. AI, no matter how sophisticated, bears no such responsibility.
A chatbot can simply say “sorry” and move on without an iota of responsibility. Even its makers bear little responsibility, since these systems are presented as continuously learning and improving. Any mistake can be brushed off as an “oops moment”, no matter how costly it is in the real world.
Relying on AI even for a part of judicial reasoning dilutes accountability. In the worst-case scenario, it introduces a layer of opacity. An AI system produces an output based on opaque training data. The reasoning it provides, therefore, cannot be interrogated in the way human reasoning can.
This is why the Gujarat High Court’s approach is highly significant. By imposing personal liability on users and rejecting AI as a defence, the court has reinforced the principle that responsibility cannot be outsourced.
The same logic applies to lawyers. The duty to verify citations, test propositions, and ensure accuracy is fundamental to advocacy. AI may accelerate research, but it cannot replace professional judgment.
A tool, not a substitute
None of this suggests that artificial intelligence has no place in the legal ecosystem. Used correctly, it can assist in research, improve efficiency, and reduce administrative burden. It can help lawyers navigate vast databases of case law and support courts in managing caseloads. However, the distinction must remain clear. AI is a tool, not an authority. It can assist the mind, but it cannot replace it.
The road ahead
AI is continuously evolving. It is not going away and will only penetrate deeper into day-to-day life. The question is not whether it should be used, but how it will be controlled. Preserving the credibility of the judiciary demands that judicial authorities ensure technology remains subordinate to human judgment. That will require not only policies but also discipline within the Bar and the Bench.
The legitimacy of the judicial system rests not merely on efficiency but on trust: the trust that people continue to place in the courts even when a case drags on for years. Such trust cannot be built on machine-generated reasoning. It must continue to rest on the human mind, tested, accountable, and guided by law.