AI Can’t Replace Judgment: Bombay HC Quashes Tax Order

A Judicial Reality Check in the Age of AI

In a powerful reminder of the limits of artificial intelligence in governance, the Bombay High Court has overturned an income tax assessment that was based on AI-generated but non-existent legal precedents. The ruling, in the case of KMG Wires Pvt. Ltd., exposes the growing danger of depending on unverified AI tools for legal and administrative decisions.

The order, issued by the National Faceless Assessment Centre (NFAC), had attempted to raise the company’s declared income from ₹3.09 crore to ₹27.91 crore by citing fabricated case laws that appeared authentic but had no legal standing. Calling it a violation of natural justice, the bench of Justices B.P. Colabawalla and Amit B. Jamsandekar quashed the order and directed a fresh hearing—sending a clear signal that AI cannot replace human diligence and legal reasoning.

The Central Error: When AI ‘Hallucinates’ Law

At the heart of the controversy lies a phenomenon now known as "AI hallucination," in which artificial intelligence systems generate highly convincing but completely false content. In this case, the assessing officer relied on AI-generated case citations that sounded legitimate but could not be found in any recognized legal database.

These fictitious references were incorporated into a formal tax order without any verification of their authenticity, skewing the outcome and penalizing the taxpayer unfairly. The High Court noted that such unvalidated reliance on AI breaches fundamental legal principles of transparency, accountability, and the right to a fair hearing.

Legal observers argue that this case is not an isolated glitch but a warning about how quickly automation can undermine due process if not accompanied by human oversight. Similar incidents abroad—most notably in U.S. courts—have revealed how AI can invent citations, misleading even experienced professionals.

Broader Implications: The Double-Edged Sword of AI in Law

Artificial intelligence has rapidly become integral to India’s “faceless” governance model—from income tax assessments to e-court research and legal drafting. Its ability to analyze vast datasets and identify precedents has revolutionized efficiency. However, this case highlights the dark underside of unmonitored automation: errors masquerading as expertise.

AI may accelerate legal processes, but it lacks contextual understanding, ethical reasoning, and accountability—qualities central to justice delivery. Experts now stress that human verification must remain the final gatekeeper in every AI-assisted legal or administrative function.

Globally, courts and bar associations are revising standards for responsible AI use, requiring manual verification of AI-generated citations, disclosure of tool usage, and clear authorship accountability. India, too, faces the urgent task of crafting regulatory frameworks that blend innovation with caution.

Preventive Measures: Building Guardrails for Responsible AI Use

The Bombay High Court’s verdict effectively serves as a policy blueprint for all institutions integrating AI. Experts recommend several safeguards to prevent similar failures:

• Mandatory Human Verification: Every AI-generated legal reference must be verified by qualified professionals before use in official documents.

• Regulatory Oversight: The Supreme Court and relevant ministries should issue formal protocols for AI-assisted research, explicitly prohibiting reliance on unverifiable or "black box" outputs.

• Tool Certification and Training: Legal departments must maintain lists of approved AI tools, conduct annual accuracy audits, and train officers to recognize AI limitations.

• Transparency and Accountability: Every order using AI inputs should disclose the source, model, and version used. Human officers, not the software, must remain accountable for decisions.

• Data Integrity and Privacy: AI tools must comply with India's Digital Personal Data Protection (DPDP) norms, ensuring that training datasets and algorithmic changes are auditable and secure.

• Institutional Oversight: Interdisciplinary committees should monitor AI applications in public administration to quickly identify and correct misuse or systemic bias.

Technology as a Tool, Not a Decision-Maker

The Bombay High Court’s intervention serves as a landmark reminder that while AI can empower legal systems, human judgment remains irreplaceable. Efficiency cannot come at the cost of fairness, and automation must always serve—never supersede—the rule of law.

As India accelerates toward a digital-first justice system, the challenge lies in harnessing AI’s potential responsibly, with transparency, accountability, and ethical restraint. The message from the judiciary is unambiguous: trust technology, but verify—always.

(With agency inputs)
