“Your Lordship, I used an AI tool…” — and the Bench responded in the classic judicial way: not with alarm, but with careful scrutiny.

The courts have opened the door to AI. The real question is whether the profession will walk through it.

Artificial intelligence is already entering India’s judicial ecosystem through translation, transcription, research assistance, and court administration. Courts have signalled cautious acceptance. The real question now is not whether AI belongs in law, but whether the legal profession is ready to adapt and use it responsibly.

There’s a new genre of courtroom confession that didn’t exist a few years ago.

Not “I forgot to annex page 412.”
Not “My Senior is on his legs in another courtroom.”
But the soft, careful, slightly guilty:

“Your Lordship, I used AI.”

And you can almost see the Court do a quick mental checklist in real time:

  1. Is it accurate and independently verified?

  2. Was the record handled with the confidentiality it deserves?

That, in a sentence, is the judiciary’s actual posture on AI right now: not a ban, not a blind embrace; a careful and measured approach.

“Use it, but don’t you dare outsource responsibility.”

What judges are really allowing lawyers to do with AI

Courts aren’t trying to stop lawyers from using AI as a tool. They’re trying to stop lawyers from using AI as a source of truth.

The Delhi High Court said the quiet part out loud in Christian Louboutin v. Shoe Boutique: a ChatGPT response “cannot be the basis of adjudication of legal or factual issues.”

That one line has more or less set the tone. AI can be used for assistive work, certainly. But if an argument relies on it the way it relies on SCC Online or a reported judgment, it is essentially presenting speculation in the format of authority.

And then the Supreme Court’s own White Paper on AI and Judiciary (Nov 2025) does something even more useful: it spells out the implied courtroom rules in plain compliance language.

For lawyers, the White Paper’s “suggestive guidelines” are essentially:

  • Don’t delegate legal reasoning or case strategy to AI.

  • Protect confidentiality and privilege; don’t feed sensitive material into tools unless safeguards are adequate.

  • Be ready to explain, if asked by the court, whether AI was used and what verification you did.

  • If an AI-derived error made it into the record, correct it promptly, especially fake authorities or misquotes.

  • Accountability doesn’t move. It stays with the human lawyer.

In other words: AI can be your intern. It cannot be your citation.

If you want a simple mental model: courts are increasingly treating AI use by lawyers the way they treat anything that enters the record, with the same expectation of traceability, verification, and personal responsibility.

Judges aren’t just allowing AI. They’re using it too (carefully, and on grunt work first)

The fun irony is that while courts are warning lawyers against over-reliance, judges themselves have already started using AI, but in a very judiciary-shaped way:

low-risk tasks, high accountability, record-bound use, and lots of guardrails.

Madras High Court (Jan 2026): a quiet but important moment for Superlaw

For us at Superlaw, one moment this year felt particularly meaningful. In January 2026, the Madras High Court allowed the use of an AI-assisted system, Superlaw Courts, in an arbitration matter with the consent of the parties. For a team that has spent years thinking about how technology should responsibly fit inside the legal system, this felt less like a headline and more like a validation of a very specific philosophy.

The Court permitted Superlaw Courts to assist with something every litigator understands all too well: navigating dense records. The system was used only to locate, organise, and surface information within the documents already on record. Nothing more, nothing less.

And the boundaries the Court set were very clear, and frankly, they are exactly the boundaries we designed Superlaw around. The tool would not consult external sources, not draw conclusions, not assess credibility, not interpret intent, and not express legal opinions. Its role was simply to present what the record already contains, in a structured and searchable way.

If anything, the Madras High Court’s approach reinforces a principle we strongly believe in: AI can assist the court in understanding the record, but the thinking, reasoning, and decision-making must remain entirely human.

Or put simply: the judiciary is willing to accept AI as a record assistant, not as a judicial mind. And that is exactly the role Superlaw was built to play.

The Supreme Court is building “judge-assist” systems, not “judge-replace” systems

The White Paper describes SUPACE as an AI-driven platform meant to help judges manage complex caseloads by analyzing case records, surfacing relevant material and precedents, generating summaries, and organizing documents so judges spend less time on routine review.

Even the framing matters: it’s about reducing administrative and research load, not drafting outcomes.

And if one needs proof that the institution is focused on implementation rather than mere principle, the Supreme Court’s own “Important Links” page lists a notice relating to the deployment of AI tools for the transcription of court proceedings and oral arguments.

Kerala High Court: “Yes to AI transcription, no to AI decision-making”

The High Court of Kerala offers one of the clearest examples of responsible AI adoption within the judiciary.

On the usage side, the Court introduced Adalat.AI, an artificial-intelligence based speech-to-text transcription system, as the primary tool for recording witness depositions in trial courts across the state, effective 1 November 2025. The transcripts generated are uploaded to the District Court Management System (DCMS) in accordance with a Standard Operating Procedure (SOP) issued by the Court.

On the governance side, the Court’s policy on the use of Artificial Intelligence (AI) in the District Judiciary cautions against using general cloud-based Generative Artificial Intelligence (GenAI) tools for confidential case material. It also mandates verification of AI outputs, audit trails, and training, while drawing a clear boundary: AI tools shall not be used to arrive at findings, grant reliefs, or draft orders or judgments.

Individual judges have openly used ChatGPT but often as “extra research,” not a legal authority

Two famous examples show how this plays out in orders:

Punjab & Haryana High Court (2023) - Justice Anoop Chitkara used ChatGPT in a bail matter to get a broader picture of bail jurisprudence (and the reporting makes clear it wasn’t treated as binding authority).

Manipur High Court (2024) - in Md. Zakir Hussain v. State of Manipur, the Court recorded that since the State’s counter didn’t explain procedure, it was “compelled to do extra research through Google and ChatGPT 3.5” and then set out the information it collected.

So yes, judges are using AI. But usually in a way that makes clear that human judgment stays in the judge’s chair.

The Global Turn Toward AI in Justice Systems

Across the world, courts are gradually integrating AI into the administrative and information-heavy layers of judicial work. From AI-assisted research tools in U.S. courts to document analysis and case-management systems being tested in Europe and Asia, the focus is largely on improving efficiency, accessibility, and record management rather than replacing judicial reasoning. International guidance is also beginning to shape this shift. For example, UNESCO has issued guidelines on the responsible use of AI in courts and tribunals, emphasising transparency, human oversight, and safeguards around fairness and accountability. The broader legal ecosystem is also recognising that AI is no longer experimental; the American Bar Association has observed that it is rapidly becoming core infrastructure for legal practice and justice systems worldwide.

The momentum behind this shift is also reflected in investment trends: legal-tech and AI tools for the legal system have attracted significant global funding in recent years, with the sector witnessing a sharp surge in capital and investor interest as courts and legal institutions modernise their digital infrastructure.

Why the judiciary is being strict (and why it should)

Because AI has a uniquely dangerous talent: it can be wrong in a way that looks courtroom-correct.

The Supreme Court White Paper defines hallucination as output that is fabricated or incorrect while appearing coherent and persuasive, because models predict text rather than verify facts.

And now we have a steady stream of “phantom precedents” and “fake citations” episodes:

  • Andhra Pradesh High Court dealt with non-existent AI-generated citations in a trial court order and warned that AI should be limited to information gathering, not judgment drafting.

  • Gujarat High Court flagged a worrying trend of quasi-judicial orders relying on AI-generated citations and called for parameters/guidelines.

  • The Supreme Court has recently warned that orders based on AI-generated, non-existent judgments will amount to misconduct (reported March 2, 2026).

This is why judges keep returning to the same idea: the integrity of adjudication depends on verifiability. If you can’t verify it, it doesn’t belong in the reasoning chain.

The Madras High Court story is a sign of where things are heading: record-bound AI with explicit boundaries. That’s the lane Superlaw is built for. Superlaw’s core idea is converting the full record into Case Memory so context doesn’t fragment across pleadings, annexures, orders, transcripts, and exhibits, and then letting you ask questions or draft with outputs grounded in the record and quickly verifiable. The unit of work is the case, not one document, because litigation is a connected system, not a PDF collection.

The real question now

AI is already part of the judicial ecosystem. Courts are experimenting with it, regulating it, and making it clear that it belongs in assistive work around the record, not in the judge’s chair.

At Superlaw, that boundary isn’t a limitation; it’s the design brief. We built Superlaw to stay firmly inside the case record, organising sprawling files, connecting pleadings and annexures, and helping lawyers and judges find what matters faster, while always pointing back to the underlying source.

In other words, Superlaw doesn’t try to think like a lawyer or decide like a judge. It simply does what every good junior in chambers does best: know the record inside out and show you exactly where to look.

So the real question for the profession is this: are we ready to upskill ourselves and use AI responsibly within the standards the judiciary is setting?
