The judge and the bot

Artificial intelligence has made its way into the Indian legal system. Should we worry?

Good morning! In March this year, the Punjab and Haryana High Court consulted ChatGPT while hearing a bail plea. But should AI be used in decision-making? For one, AI only reproduces patterns in the data it is trained on. It also lacks the empathy and ethical reasoning that go into real-life decision-making. Add to that the fact that India lacks a data privacy law, and we have a recipe for disaster. What are the potential effects of AI on the legal system? We dive in.

The Signal is now on Telegram! We've launched a group — The Signal Forum — where we share what we’re reading and listening to through the day. Join us to be a part of the conversation!

If you enjoy reading us, why not give us a follow at @thesignaldotco on Twitter and Instagram.

Time stands still in India’s corridors of justice. The nearly 50 million pending cases across taluka courts, district courts, High Courts, and the Supreme Court (SC) are testament to this. But on March 27 this year, the Punjab and Haryana High Court took a leap into the future.

The case was Jaswinder Singh @ Jassi vs. State of Punjab and another. The hearing was a bail plea by “Jassi”, the defendant accused of brutal assault resulting in death. The nature of the crime and Jassi’s history of two murder charges meant that bail would likely be denied. It was. Justice Anoop Chitkara reasoned that the parameters of bail change when there’s an element of cruelty.

Nothing about this pronouncement is dramatic. Bail hearings seldom are. But Chitkara then turned to the world’s most famous chatbot and typed:

What is the jurisprudence on bail when the assailants assaulted with cruelty?

ChatGPT’s response (2023 LiveLaw (PH) 48, pdf) made its way into the judge’s post-reasoning declaration. Chitkara stressed that this reference is “only intended to present a broader picture on bail jurisprudence, where cruelty is a factor”. But everyone went to town with it. After all, it was the first time an Indian court had used an artificial intelligence (AI) bot during a hearing.

On the opposite flank of the country, the Chief Justice of the Orissa High Court praised Chitkara’s use of ChatGPT. The drudgery of “digesting documents of 15,000 pages” isn’t lost even on DY Chandrachud, the Chief Justice of India, who says AI is rife with possibilities. Chandrachud chairs the SC’s e-Committee, which oversees India’s e-Courts project. Conceived in 2005, e-Courts’ mission is to make justice delivery transparent, accessible, and productive for all stakeholders. Digitising the lumbering leviathan that’s the Indian legal system is a critical part of this project. The 2022-2023 Union Budget earmarked ₹7,000 crore (~$854 million) for its third phase, whose vision document mentions AI. And yes, the SC has an Artificial Intelligence Committee.

To clarify, even the most enthusiastic legal minds are categorical that technology cannot be judge and jury. For good reason. The most accurate AI tools may recall every law, but their logic is inductive. Human logic is deductive. Dispensing justice—or just enforcing the law—requires ethical reasoning, reflection, even empathy.

The current crop of AI chatbots sometimes “hallucinate”, producing convincing but rubbish outputs, because they generate statistically plausible text rather than verified facts. The result is fabricated medical research, fake news, and even made-up laws. Biased training data causes damage of a different kind: COMPAS, a tool used by American courts to conduct risk assessments on defendants, was literally racist. Poland’s Ministry of Justice was directed to make its algorithms public after its automated system disproportionately assigned cases to certain judges.

Little wonder that a recent study suggests India could be hurtling towards “organised irresponsibility”.

On May 1, research collective Digital Futures Lab (DFL) released an 83-page report on the implications of automation, as envisioned by the e-Courts project. Smart Automation and Artificial Intelligence in India's Judicial System: A Case of Organised Irresponsibility? (pdf) warns about “futures that are harmful”. Why? Bias, lack of accountability, and absent policy.

Garbage in, garbage out

ChatGPT, built by Microsoft-backed OpenAI, runs on a large language model (LLM) called GPT-4. GPT-4 reportedly has about one trillion parameters, the internal weights a model learns from its training data. The size of the model and the quantity and quality of that training data shape the responses you get as a user. Earlier versions of GPT were fed the internet, which as we know is full of unsubstantiated, prejudiced claims and authoritative sources alike. That variance explains why chatbots sometimes spout incomplete or inaccurate information.

This is fundamental to understanding how biased datasets can lead to potential injustice. According to the DFL report, “Models based on biased judgements would… solidify bad precedents”.

This isn’t about prompting ChatGPT during a hearing as much as it is about automating processes that will “learn” from First Information Report (FIR) data, for example. Police records are peppered with prejudicial characterisations of marginalised groups. On top of that, the Indian judiciary itself has a male and upper-caste skew, which in turn shapes decision-making.

Algorithms reinforce patterns, however problematic. The example of the risk-assessment tool COMPAS—developed in a country with a disproportionate number of incarcerated Black people—proves this.
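To make that concrete, here is a minimal, entirely hypothetical sketch: a toy classifier trained on skewed historical bail outcomes learns to treat group membership as a predictor, even when the case facts are identical. The data, features, and outcomes below are invented for illustration and not drawn from any real records.

```python
# Entirely hypothetical sketch: a toy model trained on skewed historical
# bail outcomes reproduces the skew. No real records or features are used.
from sklearn.linear_model import LogisticRegression

# Features: [number_of_prior_charges, belongs_to_marginalised_group]
# Labels:   1 = bail denied, 0 = bail granted
X = [[0, 0], [1, 0], [2, 0], [0, 0],   # majority-group applicants
     [0, 1], [1, 1], [2, 1], [0, 1]]   # marginalised-group applicants
y = [0, 0, 1, 0,                       # historical outcomes: mostly granted
     1, 1, 1, 0]                       # historical outcomes: mostly denied

model = LogisticRegression().fit(X, y)

# Two applicants with identical case facts, differing only in group membership
print(model.predict_proba([[1, 0]])[0][1])  # predicted probability of denial
print(model.predict_proba([[1, 1]])[0][1])  # noticeably higher, for the same facts
```

Swap the eight toy rows for millions of FIRs and court orders, and the same mechanism plays out at scale.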

Then there’s the issue of quality. India has a federal legal system. The SC, even as the highest court of the land, can’t impose processes on lower courts. Because of this, there are variances in what each state even considers to be a case. There’s no standardisation in how clerks record complaints, petitions, and what have you.

The outcome is unorganised data collection and management. Lower courts record information in templates different from higher courts.

“The lack of standardisation can lead to errors. As a data scientist, you’d have to apply certain cleaning techniques to remove the ‘noise’, and convert .pdf, .json, etc. into machine-readable language,” explains Ashutosh Modi. Modi is an assistant professor with the computer science and engineering department at IIT Kanpur and part of the LegalEval team that codified some Indian cases.
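What does that cleaning look like in practice? Here is a rough sketch of the general idea, with entirely hypothetical field names and templates (real court records vary far more than this):

```python
# Hypothetical sketch of normalising court records that arrive in different
# templates, so downstream tools can read them in one consistent schema.
import json

def normalise_record(raw: dict) -> dict:
    """Map differently-named fields from different court templates to one schema."""
    field_map = {
        "case_no": "case_number", "Case No.": "case_number",
        "petitioner": "petitioner_name", "Name of Petitioner": "petitioner_name",
        "dt_filing": "filing_date", "Date of Filing": "filing_date",
    }
    clean = {}
    for key, value in raw.items():
        canonical = field_map.get(key.strip())
        if canonical:
            clean[canonical] = str(value).strip()  # strip stray whitespace "noise"
    return clean

# Two records using different templates collapse into one machine-readable schema
district = {"Case No.": " 123/2021 ", "Name of Petitioner": "A. Kumar"}
high_court = {"case_no": "CRM-M-4321-2022", "petitioner": "B. Singh"}
print(json.dumps([normalise_record(district), normalise_record(high_court)], indent=2))
```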

Throw in India’s linguistic diversity, and you have a nightmare scenario for experts who need a uniform format (let alone a uniform language) to train AI models. The country has automated legal-tech translation tools such as the SC’s SUVAS (short for Supreme Court Vidhik Anuvaad Software), EkStep Foundation’s Project Anuvaad, and Agami’s Jugalbandi, but no benchmarks for Indian legal languages. And why would there be? India’s IT minister Ashwini Vaishnaw literally said the Centre has no plans to regulate AI yet.

Tl;dr: any guideline on how courts should or shouldn’t use automated tools (which in themselves are fed on skewed data) will be futile without a national policy. As Justice Madan Lokur, former judge of the Supreme Court of India and current judge with the Supreme Court of Fiji, tells The Intersection:

“AI cannot be left unregulated, otherwise we will have uncanalised decisions that can play havoc with the lives of people. For example, how do you believe one witness and disbelieve another?”

If that seems like a stretch, consider how judges deliberate. They use deductive reasoning in a system that upholds rationality. In other words, objective information influences a judge’s subjective inference. Imagine a case where a briefing counsel, whose job it is to prepare briefs to strengthen a client’s case, references biased or inaccurate data and legal precedents. That data could trigger undesirable outcomes.

“The SC e-Committee says it doesn’t want algorithms to play a role in adjudication [decision-making]. But adjudication is a cycle of processes that include research, reading case commentary, etc. Meaning you’re indirectly nudged in a particular direction,” says Ameen Jauhar, who leads the Centre for Applied Law and Technology Research at the Vidhi Centre for Legal Policy.

“I’m not saying humans don’t make mistakes. But we should acknowledge the compounding effect automation bias [the human tendency to favour automated systems] and AI hallucinations can have on justice,” he adds.

Private versus public

Who is accountable for judicial algorithms?

The clerks who upload error-riddled information? The machine-learning designer who didn’t do a risk assessment? Or an opaque judiciary that doesn’t share information with the public?

If you guessed “nobody”, you’re right. Because India has no data privacy law, let alone an AI policy. The two are intertwined: the data you feed (or don’t feed) to LLMs is contingent on privacy rights, or the lack thereof. India is still splitting hairs over the Draft Digital Personal Data Protection Bill, 2022. Laws like the Protection of Children from Sexual Offences Act dictate that names be redacted. Yet lower court records continue to disclose identities. As the DFL report notes, named entity extraction tools have been used to identify and profile women who’ve filed Section 498A IPC (domestic abuse) cases.
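To show how low the bar is, here is a minimal sketch using an off-the-shelf named-entity recogniser. This is a generic illustration with an invented snippet of text, not the specific tooling the DFL report refers to:

```python
# Hypothetical sketch: an off-the-shelf NER model can pull party names out of
# un-redacted free text in a few lines. The order text below is invented.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

order_text = (
    "The complainant, Sunita Devi, has alleged offences under Section 498A IPC "
    "against her husband Ramesh Kumar, resident of Patna."
)

for ent in nlp(order_text).ents:
    if ent.label_ == "PERSON":
        print(ent.text)  # names the law may require to be redacted
```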

This matters because private companies can bid for e-Courts projects. As an example, Delhi-based startup Mancorp Innovation Labs developed a chatbot named Samwad for the Jharkhand High Court. It also created automation software for the Patna High Court, and SUPACE (Supreme Court Portal for Assistance in Court's Efficiency), an AI-driven research portal for Supreme Court judges. The DFL report found that Mancorp’s deal with the Jharkhand High Court was retrospectively approved. So you have a for-profit company that accessed sensitive legal data, coupled with unknowns over how it was awarded the tender in the first place.

If you’re a private company, your algorithms are trade secrets. You aren’t required to reveal details about your training data. Your source code is shielded from public auditing and impact assessment, even if it’s led to serious repercussions.

“Issues of privacy will arise, for example in cases of sexual abuse, matrimonial disputes, official secrets and so on. Issues of data mining and then data theft will also arise.”

—Justice Madan Lokur, Former judge of the Supreme Court of India

Not that public institutions are transparent. The SC e-Committee doesn’t release minutes of its meetings, and even lawyers don’t know how it functions. Two legal experts, who wrote that the e-Courts project has never been audited, filed an RTI to access the final proposal for Phase III (not to be confused with the vision document). Their application was rejected.

“We have no idea how SUPACE and SUVAS are used. SUVAS was supposed to be beta tested in several courts, for a period of six months, to improve accuracy. But then Covid happened. And there's been no update since. The last publicly-available information is from 2019, and even that comes from blog posts,” a lawyer tells The Intersection on condition of anonymity.

“It’s ridiculous that we have to grapple for such information. The Phase III vision document endorses data disclosures and making procurement details public. But at the end of the day, those are just recommendations,” they add.

Since Mancorp Innovation Labs developed SUPACE, which is essentially an AI-driven research tool for SC judges, we contacted co-founder Manthan Trivedi to discuss the issue of algorithmic opacity…

…and found that Mancorp no longer works in this space.

Over a phone call, Trivedi says his company moved on to “other directions for commercial engagement” because it ran out of funds. Investors didn’t want to touch a public service contractor with a bargepole. This was before ChatGPT exploded onto the scene, so there was no reference point for how AI can be deployed in virtually every profession.

Alright, what about the automated tools it developed for the judiciary? Who’s responsible for their management?

Trivedi says Mancorp did the R&D and handed over its products to the Jharkhand High Court and Patna High Court in 2018, and to the SC in 2021. He claims there were no annual maintenance contracts.

“Since they’ve taken over, it’s up to them to disclose the code or hire third party auditors,” he concludes.

“In the absence of auditors and regulators, I think it’s the AI designer’s responsibility to develop a risk assessment framework. That is, to warn users about potential risks and fallouts, reliability errors, and possibilities of discrimination.”

—Shubhashis Banerjee, professor of computer science at Ashoka University and professor of computer science and engineering, IIT Delhi

Once again, we come full circle to the question: who should be held accountable for judicial algorithms?

The answer is straightforward for Sachin Malhan, co-founder of the non-profit legal tech organisation Agami India. Agami operates OpenNyAI, a mission to develop free and open-source software such as Jugalbandi (a chatbot offering information about government welfare schemes in regional languages) and the AI-assisted Judgement Explorer.

Because Agami’s work is open source and geared to be interoperable with government platforms, anyone can scrutinise it, provide feedback, and build on it. So in effect, the responsibility would lie with whoever is applying these open source tools for specific use cases.

“A collaborative process between technologists, lawyers, and policy experts is key to the output. [AI] Models can also be constrained to high-fidelity sources rather than large, glitchy databases,” Malhan says. “But to begin with, AI should be used in low-risk situations.”

That means no ChatGPT in a criminal case.
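What might Malhan’s idea of constraining models to high-fidelity sources look like? Here is a minimal, generic sketch of retrieval from a vetted corpus, not how Agami’s tools actually work; the two corpus entries and the threshold are invented for illustration:

```python
# Hypothetical sketch: answer only from a vetted corpus, and refuse to answer
# when the corpus has no good match, instead of letting a model guess.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curated_corpus = [
    "Summary of a Supreme Court ruling on the parameters for granting bail.",
    "Summary of a High Court ruling on maintenance in matrimonial disputes.",
]

vectoriser = TfidfVectorizer().fit(curated_corpus)
corpus_vectors = vectoriser.transform(curated_corpus)

def answer(query: str, threshold: float = 0.2) -> str:
    scores = cosine_similarity(vectoriser.transform([query]), corpus_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "No vetted source found; refusing to answer."
    return curated_corpus[best]

print(answer("What did the court say about bail?"))   # returns the bail summary
print(answer("Draft me a new criminal law."))          # refuses: nothing matches
```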

ICYMI

Poll time: In nations with democratic systems, however flawed they might be, forecasting the end of long-ruling autocrats by vote is an exercise fraught with wishful misreadings. There are too many ifs and buts, and every twitch of democratic institutions kindles hope, often false. It happened ahead of China’s 2022 Party Congress when analysts spotted chinks in Xi Jinping’s political shield. Xi emerged from the Congress stronger than ever, all doubts crushed by his forceful determination to consolidate power. A similar performance is expected from Recep Tayyip Erdogan in Turkey. Yet this Foreign Policy analysis suggests that Erdogan’s 20-year rule might end on Sunday. With good reason. The opposition is fairly united behind a single challenger, CHP (Republican People’s Party) leader Kemal Kilicdaroglu, and the nationalist front is fragmented. The economy is in crisis with record inflation and the Turkish lira losing value. But Erdogan has tried to counter it by steeply hiking wages and cutting interest rates, which has seen domestic consumption boom. And his approval ratings are at levels Joe Biden would envy.

Cut and thrust: The West wants to choke off every avenue for Russia to generate income. The latest target is diamonds, largely mined from the frozen swathes of Siberia. The Group of Seven developed countries agree that diamonds originating in Russia have to be banned from reaching western markets, the Financial Times says. Key to that is evolving an effective mechanism to track and trace the stones. That in itself will be tough, considering the secrecy surrounding diamond movement; the stones’ value makes that secrecy somewhat necessary from a safety point of view too. There is another issue as well. A large share of Russian diamonds is processed in India, and stopping that trade would mean extensive job losses in a country struggling to employ the millions joining its workforce every year. Previously, stones cut and polished in India were classified as of Indian origin. The traceability initiative could hit Indian processors hard. They are trying to delay the inevitable by asking to start with large stones before applying track-and-trace norms to smaller ones, the bread and butter of India’s diamond industry.

Here comes the sun police: Not everyone can fool Warren Buffett and US taxpayers in one go. But Jeff Carpoff did. This story in The Atlantic traces the rise and fall of his company DC Solar and its ‘invention’: essentially solar panels on wheels. Carpoff used tall tales about a tough childhood to worm his way into the inner circles of big-name investors and celebrities who backed his sham enterprise. But his biggest source of profits became the US government. He used a lawyer’s help to draw up a Ponzi scheme, selling his solar generators at ridiculous prices to companies that could recoup 30% of the cost through government tax credits for renewable energy. DC Solar would sell generators for 30% of a highly inflated cost (all to maximise their clients’ tax credits) and promised to service a loan for the other 70% on their behalf. To make these payments, it sold more and more generators (which, eventually, were never even made) and invented leases to misrepresent money from new generator sales as money from actual lessees. As the sentencing judge for Carpoff put it, DC Solar was “selling air”. By the end of it, the company had ripped off the US government to the tune of nearly $1 billion.

Chickens come Holmes to roost: Elizabeth Holmes is back. But it’s Liz now, as Holmes tries to scrub some of the legacy of her crimes running the fraudulent blood-testing startup Theranos. Holmes is about to begin a prison sentence of over 11 years for fraud. But this profile in The New York Times paints a sympathetic portrait of a contrite woman. Yet she laughs when her husband imitates the deep voice she used while lying about Theranos to investors, business partners, employees, and most of all—doctors and patients. ‘Liz Holmes’ says ‘Elizabeth’ is a character she created to be taken more seriously in a deeply sexist world. She also insists she’s turned over a new, authentic leaf as a family woman, a mother of two trying to shake off all the “negative” perception of her. But the most infuriating bit is that Holmes insists that if she’d only had more time, she would have achieved Theranos’ vision of creating revolutionary (and medically impossible) blood-testing technology.

Sunset hours?: South Korea’s chaebols, a word that roughly translates to ‘wealth clan’ in Korean, include the likes of LG, Hyundai, and Samsung. These family-run business conglomerates dominate the country’s economic landscape. In their prime, chaebols were seen as the way to put the country’s development in the fast lane. The experiment worked, but public perception of these businesses has been far from positive in recent times. For instance, Samsung heir Lee Jae-yong won a presidential pardon for his crimes in 2022, much like other high-profile business tycoons. The government is doing its bit to contain the hegemony. But the systemic rot is showing. This article in 360info makes the case.

Weather warning: Australia’s 2019-2020 bushfires torched more than 46 million acres of land. But they may also have had a role in triggering the rare three-year La Niña that ran from late 2020 to early 2023, bringing intensified droughts and a greater chance of hurricanes. Turns out, smoke from the bushfires acted as cloud condensation nuclei, which made clouds brighter and reflected more sunlight back into space. Climate scientists at the US National Center for Atmospheric Research believe this isn’t the first such instance, and it certainly won’t be the last. The La Niña would likely have occurred anyway, but without the fires it would have been weaker. As the world warms, there will be more bushfires, and possibly more of these smoke-driven events drifting across the Pacific. This story in Wired highlights how wildfires may have a greater influence on the climate than earlier thought.
