New Delhi, Nov 10 (UNI) Chief Justice of India BR Gavai today observed that the judiciary is well aware of the misuse of Artificial Intelligence (AI), including Generative AI (GenAI), and the potential risks posed by manipulated digital content.
The remarks came while the Supreme Court was hearing a public interest litigation (PIL) seeking directions to the Union government to frame a comprehensive policy or guidelines for regulating the use of GenAI in judicial and quasi-judicial bodies across the country.
“Yes, yes, we have seen our morphed pictures too,” the CJI said during the hearing. Addressing counsel for the petitioner, the CJI further asked, “You want it to be dismissed now or should we see it after two weeks?”
The bench then adjourned the matter, listing it for hearing after two weeks.
The petition, filed by advocate Kartikeya Rawal, has sought the enactment of a uniform legislative or policy framework to ensure the regulated adoption of GenAI in court processes. The plea has argued that while conventional AI functions within set parameters, Generative AI has the ability to create new content, including non-existent case law, which may mislead courts.
According to the petitioner, the “black box” nature of GenAI and its reliance on unsupervised neural networks could result in “hallucinations”, leading to fabricated legal references, biased outputs, or unpredictable interpretations that may undermine the guarantee of equality under Article 14 of the Constitution.
The plea states, “The skill of GenAI to leverage advanced neural networks and unsupervised learning to generate new data, uncover hidden patterns, and automate complex processes can lead to hallucinations, resulting in fake case laws, AI bias, and lengthy observations. Such arbitrariness may not be based on precedent but on a law that might not even exist.”
It further warns that GenAI systems may replicate or aggravate societal biases, especially where training data reflects discriminatory social patterns, thereby posing ethical and legal challenges.
The petition contends that if AI tools are to be integrated into judicial functions, transparency in data sources and accountability in data ownership must be ensured to avoid prejudice against marginalised communities.
The matter will now be taken up for further consideration when it is listed again in two weeks.
