Taming the animal called AI in its many avatars will clearly remain a work in progress.

A few months ago, Raanjhanaa, a 2013 romance set against the backdrop of student politics, landed in cinemas again.
But there was a big difference: In the original, the hero -- Kundan, played by Dhanush -- dies.
In this version, Kundan miraculously opens his eyes as his love interest, Zoya, played by Sonam Kapoor, sits by his hospital bed.
The new happy ending was cooked up entirely by artificial intelligence.
Though the re-released version of the film barely moved the box office, it stirred a controversy -- the new climax was imagined by neither director Anand L Rai nor writer Himanshu Sharma.
The use of AI to create an audio or video clip that appears 'reasonably authentic' raises larger questions about the potential of the technology to reimagine movies, plots, characters, books, and music.
The anxiety is not unfounded: The other side of 'reimagine' is misuse.
You could put words in the mouth of a political leader, for instance.
Over the last 12 to 18 months, the audio and video generation capabilities of general-purpose large language models (LLMs) and special-purpose small language models (SLMs) -- the difference is in size and ease of deployability -- have increased manifold.
Such clips, loosely classified as deepfakes under the broad umbrella of 'synthetically generated content', have triggered action from courts as well as the government.
Several high courts across the country have passed omnibus orders protecting the rights of a number of actors.
Though courts recognised the right to identity as far back as 1994, the threat posed by AI and deepfakes has made these protections far more urgent.
"The rise of AI has necessitated the courts' intervention to protect the identity and publicity rights of celebrities in order to prevent misuse of their personas in ads, on digital platforms, and in AI-generated content," said Niharika Karanjawala Misra, principal associate at law firm Karanjawala & Co.
The Ministry of Electronics and Information Technology has released a draft of the latest amendment to the Information Technology Rules, 2021, prescribing several measures to check deepfakes.
The government hopes to notify the rules by the end of this year.
In the draft, released on October 22, the ministry has suggested that all users, irrespective of the tool they use to generate synthetic content and the public platform they upload it on, declare clearly that AI was used to generate that content.
All such labels and disclaimers for AI-generated content must cover at least 10 per cent of the 'surface area' of the content being displayed.
In cases where the AI-generated content also has audio, the label or disclaimer should be present during the initial 10 per cent of the content's duration.
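As an illustration only, the two 10 per cent thresholds in the draft can be expressed as simple arithmetic. The function names and the flat surface-area interpretation below are assumptions for the sketch, not definitions from the draft rules.

```python
# Illustrative sketch of the draft's 10 per cent thresholds.
# Assumptions (not from the draft): 'surface area' is treated as plain
# pixel area, and the audio disclaimer spans the initial 10% of runtime.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area in pixels: 10% of the content's surface area."""
    return (width_px * height_px) // 10

def min_disclaimer_seconds(duration_s: float) -> float:
    """Audio disclaimer must run for the initial 10% of the duration."""
    return duration_s * 0.10

# Example: a 1920x1080 video that runs 120 seconds
print(min_label_area(1920, 1080))     # → 207360 pixels
print(min_disclaimer_seconds(120.0))  # → 12.0 seconds
```

For a full-HD clip, that works out to a little over two lakh pixels of label and, for a two-minute video, a disclaimer over the first 12 seconds.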
The policy move was necessitated as there was constant feedback from stakeholders across all forums, including parliamentarians, on the problem of deepfakes, Union Electronics and Information Technology Minister Ashwini Vaishnaw said.
Though the implementation of these rules could be a challenge, the intent of the draft seems positive, according to legal and policy experts.
"There is an increasing trend in AI deepfake attacks that clone someone's voice, appearance or otherwise. Given the 1 billion internet users in India, this is concerning and could have widespread ramifications," said Raja Lahiri, a partner at Grant Thornton Bharat.
Artistic expression or distortion?
Another crucial aspect of deepfake and synthetic content is the blurring of the thin line that separates artistic reimagination and unlawful distortion, legal experts said.
For example, in the case of Raanjhanaa and other films, copyright vests ownership of the film with the producer.
However, directors and actors retain only limited or undefined moral or performer's rights, said Moksha Kalyanram Abhiramula, managing partner at law firm La Mintage Legal LLP.
With the price of AI computing set to fall, celebrities -- including actors, politicians and sports stars -- are likely to become targets of deepfake videos that are bound to confuse the public, other experts said.
"India is a unique country where celebrities are treated as larger than life, and their personalities carry huge commercial value. If misuse goes unchecked, their endorsement model loses value," said Safir Anand, senior partner at law firm Anand and Anand.
Beyond likeness, deepfake audio and video also copy distinguishable actions, tone, and tonality, said Anindita Jaiswal Jaishiv, associate professor at BITS Law School.
"The expanse of protection is no longer confined to name or face likeness, but extends to any identifiable attribute like gestures or artefacts," she said.
The need to protect personal rights will accelerate further as a direct response to generative AI, which will fuel mass production of deepfakes and unauthorised commercial exploitation, Abhiramula said.
By ensuring there are clear and non-removable identifiers on AI-generated content, users will at least be able to make informed choices about the content they are engaging with, experts said.
"The next step should be to establish clear implementation standards and collaborative frameworks between the government and industry, to ensure the rules are practical, scalable, and supportive of India's AI leadership ambitions," said Mahesh Makhija, partner and technology consulting leader at EY India.
Some experts said the unchecked growth of AI tools that allow easy creation of synthetic audio and video content is set to be curbed, despite challenges in implementing the rules.
"Securing a (court) order will certainly help to quickly take down unauthorised content.
"The threat of legal consequences also disincentivises people from circulating such material," Misra said.
Other experts, however, said that the government will need to clarify several parameters for seamless and quick implementation of these rules.
"The law is well intentioned but, as always, there will be challenges with implementation.
"Enterprises will need to add processes, and likely costs, to correctly label all the AI-generated content they produce," Akif Khan, vice-president and analyst at Gartner, said.
Khan said clarification will also be needed on whether the government will perform its own analysis of published content to check whether the labelling is correct.
The challenges
"If, however, an SSMI (significant social media intermediary) does not label such content after becoming aware of its nature, it risks losing safe harbour.
"The challenge comes with tying the obligations to safe harbour protection," Kazim Rizvi, founding director of technology public policy think-tank The Dialogue, said. SSMIs are social media platforms with five million users or more.
Another implementation challenge could be asking platforms to certifiably verify user declarations on synthetic content, especially against sophisticated deepfakes, Rahul Sundaram, partner at law firm IndiaLaw LLP, said.
"While larger companies might absorb the costs, these obligations could place a significant burden on smaller Indian platforms, potentially putting them at a disadvantage in the market," Sundaram said.
The rules also let users off with no disincentive for false declarations, while placing the entire onus on the platforms -- creating an enforcement asymmetry that punishes platforms for user behaviour they cannot fully control, he said.
Taming the animal called AI in its many avatars will clearly remain a work in progress.
Feature Presentation: Ashish Narsale/Rediff