Govt tightens rules on AI-generated content, deepfakes


February 10, 2026 17:54 IST


Key changes include treating synthetic content as 'information': AI-generated content will be treated on par with other information when determining unlawful acts under the IT rules.

IMAGE: Kindly note that this image has been posted for representational purposes only. Photograph: Dado Ruvic/Reuters

The government on Tuesday brought in stricter obligations for online platforms on handling AI-generated and synthetic content, including deepfakes, saying platforms such as X and Instagram must take down within three hours any such content flagged by a competent authority or courts.


The government notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, that formally define AI-generated and synthetic content. The new rules will come into force from February 20, 2026.

Key Points

  • X, Insta must take down flagged content within 3 hours
  • New rules come into force from Feb 20, 2026
  • Synthetic content will be treated as 'information'

The amendments define "audio, visual or audio-visual information" and "synthetically-generated information", covering AI-created or altered content that appears real or authentic. Routine editing, accessibility improvements, and good-faith educational or design work are excluded from this definition.

Key Changes


Platforms must act on government or court orders within three hours, down from 36 hours, according to a gazette notification issued by the Ministry of Electronics and Information Technology (MeitY).

User grievance redressal timelines have also been shortened.

Labelling of AI content made mandatory

The rules mandate labelling of AI content. Platforms that enable the creation or sharing of synthetic content must ensure such content is clearly and prominently labelled and embedded with permanent metadata or identifiers, where technically feasible, the notification said.

Calling for a ban on illegal AI content, the notification said platforms must deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, non-consensual, or related to false documents, child abuse material, explosives, or impersonation.

Intermediaries cannot allow the removal or suppression of AI labels or metadata once applied, the notification added.
