India plans to mandate visible labels on AI-generated content, becoming one of the first major countries to propose quantifiable standards for identifying artificial media. The move aims to curb the growing threat of deepfakes, misinformation, and election manipulation, the government said Wednesday.

Under the draft proposal, AI and social media companies — including OpenAI, Google, Meta, and X — must ensure that AI-generated visuals, videos, and audio clips carry a clear label covering at least 10% of the image or video frame, or 10% of an audio clip's duration, making the content instantly recognizable to users as artificial.

The Information Technology Ministry also wants companies to ask users for declarations about whether their uploads are AI-generated and to employ automated labelling and metadata-tracing systems. Public consultation will run until November 6.
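The proposal does not specify how metadata tracing should work, but in the simplest case a platform could pair a content hash with a machine-readable label, so the AI-generated claim is bound to the exact bytes it covers. A minimal sketch of that idea, using only the Python standard library (the function, field names, and generator identifier here are illustrative assumptions, not part of the draft rules; production systems such as C2PA-style manifests would also cryptographically sign the record):

```python
import hashlib
import json

def make_provenance_record(content: bytes, is_ai_generated: bool, generator: str) -> str:
    """Build a minimal provenance manifest for a media file.

    Pairs a SHA-256 hash of the content with an AI-generated flag,
    so the label can be traced back to the exact bytes it describes.
    A real deployment would sign this record; this sketch does not.
    """
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": is_ai_generated,
        "generator": generator,  # hypothetical model identifier
    }
    return json.dumps(record, sort_keys=True)

# Example: label a stand-in image payload as AI-generated.
payload = b"\x89PNG...example bytes"
manifest = json.loads(make_provenance_record(payload, True, "example-model-v1"))
print(manifest["ai_generated"])  # True
```

Because the hash changes whenever the file changes, any re-encoding or edit breaks the link between label and content, which is why real provenance schemes embed signed manifests inside the file rather than alongside it.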

“The misuse of generative AI tools to spread misinformation or impersonate individuals has grown significantly,” the government said, calling for transparency and accountability across platforms.

Experts say the proposal would require AI firms to build detection at the point of creation, setting a precedent for global regulation. Dhruv Garg of the Indian Governance and Policy Project described it as “a bold first attempt to make AI transparency measurable.”

The announcement comes amid a surge of legal action over deepfake videos, including lawsuits by Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan against content that used their likenesses without consent.

India is an increasingly important market for the AI industry. OpenAI’s Sam Altman said earlier this year that India has become the company’s second-largest user base, with engagement tripling since 2023.