Tech bodies urge MeitY to rethink deepfake rules to prevent confusion and heavy compliance burdens

Key Points:

  • MeitY deepfake rules raise concerns among tech industry groups over broad scope and high compliance costs.

  • MeitY deepfake rules could unintentionally cover harmless digital edits or filters used by creators.

  • MeitY deepfake rules may burden intermediaries and lead to overlapping content regulation frameworks.


MeitY deepfake rules – Industry calls for clarity and narrower definitions

The MeitY deepfake rules have become a major talking point across India’s technology sector. Leading industry bodies including the Internet and Mobile Association of India (IAMAI), Nasscom, and the Business Software Alliance (BSA) have urged the Ministry of Electronics and Information Technology (MeitY) to revisit and refine its proposed deepfake regulations under the Information Technology Rules, 2021.

The industry’s key concern is that the draft rules, though well-intentioned in their aim to curb misinformation and synthetic content, are currently too broad in scope. The framework could end up classifying ordinary or harmless edits as “synthetic content,” making compliance nearly impossible for smaller creators and platforms. These groups have requested that MeitY define “synthetically generated information” more precisely, ensuring that legitimate uses of AI for creativity, entertainment, and business are not unfairly restricted.

The MeitY deepfake rules aim to tackle the growing problem of AI-generated misinformation — particularly deepfakes that manipulate visuals or audio to mislead people. However, the industry has warned that a sweeping interpretation of such content could impact millions of users who rely on basic editing tools, AI filters, or even marketing design software. Without clear differentiation, simple visual enhancements could fall under the same category as malicious deepfakes designed to deceive.


MeitY deepfake rules – Definitions raise confusion and concern

According to the draft amendments released by MeitY on October 22, 2025, “synthetically generated information” (SGI) refers to any content that is artificially or algorithmically created, generated, or modified. It further mandates that intermediaries hosting such content must label or identify it, either through visible disclosures or embedded metadata. The MeitY deepfake rules also require these platforms to make “reasonable efforts” to ensure that users are informed whenever they interact with AI-generated or modified content.

While this may sound straightforward, tech bodies argue that the definition of SGI is currently too vague. They warn that the rules could unintentionally include everything from light photo filters to video transitions, memes, or harmless creative edits. For instance, an artist enhancing an image using AI color correction or a user applying beauty filters on social media might technically fall under the “synthetic” category.

IAMAI and Nasscom have emphasized that India’s digital economy thrives on creativity, innovation, and user engagement — much of which involves minor content modifications. If every such piece of content required labeling or compliance checks, it would not only slow down digital operations but also create unnecessary fear among creators. The MeitY deepfake rules, in their current draft, could therefore impose an extremely heavy compliance burden, especially on startups, small firms, and social media creators who may lack the resources to meet these standards.


MeitY deepfake rules – Risk of overregulation and compliance overload

The MeitY deepfake rules have also sparked conversations about the potential overlap of content regulation frameworks in India. Currently, online intermediaries already follow a series of obligations under the IT Rules, 2021 — including grievance redressal, user verification, and content takedown protocols. Adding another layer of deepfake labeling could result in a complex and fragmented regulatory landscape.

Industry experts warn that overregulation could reduce India’s attractiveness as a global digital innovation hub. If compliance becomes too demanding, companies may scale back operations or avoid developing new AI-powered tools altogether. IAMAI has suggested that instead of applying the same rule to all kinds of AI-modified content, MeitY should focus on intent and impact — distinguishing between harmful manipulation and harmless enhancement.

Moreover, platforms hosting millions of user-generated posts daily would find it nearly impossible to manually verify or label every single image, clip, or piece of text. This would require expensive automated tools and additional manpower, driving up operational costs. The MeitY deepfake rules could therefore unintentionally create a heavy compliance burden that disadvantages Indian firms compared to their global counterparts.

BSA, representing global software and tech companies, has also raised concerns about interoperability. If India’s labeling framework differs too much from international standards, it could complicate data exchange and content moderation processes across regions. The body recommends that MeitY align its rules with global AI governance frameworks to ensure consistency and ease of implementation.


MeitY deepfake rules – Call for collaboration and balanced governance

As the deadline for public feedback, recently extended to November 13, approaches, industry representatives continue to engage with MeitY in the hope of building a balanced and practical regulatory model. The MeitY deepfake rules, they agree, are a necessary step toward addressing deepfake misuse, but require refinement to avoid overreach. The tech community believes that rather than restricting innovation, India should adopt a collaborative framework that promotes responsible AI usage while protecting citizens from misinformation.

Several industry leaders have proposed that MeitY work closely with digital experts, AI researchers, and content creators to define risk categories for synthetic media. For instance, clearly malicious or deceptive content could fall under “high risk” and require labeling or removal, while artistic or entertainment-related AI edits could be considered “low risk” and exempt from heavy regulation. This approach would maintain creative freedom while ensuring accountability for misuse.

Nasscom’s submission reportedly highlights the need for transparency and user awareness rather than rigid compliance. It suggests educational initiatives that help people recognize and report harmful deepfakes instead of enforcing blanket restrictions. The MeitY deepfake rules, according to these bodies, must strike a balance between innovation and protection — ensuring that India’s growing AI ecosystem continues to flourish responsibly.

The discussion around synthetic content is not unique to India; many countries are drafting or testing similar policies to handle AI-generated material. However, experts agree that India’s decision will have a significant global impact, given its large digital population and rapidly growing AI economy. The next step will determine whether India becomes a leader in balanced AI governance or faces pushback for overregulation.