Technology Law Analysis: Proposed Rules for AI-Generated Content Amid Deepfake Concerns: Impact on Platforms and User Experience

October 31, 2025



  • Draft amendments to the IT Rules, 2021 propose due diligence measures for intermediaries that enable or facilitate AI-generated or modified content hosted on their platforms.
  • ‘Synthetically Generated Information’ to be prominently labelled or embedded with a permanent unique metadata or identifier, covering at least 10% of the content.
  • ‘Significant Social Media Intermediaries’ to obtain user declarations, deploy tools to verify synthetically generated content and ensure labelling.
  • Stakeholder feedback invited by MeitY until November 06, 2025.

Background

On October 22, 2025, the Ministry of Electronics and Information Technology (MeitY) released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) to propose additional due diligence obligations for online intermediaries relating to ‘synthetically generated information’ hosted on their platforms.

According to the explanatory note1 (Explanatory Note), these proposed amendments respond to concerns over the growing misuse of technologies to create deepfakes and other synthetic media on social platforms, which are being used to spread misinformation, damage reputations, manipulate or influence elections or commit financial fraud.

Proposed Amendments

Scope of Synthetically Generated Information (SGI)

The proposed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (Draft Amendments)2 define SGI as “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true”.3

Since the definition depends on whether content “reasonably appears” to be authentic or true, it leaves scope for subjective interpretation and the risk of misclassification based on perception rather than fact. Clear-cut parodies may not constitute SGI and may therefore fall outside the requirement of being labelled or embedded with a permanent unique metadata or identifier, but the treatment of satire, adaptations, re-enactments and embellished content remains ambiguous. The onus of making such content determinations may fall on intermediaries.

Further, because classification (and labelling) as SGI turns on whether the information reasonably appears to be authentic or true, the label itself may convey a sense of veracity or truthfulness to the user when, in reality, the content may not have been verified or fact-checked.

Additionally, while deepfakes ordinarily entail audio and visual manipulation, the definition could also be read to include text-based or other routine digital content. This could include, for example, text generated or corrected using AI tools or autocorrect, minor image edits such as filters, and virtual reality (VR) content created to resemble reality. As most online content involves some level of algorithmic creation or modification, this broad framing may result in accurate or legitimate information being treated as SGI merely because of the tools used to produce it. In several cases, this may extend beyond the intended purpose of the Draft Amendments.

‘Information’ under the IT Rules, 2021 to include SGI

This amendment clarifies that all existing references to “information” under the IT Rules, 2021, where such information is used to commit an unlawful act, will now also include SGI. Obligations under Rule 3(1)(b) (taking reasonable efforts to prevent users from posting harmful content), Rule 3(1)(d) (removing content upon government notification or court order), and Rules 4(2) and 4(4) (additional due diligence obligations for SSMIs, including tracing and monitoring certain unlawful content) now explicitly extend to SGI. This means that intermediaries must handle SGI used in unlawful acts in the same way they handle any other content hosted on their platforms.

Due Diligence Obligations on Intermediaries

Intermediaries that enable, permit, or facilitate the creation, generation, alteration or modification of information as SGI are required to ensure that such SGI is prominently labelled or embedded with a ‘permanent unique label, metadata or identifier’ (Identifier), identifying that the information is SGI created, generated, modified or altered using the computer resource of the intermediary. The Identifier must be prominently visible or audible, such that:

  1. For visual content, the Identifier must cover at least ten percent (10%) of the surface area of the visual display; and
  2. For audio content, the Identifier must be audible for the first ten percent (10%) of the audio’s duration.4

The intermediary must also not enable the modification, suppression, or removal of the Identifier.5

The obligation on an intermediary appears to be limited to labelling or embedding metadata or identifiers in SGI created or modified using its own computer resource. It may therefore be interpreted that an intermediary is not required to label SGI that was created elsewhere and merely hosted on its platform.

Labelling and Identifiers

The rationale behind requiring the Identifier to cover 10% of the content is not specified and appears arbitrary. Applying a uniform threshold across varying lengths and formats of content may be impractical and could disrupt user experience. For example, a 10-minute audio clip would require the Identifier to play for the entire first minute, and a social media display picture constituting SGI would need 10% of its surface area labelled as such.
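
To illustrate the mechanics of these thresholds, below is a minimal sketch in Python; the function names and sample inputs are illustrative assumptions, not drawn from the Draft Amendments.

    # Minimal sketch of the proposed 10% Identifier thresholds under draft
    # Rule 4. Function names and sample inputs are illustrative assumptions.
    IDENTIFIER_FRACTION = 0.10  # "at least ten percent (10%)"

    def min_label_area_px(width_px: int, height_px: int) -> float:
        """Minimum surface area (in pixels) a visual Identifier must cover."""
        return IDENTIFIER_FRACTION * width_px * height_px

    def min_audio_label_seconds(duration_s: float) -> float:
        """Minimum time (in seconds) an audible Identifier must play at the
        start of an audio clip."""
        return IDENTIFIER_FRACTION * duration_s

    # A 10-minute (600-second) clip: the Identifier plays for the first minute.
    print(min_audio_label_seconds(600))    # 60.0
    # A 1080 x 1080 display picture: 116,640 of its 1,166,400 pixels labelled.
    print(min_label_area_px(1080, 1080))   # 116640.0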

Given the wide range of benign content that may qualify as SGI (as discussed above), such broad labelling could also contribute to desensitisation or notification fatigue among users. Accordingly, the blanket requirement for labelling or embedding SGI with a permanent unique metadata or identifier may be replaced with a risk-based and format-appropriate approach.

Visible or audible labels may be mandated for ‘high-risk’ SGI, while ‘low-risk’ SGI may be embedded with permanent metadata or identifiers that may be machine-readable but may not be visible to the human eye.
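
By way of illustration only, the sketch below (in Python, using the Pillow imaging library) shows how a machine-readable marker could be embedded in an image’s metadata without altering what a viewer sees. The field names (“sgi”, “sgi-tool”) are hypothetical, since the Draft Amendments prescribe no metadata schema or standard.

    # Illustrative only: embeds a machine-readable SGI marker in a PNG's
    # metadata text chunks. The field names are hypothetical; the Draft
    # Amendments prescribe no metadata schema or standard.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def embed_sgi_marker(src_path: str, dst_path: str) -> None:
        image = Image.open(src_path)
        meta = PngInfo()
        meta.add_text("sgi", "true")                    # machine-readable flag
        meta.add_text("sgi-tool", "example-generator")  # hypothetical provenance field
        image.save(dst_path, pnginfo=meta)

    def read_sgi_marker(path: str) -> dict:
        # PNG text chunks are exposed via the image's .text attribute
        return Image.open(path).text

    # embed_sgi_marker("picture.png", "picture_labelled.png")
    # read_sgi_marker("picture_labelled.png")  # {'sgi': 'true', 'sgi-tool': ...}

Metadata chunks of this kind are, however, easily stripped when content is re-encoded or re-shared, which reinforces the need for the clear technical standards discussed below.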

The coverage and presentation of labels should be determined taking into account the nature, duration, and format of the content, such that users are effectively informed that the information is synthetically generated without disrupting their overall viewing or user experience.

Finally, there are no technical standards or frameworks prescribed for embedding such Identifiers. Absent clear implementation protocols, intermediaries may adopt inconsistent or incompatible labelling practices, undermining the regulatory objective.
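
As a purely hypothetical illustration of what a prescribed framework could standardise, an identifier record might carry fields such as the following; none of these fields appear in the Draft Amendments.

    # Purely hypothetical identifier record; the Draft Amendments prescribe
    # no schema, field names, or serialisation format.
    import json

    identifier_record = {
        "sgi": True,
        "generated_by": "example-model",      # hypothetical provenance field
        "modified_with": ["example-filter"],  # hypothetical edit history
        "declared_by_user": True,             # cf. the SSMI declaration duty below
        "label_coverage": 0.10,               # fraction of surface area or duration
    }
    print(json.dumps(identifier_record, indent=2))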

Additional Due Diligence Obligations for Significant Social Media Intermediaries (SSMIs)

The Draft Amendments introduce new obligations for SSMIs6 with respect to SGI hosted on their platforms. For content displayed, uploaded, or published on their platforms, SSMIs will need to:

  • Require users to declare whether the information they upload or publish is SGI.

    Notably, the draft does not prescribe any penalty for users who fail to make or accurately disclose such declarations.

  • Implement ‘reasonable and appropriate’ technical measures (including automated tools or other suitable mechanisms) to verify the accuracy of user declarations based on the nature, format and source of the information uploaded / sought to be uploaded.

    However, the requirement to use “reasonable and appropriate” technical measures is not clearly defined. What counts as reasonable may differ from one platform to another, depending on factors such as size, available technology, and resources (a naive cross-check is sketched after this list).

  • Where verification or technical confirmation establishes that the information is synthetically generated, the SSMI must ensure that such content is clearly and prominently labelled with an appropriate notice indicating that it is synthetically generated.7
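
As a naive illustration of one possible verification mechanism, a platform could cross-check a user’s declaration against an embedded marker, such as the hypothetical “sgi” metadata field sketched earlier; any “reasonable and appropriate” measure in practice would need to be considerably more robust (for example, actual content analysis).

    # Naive cross-check of a user declaration against an embedded marker.
    # Relies on the hypothetical "sgi" PNG metadata field sketched earlier;
    # real verification tools would require far more robust content analysis.
    from PIL import Image

    def declaration_consistent(png_path: str, user_declared_sgi: bool) -> bool:
        """Return True if the user's declaration matches the embedded marker."""
        text_chunks = getattr(Image.open(png_path), "text", {})
        embedded_sgi = text_chunks.get("sgi") == "true"
        # A mismatch in either direction may warrant review by the platform.
        return embedded_sgi == user_declared_sgi

    # declaration_consistent("picture_labelled.png", user_declared_sgi=True)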

Further, it is proposed that if an SSMI becomes aware of, or is found to have knowingly permitted, promoted, or failed to act upon, SGI in violation of these requirements, it will be deemed to have failed to exercise due diligence.8 It is further clarified that the SSMI is responsible for taking ‘reasonable’ and ‘proportionate’ technical measures to verify the correctness of user declarations and to ensure that no SGI is published without the appropriate declaration or label.9

While the Draft Amendments aim to enhance transparency and accountability in the dissemination of AI-generated or modified content, the obligation to verify and label SGI raises significant practical and operational issues. Platforms may face constraints arising from the technical limits of automated analysis, challenges in verifying the source or authenticity of uploaded content or whether it is intended to be authentic or true, and potential privacy or data-access concerns. Moreover, it is unclear whether an SSMI’s obligation would be deemed fulfilled merely by implementing such measures, and how liability would be assessed if a technical tool fails to accurately verify a user declaration.

Additionally, the threshold for when an SSMI can be deemed to have “become aware” or to have “knowingly permitted” the publication of SGI is unclear, making this obligation excessively broad. It could encompass a wide range of situations, such as the platform discovering content on its own, or being alerted through user complaints, court orders, or government notifications. For example, if a user flags a post as potentially AI-generated, it is unclear whether the platform is considered “aware” as soon as the report is received, or only after it has investigated and confirmed that the content is indeed synthetically generated.

Suggested Approach

The Draft Amendments aim to address deepfakes and misinformation by expanding the due diligence obligations of intermediaries. However, instead of regulating SGI through intermediaries, regulatory focus should be placed on the actual generators, publishers, or bad actors responsible for such content. Generative AI models and platforms do not appear to have been placed under obligations similar to those imposed on intermediaries under the Draft Amendments. Placing the burden of regulation solely on intermediaries, which primarily act as conduits or hosts for information, may dilute accountability and complicate enforcement. Blanket labelling and Identifiers may also hamper the user experience of consuming online content. Adopting a risk-based, format-appropriate approach (based on the nature, duration, and type of SGI) for labelling and embedding Identifiers in SGI may be preferable for both platforms and users.

The Draft Amendments adopt a broad scope, both in defining SGI and in identifying intermediaries that may be subject to regulation. A phased implementation roadmap and clear guidance may assist stakeholders in ensuring compliance in a practical and feasible manner.

As of the date of this piece, the Draft Amendments remain open for public consultation until November 06, 2025.10

 

Authors

Sanjana Shrivastav, Prerana Reddy and Aaron Kamath

You can direct your queries or comments to the relevant member.


1Explanatory note to the Draft Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in relation to synthetically generated information, available at:

https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf (last accessed on Oct 30, 2025).

2Draft Amendments, available at:

https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf.

3Rule 2(i), Draft Amendments.

4Rule 4, Draft Amendments.

5Ibid.

6Rule 2(1)(v) of the IT Rules, 2021 defines an SSMI as a social media intermediary having a number of registered users in India above such threshold as notified by the Central Government. Under MeitY’s notification F. No. 16(4)/2020-CLES dated 25th February 2021, the threshold for an intermediary to qualify as an SSMI is 50 lakh (5 million) registered users in India.

7Rule 5, Draft Amendments.

8Ibid.

9Ibid.

10Feedback/comments on the draft rules may be submitted to MeitY rule-wise, in MS Word or PDF format, via email to itrules.consultation@meity.gov.in by November 06, 2025.


Disclaimer

The contents of this hotline should not be construed as legal opinion.

This hotline does not constitute a legal opinion and may contain information generated using various artificial intelligence (AI) tools or assistants, including but not limited to our in-house tool, NaiDA. We strive to ensure the highest quality and accuracy of our content and services. Nishith Desai Associates is committed to the responsible use of AI tools, maintaining client confidentiality, and adhering to strict data protection policies to safeguard your information.

This hotline provides general information existing at the time of preparation. The Hotline is intended as a news update and Nishith Desai Associates neither assumes nor accepts any responsibility for any loss arising to any person acting or refraining from acting as a result of any material contained in this Hotline. It is recommended that professional advice be taken based on the specific facts and circumstances. This hotline does not substitute the need to refer to the original pronouncements.

