The New Digital Frontier: Governing Deepfakes And Synthetically Generated Content Under New IT Rules

The rapid expansion of digital technologies, particularly artificial intelligence (AI) systems capable of producing highly realistic audio, video, and image-based content, has created new challenges for online safety, authenticity, and public trust. These developments prompted the Government of India to amend and expand the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to directly address the creation, circulation, and misuse of synthetically generated, manipulated, and fabricated information, and to ensure that the digital space remains safe, transparent, and governed by responsible practices.

Introduced on October 22, 2025, the draft Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 recognise that digital intermediaries increasingly serve as powerful distribution channels for AI-generated content, which can be weaponised for misinformation, impersonation, fraud, harassment, and large-scale manipulation of public opinion. As a result, the draft Amendments explicitly define synthetically generated information and impose obligations on intermediaries to detect, label, restrict, and promptly act upon such content to ensure a safer digital environment.

Objectives of the Draft Amendments

The core objectives of the draft Amendments are to monitor AI-generated content, including deepfakes, across digital media such as news and OTT platforms; to ensure higher due diligence by intermediaries, particularly major platforms with large user bases; and to establish a robust framework for labelling and marking synthetically generated information.

Synthetically Generated Information

To ensure clarity and enforceability, the draft Amendments introduce a new definition of “synthetically generated information”: information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true. This definition encompasses deepfakes, voice clones, AI-generated videos, synthetic images, and other forms of computationally produced data that an average user may perceive as real.

By establishing this definition, the draft creates a regulatory foundation for managing AI-generated content risks within the larger governance framework for intermediaries and digital publishers.

Due Diligence Duties of Intermediaries

The proposed draft requires intermediaries to observe additional due diligence measures intended to maintain a safe and lawful digital environment with regard to synthetically generated information.

This is in line with the requirements of the Digital Personal Data Protection Act, 2023, read with the Digital Personal Data Protection Rules, 2025, which mandate that Significant Data Fiduciaries undertake due diligence to verify that the technical measures, including algorithmic software, adopted by them for hosting, display, uploading, modification, publishing, transmission, storage, updating or sharing of personal data processed by them are not likely to pose a risk to the rights of Data Principals.

Obligations in Context of AI-Generated Content: The draft Amendments place obligations on intermediaries to ensure that their platforms are not used to disseminate harmful or misleading synthetically generated information. Intermediaries must publish clear terms of service and user agreements that prohibit users from hosting or sharing any content that misrepresents the identity of a person or falsely appears to depict a real individual engaging in acts or making statements they did not. Including synthetically generated information within the prohibited categories ensures that deepfakes and fabricated visuals cannot be shared without consequence. Intermediaries are required to inform users not only about illegal or harmful content, but also about the responsibilities attached to uploading or distributing manipulated information that could mislead viewers or violate their privacy.

Labelling of Synthetically Generated Content: Intermediaries must ensure that their platform interfaces clearly indicate that certain content is synthetically generated. The implicit expectation is that platforms adopt clear labelling practices, ensuring that synthetic media, when allowed, does not deliberately mislead users. Although the draft Amendments do not mandate prescriptive technical standards, they establish the foundational principle that users must not be deceived by fabricated media, thereby directing intermediaries to adopt suitable measures.

The draft Amendments require that intermediaries providing computer resources for the creation or modification of synthetically generated information must implement measures to ensure that all such information is clearly labelled or embedded with a permanent, unique label, metadata or identifier. Specifically, for visual content, this labelling must cover at least 10 per cent of the total surface area to ensure visibility and recognition.

In the case of audio content, the label must be present during the initial 10 per cent of the total duration, allowing listeners to quickly identify the nature of the content. The label or identifier must facilitate the immediate and unambiguous identification of the content as synthetically generated information, thereby promoting transparency and accountability in digital media creation. These requirements aim to give users the clarity needed to discern the authenticity and origin of the content they engage with, as the sketch below illustrates.
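To make the thresholds concrete, the following is a minimal sketch, not a prescribed implementation. It computes a full-width banner whose height yields at least 10 per cent of an image's surface area (using the Pillow imaging library) and the opening window within which an audio label must appear. The function names, label text, and banner design are illustrative assumptions; the draft Amendments fix only the coverage thresholds, not any particular technique.

```python
import math
from PIL import Image, ImageDraw  # Pillow, assumed available

LABEL_TEXT = "AI-GENERATED"  # hypothetical wording; the rules do not fix the label text

def add_visual_label(img: Image.Image, text: str = LABEL_TEXT) -> Image.Image:
    """Overlay a full-width banner covering at least 10% of the surface area.

    One of many possible compliant designs: the draft fixes the 10% coverage
    threshold, not the placement, colour, or wording of the label.
    """
    w, h = img.size
    min_area = 0.10 * w * h                     # 10% of total surface area
    banner_h = max(1, math.ceil(min_area / w))  # full-width banner, so height alone meets the area
    labelled = img.copy()
    draw = ImageDraw.Draw(labelled)
    draw.rectangle([0, h - banner_h, w, h], fill=(0, 0, 0))       # opaque banner strip
    draw.text((10, h - banner_h + 5), text, fill=(255, 255, 255))  # default Pillow font
    return labelled

def audio_label_window(total_seconds: float) -> float:
    """Length of the opening window (seconds) within which an audible
    disclosure must be present: the initial 10% of total duration."""
    return 0.10 * total_seconds

# Example: a 3-minute clip needs the label within its first 18 seconds.
assert audio_label_window(180.0) == 18.0
```

A real deployment would pair a visible overlay like this with embedded metadata or a provenance identifier, since the draft contemplates permanent, machine-readable labels as well.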

Additional Requirements for Significant Social Media Intermediaries

Significant social media intermediaries (SSMIs) have an expanded set of obligations, reflecting their large user base and consequential impact on public discourse.

Detection of Synthetically Generated Information: SSMIs bear greater responsibilities due to their scale and potential to amplify synthetically generated information. They must establish robust systems to detect and mitigate the spread of deepfakes, fabricated media, or digitally manipulated visuals. The draft Amendments require them to employ automated tools, wherever feasible, to identify such content, especially in areas where synthetic media poses high risks, such as elections, public health, inter-community harmony, or personal reputation. At the same time, this draft mandates human oversight to avoid unjustified removal of legitimate creative or satirical content, ensuring that content moderation reflects proportional and context-sensitive decision-making.
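By way of illustration of the "automated tools with human oversight" model, the following is a hedged sketch of a triage pipeline. The classifier score, thresholds, and decision categories are hypothetical; the draft Amendments call for proportionate, context-sensitive moderation but do not prescribe an architecture.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    ALLOW = auto()
    LABEL = auto()          # keep the content up, but mark it as synthetic
    HUMAN_REVIEW = auto()   # route to a trained moderator for a final call

@dataclass
class DetectionResult:
    synthetic_score: float   # 0..1 from a hypothetical deepfake classifier
    high_risk_context: bool  # e.g. elections or public health, flagged upstream

def triage(result: DetectionResult) -> Decision:
    """Route automated detections; thresholds here are purely illustrative.

    Automation flags and labels, but never auto-removes: high-confidence or
    high-risk items go to a human, avoiding unjustified takedowns of
    legitimate satire or creative work.
    """
    if result.synthetic_score < 0.5:
        return Decision.ALLOW
    if result.synthetic_score < 0.9 and not result.high_risk_context:
        return Decision.LABEL
    return Decision.HUMAN_REVIEW

print(triage(DetectionResult(synthetic_score=0.95, high_risk_context=True)))
# Decision.HUMAN_REVIEW
```

The design choice worth noting is that automation only flags and labels; removal in high-risk or high-confidence cases is deferred to a human moderator, which reflects the proportional, context-sensitive decision-making the draft requires.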

Monthly Compliance Reports: The monthly compliance reports filed by SSMIs must also specify the number of content pieces or communication links relating to synthetically generated or manipulated information that were removed proactively, the categories flagged by automated tools, and the actions taken on user grievances related to synthetic media.
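A monthly report covering the items above might be structured as in this sketch; the schema and field names are assumptions drawn from the listed requirements, not a format prescribed by the draft Amendments, and the figures are purely illustrative.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class MonthlySyntheticMediaReport:
    """Illustrative structure for an SSMI's monthly compliance report."""
    month: str                                   # e.g. "2025-11"
    links_removed_proactively: int               # content pieces / communication links
    synthetic_items_removed: int                 # deepfakes and manipulated media
    categories_flagged_by_automation: dict[str, int] = field(default_factory=dict)
    grievances_received: int = 0
    grievances_actioned: int = 0

# Purely illustrative example figures.
report = MonthlySyntheticMediaReport(
    month="2025-11",
    links_removed_proactively=1200,
    synthetic_items_removed=340,
    categories_flagged_by_automation={"impersonation": 210, "election": 45},
    grievances_received=90,
    grievances_actioned=88,
)
print(json.dumps(asdict(report), indent=2))
```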

Traceability of First Originator: Messaging services identified as SSMIs must enable identification of the “first originator” of a message when required for criminal investigations, pursuant to a court order or an order of a government authority under Section 69 of the IT Act. The ability to trace the source becomes critical in curbing malicious use of AI-generated material. While the draft Amendments contemplate origin tracing only under narrow legal circumstances, they highlight the growing need to hold creators and distributors of harmful synthetic media accountable.

User Verification and Marking: SSMIs are expected to provide users with tools to easily identify verified accounts, which becomes crucial in an environment where synthetically generated content can impersonate real individuals. By giving users visibility into whether an account is genuine, the draft Amendments help reduce the likelihood of deepfake-based impersonation gaining credibility.

Notification Before Content Removal: The draft Amendments further require intermediaries to notify users and provide reasons before removing content of their own accord, particularly when it involves AI-generated content. They must also offer users an opportunity to appeal such actions. This transparency requirement helps users understand the distinction between permissible digital creativity and harmful synthetic media.

Non-Compliance and Loss of Safe Harbour

If an intermediary fails to fulfil its obligations under the draft Amendments, it loses the legal immunity provided under Section 79 of the IT Act, which normally shields intermediaries from liability for third-party content. Losing this protection exposes the intermediary to direct liability for unlawful content posted by its users.

Impact and Significance of the Draft Amendments

The draft Amendments are designed to strike a crucial balance between promoting free expression and enforcing responsible governance of AI-generated, synthetic content. Users are safeguarded against harmful content and have access to appellate mechanisms if they feel their concerns are inadequately addressed. Under the draft Amendments, digital platforms, often referred to as intermediaries, are no longer allowed to function as passive carriers of information. They are required to take proactive steps in moderating content, including implementing robust systems for user verification to prevent anonymity in cases of abuse and ensuring prompt action against synthetically generated content. This shift requires platforms to develop comprehensive policies and technologies to monitor and manage the vast array of information shared on their services.

Overall, the draft Amendments set the tone for a comprehensive framework which enhances digital governance while protecting the rights of users, holding platforms accountable, and ensuring a safe and ethical online ecosystem.

Conclusion

The draft Amendments seek to be the building block for a comprehensive regulatory framework that governs intermediaries, digital publishers, social media platforms, and online gaming services in India. By defining synthetically generated information and integrating the concept into intermediary obligations, grievance mechanisms, and transparency requirements, the draft Amendments acknowledge the disruptive potential of AI-generated content while establishing safeguards to mitigate harm.

Although the draft does not restrict the creative or innovative use of AI-driven media, it imposes a clear duty on intermediaries and publishers to ensure that synthetic media is not used deceptively, unlawfully, or in a manner that compromises individual rights or public order. As technology rapidly evolves, provisions related to synthetically generated information will remain central to maintaining trust, authenticity, and accountability in India’s digital ecosystem. As India continues to expand its digital footprint, these draft Amendments will play a critical role in shaping the future of online interactions, content delivery, and digital governance.

Authors: Srinjoy Banerjee and Shivi Gupta

First Published by: Mondaq