Combating AI deepfakes: The face of fraud has no IP rights

The rise of AI and deepfake technology poses a challenge to industries and legal systems globally. Extremely realistic audiovisual manipulations generated by machine learning are being used to deceive and defraud. They erode trust, damage reputations and cause serious financial harm in finance, education, entertainment and politics.

India lacks comprehensive legislation specifically addressing deepfakes. However, the courts are providing relief through civil remedies such as personality rights protection, trademark enforcement and John Doe orders. In the recent landmark case of Ankur Warikoo and Anr v John Doe and Ors, the Delhi High Court granted interim relief to Ankur Warikoo, a well-known personal finance educator and influencer. AI-generated deepfake videos that used his image, voice and likeness without authorisation circulated on social media platforms, persuading unsuspecting viewers to join WhatsApp groups offering fraudulent stock market advice. This case is one of the first in India to deal so directly with the misuse of deepfake technology in financial scams.

Proceedings were brought against the unidentified individuals, or John Does, who created and distributed the false content. A John Doe order allows courts to issue injunctions against unknown parties, which is particularly powerful against digital crimes and IP violations when the perpetrators cannot be immediately identified. Plaintiffs and authorities may act on circumstantial evidence and digital footprints.

Here, victims were encouraged to invest through obscure apps or digital accounts that were subsequently frozen, resulting in financial losses. The videos replicated Warikoo’s facial expressions, voice and brand, producing content indistinguishable from his genuine material. Because Warikoo is a trusted voice in personal finance, his followers were particularly vulnerable to the deception.

Although the false content was reported through Meta’s brand rights protection portal and then to its cybercrime unit and the grievance appellate committee, many fraudulent posts remained active. The court was critical of Meta, the second defendant, for failing to remove the content in a timely manner.

The court issued orders to protect Warikoo’s personality rights and the business interests of his company, Zaan WebVeda Pvt Ltd. The unidentified defendants were restrained from misusing Warikoo’s likeness through any medium, including AI and deepfake technologies, for personal or commercial gain. Meta was directed to take down all infringing URLs within 36 hours and to disclose the associated user details. The plaintiffs were granted leave to report future deepfake content, on which Meta must act promptly.

This case illustrates how AI-generated deepfakes are a serious threat not just to privacy and reputation but also to financial security. The technology, originally developed for entertainment and simulation, is now being exploited for impersonation and fraud. Impersonating public figures multiplies the criminal impact because scammers weaponise their reputations. Individuals suffer real-world financial harm when, as in Warikoo’s case, followers mistakenly believe the personality endorses investment schemes.

These issues reflect wider concerns with current digital ecosystems. Intermediaries, such as social media platforms, must be quicker to respond to takedown requests. Meta’s failure to act despite repeated complaints shows the need for greater accountability and more effective redress. The inclusion of the Department of Telecommunications and the Ministry of Electronics and Information Technology as defendants signals the increasing role of regulatory agencies in ensuring digital safety.

This case also highlights the need for robust legislation. As synthetic media becomes more realistic and easier to create, the law must evolve to provide swift and preventive remedies. Personality rights protection, previously confined to celebrities, must extend to digital content creators, influencers and educators.

The judgment is a turning point in the legal response to AI and deepfake fraud. It demonstrates the judiciary’s commitment to protecting individuals from emerging technological threats and sets a precedent for future digital impersonation cases. It is a stark reminder that digital trust must be protected, not only by content creators but also by platforms, regulators and courts.

Authors: Manisha Singh and Kratika Patel

First published by: IBLJ