The Growing Threat of Deepfakes
Deepfake technology has emerged as a significant challenge in the digital landscape, raising concerns about misinformation, privacy violations, and security risks. As India grapples with the implications of synthetic media, the government has initiated discussions with major technology companies to understand their policies and measures for tackling the issue. In response, tech giants such as Google, Meta, and X have engaged with policymakers to outline their strategies for combating manipulated media. While these companies have taken steps to address the problem, gaps remain in regulation, enforcement, and user protection.
Tech Giants’ Approach to Deepfakes
In January 2024, Google, Meta, and X participated in a stakeholder consultation meeting with the Indian government. The meeting, held by the Ministry of Electronics and Information Technology (MeitY), aimed to assess the preparedness of these platforms in handling deepfakes and other emerging technologies.
Google’s Policy on Deepfakes
Google representatives highlighted that the company has had a deepfake policy in place since November 2023. Under the policy, content creators are required to disclose synthetic content and label it appropriately. Google has also implemented a mechanism for individuals to report instances where their persona is being misused in a deepfake, allowing for content removal in such cases. In addition, the company employs artificial intelligence (AI) tools to detect and remove harmful manipulated content.
Meta’s Efforts to Label AI-Generated Content
Meta, which introduced its AI labelling policy in April 2024, requires users to disclose when they upload AI-generated content. The policy extends to advertisements, ensuring that viewers are informed if the media has been digitally altered. Meta representatives emphasized that their policies are technology-neutral, meaning enforcement applies regardless of whether the manipulation is explicitly a deepfake. The company is also working on mechanisms to protect celebrity personas from unauthorized deepfake usage.
X’s Stance on Synthetic Media
X (formerly Twitter) outlined its "synthetic and manipulated media policy," under which deceptive content is taken down. However, X emphasized that not all AI-generated content is inherently deceptive or harmful, and urged regulators to distinguish between malicious deepfakes and creative or harmless AI-generated media. The company asserted that only content deemed "extremely deceptive and harmful" should warrant strict intervention.
Regulatory Discussions and Next Steps
The consultation meeting underscored the need for clearer regulations on AI content disclosure, labelling standards, and grievance redressal mechanisms. The stakeholders collectively advocated for a regulatory framework focused on curbing malicious actors rather than restricting innovative uses of deepfake technology.
As part of its mandate, the MeitY-constituted nine-member committee, established following an order from the Delhi High Court in November 2024, will continue engaging with technology companies, legal experts, and victims of deepfakes over the next three months. The objective is to develop comprehensive guidelines that balance technological innovation with user protection and ethical considerations.
A Call for Responsible AI Governance
While Google, Meta, and X have implemented measures to address deepfake concerns, gaps persist in enforcement and user protection. The evolving nature of synthetic media necessitates proactive regulatory frameworks that ensure transparency, accountability, and the safeguarding of individual identities. As India progresses in its efforts to regulate deepfakes, collaboration between tech companies, policymakers, and the public will be crucial in striking a balance between innovation and ethical responsibility. The upcoming recommendations from the MeitY committee will play a critical role in shaping the future of AI governance in India and beyond.
(With inputs from agencies)