
YouTube to strike AI-generated deepfakes involving violence soon


YouTube is implementing new measures to tackle the rise of AI-generated deepfake content that “realistically simulates” deceased minors or victims of violent events describing their deaths. The policy change, which takes effect on January 16, aims to address instances where AI is used to recreate the likeness of deceased or missing children.

True crime content creators have been leveraging AI to give child victims of high-profile cases a synthetic “voice” that narrates the circumstances of their deaths. The move comes in response to disturbing AI narrations of cases such as the abduction and murder of James Bulger, the disappearance of Madeleine McCann, and the torture and murder of Gabriel Fernández.


YouTube will remove content containing AI-generated deepfakes that violates the new policies, and users who receive a strike will face a one-week restriction on uploading videos, live streaming, or posting stories. Channels that accumulate three strikes will be permanently removed from the platform. The initiative is part of YouTube’s broader effort to curb content that violates its harassment and cyberbullying policies.

Creators will need to disclose when they use altered or synthetic content that appears realistic

The platform introduced updated policies around responsible AI content disclosures a couple of months earlier, along with tools to request the removal of deepfakes. Creators will need to disclose when they create altered or synthetic content that appears realistic, with non-compliance risking penalties such as content removal, suspension from the YouTube Partner Program, or other disciplinary action. The update also stated that the platform will remove certain AI-generated content that portrays “realistic violence,” even if it is labeled appropriately.

The move aligns with broader industry trends addressing the responsible use of AI-generated content. In September 2023, TikTok introduced a tool for creators to label their AI-generated content following an update to its guidelines requiring the disclosure of synthetic or manipulated media depicting realistic scenes.

TikTok retains the authority to take down AI-generated images that lack proper disclosure. Both YouTube’s and TikTok’s measures reflect growing awareness of and concern about the potential misuse of AI technologies, particularly in sensitive contexts such as the realistic portrayal of violence or the exploitation of tragic events. Meta also updated its policy toward the end of last year to counter deepfake ads ahead of the 2024 election.