
# YouTube cracks down on AI content that ‘realistically simulates’ deceased children or victims of crimes




YouTube is updating its harassment and cyberbullying policies to prohibit content that “realistically simulates” the deaths of minors or victims of deadly or violent events. The Google-owned platform says it will start removing such content on January 16.

The policy shift comes as some true crime content creators use artificial intelligence to recreate the likenesses of deceased or missing children. In these disturbing cases, creators give the child victims of high-profile cases a childlike “voice” to describe their own deaths.

In recent months, content creators have used AI to narrate a number of high-profile cases, including the kidnapping and death of James Bulger, a British two-year-old, as reported by The Washington Post. Similar AI narratives exist for Madeleine McCann, a British three-year-old who vanished from a resort, and Gabriel Fernández, an eight-year-old boy who was tortured and murdered in California by his mother and her boyfriend.

Content that violates the new policies will be removed from YouTube, and users who receive a strike will be unable to upload videos, livestreams, or stories for one week. After three strikes, the user’s YouTube channel will be permanently removed.

The new changes come nearly two months after YouTube introduced policies on responsible disclosure of AI content, along with new tools for requesting the removal of deepfakes. One of those changes requires users to disclose when they’ve created realistic-looking altered or synthetic content. The company has warned that users who fail to properly disclose their use of AI will face “content removal, suspension from the YouTube Partner Program, or other penalties.”

Furthermore, YouTube stated at the time that some AI content may be removed if it is used to depict “realistic violence,” even if it is labeled as such.

In September 2023, TikTok launched a tool that lets creators label their AI-generated content, after the social app updated its guidelines to require creators to disclose when they post synthetic or manipulated media depicting realistic scenes. TikTok’s policy allows it to remove undisclosed realistic AI images.
