YouTube will begin requiring creators to tag AI-generated content on its platform in the coming months, and the platform will inform users when they are viewing content created with artificial intelligence, the company said in a blog post Tuesday.

YouTube will also allow people to request the removal of doctored videos “that simulate an identifiable individual, including their face or voice.”

“Not all content will be removed from YouTube and we will consider a variety of factors when evaluating these requests,” the company said in the blog post. “This could include whether the content is a parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.”

YouTube will have a similar removal process for its music partners covering AI-generated music that imitates an artist’s voice. These requests will first be available to labels or distributors representing artists in YouTube’s “first AI music experiments” and will later expand to additional artist representatives.


YouTube has revealed its latest AI disclosure requirements, informing users when they are watching synthetic content. (YouTube)

But YouTube is also expanding its own use of AI in content creation and moderation.

“A clear area of impact has been the identification of new forms of abuse,” the company said. “When new threats emerge, our systems have relatively little context to understand and identify them at scale. But generative AI helps us quickly expand the set of information our AI classifiers are trained on, meaning we can identify and capture this content much more quickly. Improving the speed and accuracy of our systems also allows us to reduce the amount of harmful content that human reviewers are exposed to.”

At the end of 2020, YouTube said it faced challenges when it relied more heavily on automation in its moderation processes, and that its AI made more mistakes than human reviewers.

AI content on YouTube falls under the platform’s existing Community Guidelines, which already prohibit “technically manipulated content” that misleads viewers and may harm others.

Some of the videos, particularly those covering “sensitive topics” such as “elections, ongoing conflicts and public health crises, or public officials,” will have a visible label in the video player itself.

YouTube wrote that the policy would cover an AI video that “realistically depicts an event that never happened, or content that shows someone saying or doing something they didn’t actually do.”

Artificial intelligence technology capable of creating realistic videos, sometimes called deepfakes, has rapidly improved and become more accessible in recent years, to the point that anyone who downloads an app can quickly create manipulated media. More sophisticated methods can produce false but realistic depictions of recognizable people saying and doing things that never actually happened. The most common use of this technology is to depict women in non-consensual pornographic videos.