If you’re a regular YouTube viewer, you’ve probably noticed the growing presence of AI over the past year. From AI-generated thumbnails to fully AI-generated voiceovers and videos, artificial intelligence has quickly become part of the platform’s landscape.

In response, YouTube has stepped up to protect creators with new tools. The platform’s notorious Content ID system, known for triggering demonetization when creators unintentionally include copyrighted music, will soon gain AI detection capabilities. Specifically, YouTube plans to use Content ID to detect AI-generated voices that mimic existing artists. The detection tool is still being developed with the platform’s partners, and a broader rollout is expected by 2025.

AI-generated imagery and videos are also in YouTube’s sights. The platform has revealed that it is actively developing technology to identify and manage AI-generated visuals, such as deepfake videos of real people. While there’s no official timeline for when this feature will be available, it’s clear that YouTube is taking the threat of AI impersonation seriously.

In addition to content impersonation, YouTube is tackling the issue of AI scraping, where publicly available videos are used to train AI models. This practice, exemplified by Nvidia’s reported use of YouTube content for model training, may violate YouTube’s terms of service. With video generation rapidly evolving, YouTube and Google are working on systems to prevent unauthorized data collection, but details on how they will enforce this remain scarce.

As AI technology advances, creators worry about their likeness being stolen or replicated. While YouTube’s response includes bolstering its systems to detect unauthorized access, questions remain about how effective these measures will be in protecting creators.

Interestingly, YouTube’s terms of service don’t prohibit the company itself, or its parent company Google, from using uploaded videos for AI training. And while creators are now required to disclose the use of AI in their videos, a recent report indicated that YouTube let OpenAI scrape its content without mounting a legal challenge, reportedly to avoid setting a precedent that could constrain Google’s own AI ambitions.