YouTube's Latest Experiment Is A Great Example Of How Not To Use AI
While ChatGPT, Gemini, and other generative AI products have their uses, some companies are going overboard. Beyond issues like hallucinations or AI screwing up — like deleting an entire code database because it "panicked" — there are also concerns about how AI is being used without the knowledge or permission of users. YouTube has now given us a perfect example of how that could happen.
In one of the platform's most recent experiments, YouTube started making small edits to some videos without alerting the creators first. While the changes weren't made by generative AI, they did rely on machine learning. For the most part, the reported changes appear to add definition to details like wrinkles, along with clearer skin and sharper edges in some videos.
While YouTube has implemented useful AI tools in the past, such as helping creators come up with video ideas, these most recent changes are part of a larger issue: they're being made without user consent.
Why consent matters so much
We live in a world where AI is becoming increasingly unavoidable due to a lack of regulation. That is unlikely to change anytime soon, as officials like President Trump continue to push for an AI action plan that helps companies invest in AI and expand it as quickly as possible. Therefore, it's up to these companies to prioritize seeking consent from users when implementing AI.
According to a report by the BBC, some YouTubers are more concerned than others; for instance, YouTuber Rhett Shull made an entire video calling attention to YouTube's AI experiment. YouTube addressed the experiment a few days ago, with YouTube creator liaison Rene Ritchie noting on X that this isn't the result of generative AI. Instead, machine learning is being used to "unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)."
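For a sense of what that kind of non-generative processing can look like, here's a minimal sketch using OpenCV's stock denoising and unsharp masking. To be clear, this is an illustration of the general technique Ritchie describes, not YouTube's actual pipeline (which hasn't been published), and the file names and parameter values are assumptions made for the example.

```python
# A rough sketch of a classical "denoise and sharpen" pass on a single
# video frame. This is NOT YouTube's pipeline; the file names and
# parameter values below are illustrative assumptions.
import cv2

# Load one frame of video (hypothetical input file).
frame = cv2.imread("frame.png")

# Non-local means denoising: removes sensor noise while preserving edges.
denoised = cv2.fastNlMeansDenoisingColored(
    frame, None, h=5, hColor=5, templateWindowSize=7, searchWindowSize=21
)

# Unsharp masking: push the frame away from a blurred copy of itself to
# exaggerate edges. This is the kind of step that "adds definition" to
# fine detail such as wrinkles.
blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

cv2.imwrite("frame_enhanced.png", sharpened)
```

In practice, a trained model typically replaces these hand-tuned filters, but the visible effect on footage is similar: smoother skin, crisper edges.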
YouTube has a great deal of control over all of the content that users upload. That's not the issue. The issue is that YouTube has been making these changes without users' consent, which also means those videos are effectively being treated as training material for its machine learning processes. And that has always been a problem with AI development.
Machine learning is still AI
Generative AI is certainly the talk of the industry right now, but machine learning is still AI. There is still an algorithm behind the scenes doing all of the heavy lifting, and it's working off of material it was trained on. YouTube can liken this machine learning to what your smartphone camera already does, but the difference is that you know your phone is doing it. YouTube didn't even reveal the existence of this experiment until creators started complaining about it.
That's not the right way to handle AI, especially since the technology is far from perfect. Machine learning may not suffer from the same pitfalls as generative AI, but just because we don't have to worry about YouTube feeding us bogus AI-generated crime alerts like some other apps have, that doesn't make this any less invasive a move by a company intent on implementing AI everywhere it can.
YouTube hasn't shared when the experiment will end or whether there will eventually be a wider rollout. That said, if you're watching YouTube Shorts and notice that videos look a little weird and strangely upscaled, it's probably because YouTube has started editing them in an attempt to make them better, even if the effort is making some people angry.