Summary
- AI videos sometimes include watermarks and other warnings, but not always.
- Keep a sharp eye out for impossible physics and botched details, like extra fingers or illegible text.
- Extraordinary claims require extraordinary evidence.
I’m not going to mince words: we’re at a dangerous moment when it comes to generative AI. While it’s often used for fun or other legitimate purposes, its fidelity is now good enough that it can be used to pull the wool over our eyes, especially by groups looking to sow social or political discord. With a few text prompts and source images, they can invent controversies that never happened. We need fact-checking more than ever.
All this makes it extremely important to spot AI videos when they appear on platforms like Facebook, Telegram, and WhatsApp. If you can catch one, you won’t just be protecting yourself from disinformation — you’ll be protecting other people, since not everyone is equipped with a skeptic’s toolbox.
Scan for watermarks and warning labels
A helpful aid from AI makers
Some platforms have started adding watermarks to AI videos. At times, this is simply for the sake of getting users to pay more — the watermarks are stripped from downloaded Sora videos if you’re willing to pay $200 per month for ChatGPT Pro, for example. In other situations, however, companies are aware of the dangers of AI, and don’t want to be blamed if someone starts using their product to deceive the public.
Typically, watermarks come in the form of small icons found in the corner of a video. These can be manually removed, but that’s a potentially complicated and time-consuming process. Marks need to be erased from every frame, and even a slight slip-up can show the creator’s hand. Watch out for any visual distortions where a watermark would be.
Some social networks have ways of automatically detecting AI-generated content and flagging it. In Meta apps like Facebook and Instagram, look for an AI info button within a post — that signals that the content was partly or entirely generated using AI.
Watch for impossible physics
AI isn’t the miracle some people think it is
With some AI videos, I’m blown away that a computer could render them on its own, even though I understand the basics of what it’s doing. But for every clip that generates a “wow,” there’s another that violates basic physics. Look for people and animals clipping through each other, or rotating their limbs in ways that in real life would mean a trip to the hospital.
Sometimes, though, you have to search for less dramatic cues. A person may be jumping too high, say, or a flag might be flapping in the wind the wrong way. Lighting sources could be off. You might also notice that motion is too smooth — real humans and animals are often hesitant or uncoordinated. For that matter, there are limits to what real-world camera operators can do, even when they’re using a cinema-level drone.
AI often gets the small details wrong
Or sometimes, all too perfect
A fake image of the Pope created using Midjourney. Notice problems with his crucifix and right hand.
The best-known way of spotting an AI video is by looking at a person’s fingers — humans have five on each hand, obviously, barring issues like accidents or genetic mutations. Another popular tactic involves checking text, since video generators frequently misspell words or render them completely illegible.
AI tools are getting better at these things, so it’s worth expanding your horizons. Check the patterns on objects like art and clothing: they may look arbitrary or repetitive under closer scrutiny. In other cases, an object might look too slick, lacking the imperfections of the real thing. Real machines get dirty. Real humans have pores, moles, and blemishes. A person with shining, perfectly smooth skin all over is probably as authentic as a Barbie doll.
Confirmation bias is the enemy
Disinformation tends to feed into things we already believe about a topic. If you’ve repeatedly been told that Kamala Harris is a secret communist, for example, it’s probably easier to believe an image of her wearing a Soviet hammer and sickle, as Elon Musk depicted her during the 2024 US presidential election (via CNN). I don’t mean to single out right-wing disinformation; it’s just that if someone as prominent as Musk is willing to use it, there may be people in the shadows with even fewer scruples.
If a video makes extraordinary claims, or panders to your deepest fears and beliefs, that’s your cue to check other sources for verification and counter-arguments. Extraordinary claims require extraordinary evidence, as Carl Sagan put it. Don’t hinge your entire worldview on a video that comes from the same place you browse funny memes and status updates. Indeed, now that fact-checking is dead at X and Meta, you should be skeptical of just about everything on their platforms that could be controversial.