How to identify AI-generated videos

Sorry to disappoint, but if you’re looking for a quick list of foolproof ways to detect AI-generated videos, you’re not going to find it here. Gone are the days of AI Will Smith grotesquely eating spaghetti. Yes, there are some tells, but AI video makers are getting better all the time, and the latest tools can create convincing, photorealistic videos with a few clicks.

Right now, AI-generated videos are still a relatively nascent modality compared to AI-generated text, images, and audio, because getting all the details right is a challenge that requires a lot of high-quality data. “But there’s no fundamental obstacle to getting higher quality data,” only labor-intensive work, said Siwei Lyu, a professor of computer science and engineering at the University at Buffalo, SUNY.

In the past six months, AI video generators have become so good at creating realistic videos that they often dupe the casual scroller. Telltale artifacts that used to give the game away, such as morphing faces and shape-shifting objects, are seen far less frequently. There’s not much fakery in evidence in the viral AI-generated videos of the emotional support kangaroo, bunnies on a trampoline, or street interviews created with Google’s Veo 3 model (which can generate sound with videos).

The key to identifying AI-generated videos, as with any AI modality, lies in AI literacy. “Understanding that [AI technologies] are growing and having that core idea of ‘something I’m seeing could be generated by AI’ is more important than, say, individual cues,” said Lyu, who is the director of UB’s Media Forensic Lab.

Navigating the AI slop-infested web requires using your online savvy and good judgment to recognize when something might be off. It’s your best defense against being duped by AI deepfakes, disinformation, or just low-quality junk. It’s a hard skill to develop, because every aspect of the online world fights against it in a bid for your attention. But the good news is, it’s possible to fine-tune your AI detection instincts.

“By studying [AI-generated images], we think people can improve their AI literacy,” said Negar Kamali, an AI research scientist at Northwestern University’s Kellogg School of Management, who co-authored a guide to identifying AI-generated images. “Even if I don’t see any artifacts [indicating AI generation], my brain immediately thinks, ‘Oh, something is off,'” added Kamali, who has studied thousands of AI-generated images. “Even if I don’t find the artifact, I can’t say for sure that it’s real, and that’s what we want.”

What to look out for: Imposter videos vs. text-to-image videos

Before we get into identifying AI-generated videos, let’s distinguish the different types. AI-generated videos generally fall into two categories: imposter videos and videos generated by a text-to-image diffusion model.

Imposter videos are AI-edited videos that rely on face swapping, where a person’s entire face is swapped out for someone else’s (usually a celebrity or politician) and made to say something fake, and lip syncing, where a person’s mouth is subtly manipulated and replaced with different audio.

Imposter videos: why regulators are cracking down

Imposter videos are generally pretty convincing; the technology has been around longer, and they build on existing footage instead of generating something from scratch. Remember those Tom Cruise deepfake videos from a few years ago that went viral for being so convincing? They worked because the creator, Chris Ume, worked with a professional Tom Cruise impersonator who looked a lot like him, and did lots of minute editing, according to an interview with Ume via The Verge.

These days, there are an abundance of apps out there that accomplish the same thing and can even — terrifyingly — include audio from a short sound bite that the creator finds online.

That said, there are some things to look for if you suspect an AI video deepfake. First of all, look at the format of the video. AI video deepfakes are typically “shot” in a talking-head format, where you can just see the head and shoulders of the speaker, with their arms out of view (more on that in a minute).

To identify face swaps, look for flaws or artifacts around the boundaries of the face. “You typically see artifacts when the head moves obliquely to the camera,” said digital forensics expert and UC Berkeley professor of computer science Hany Farid. As for the arms and hands, “If the hand moves, or something occludes the face, [the image] will glitch a little bit,” Farid continued. And watch the arms and body for natural movements. “If all you’re seeing is this” (on our Zoom call, Farid held his arms stiff and by his sides) “and the person’s not moving at all, it’s fake.”

If you suspect a lip sync, focus your attention on the subject’s mouth, especially the teeth. With fakes, “We have seen people who have irregularly shaped teeth,” or the number of teeth changing throughout the video, said Lyu. Another strange sign to look out for is “wobbling of the lower half” of the face, said Lyu. “There’s a technical procedure where you have to exactly match that person’s face,” he said. “As I’m talking, I’m moving my face a lot, and that alignment, if you get just a little bit of imprecision there, human eyes are able to tell.” This gives the bottom half of the face a more liquid, rubbery effect.


When it comes to AI deepfakes, Aruna Sankaranarayanan, a research assistant at the MIT Computer Science and Artificial Intelligence Laboratory, says her biggest concern isn’t deepfakes of the most famous politicians in the world, like Donald Trump or Joe Biden, but of important figures who may not be as well known. “Fabrication coming from them, distorting certain facts, when you don’t know what they look like or sound like most of the time, that’s really hard to disprove,” said Sankaranarayanan, whose work focuses on political deepfakes. Again, this is where AI literacy comes into play; videos like these require some research to verify or debunk.

In April 2025, Congress passed the Take It Down Act, making it a federal crime to post or share nonconsensual intimate imagery. Another bill, the NO FAKES Act, is making its way through the Senate; it aims to provide legal protections against AI-generated replicas.

How to spot text-to-image videos

While regulators are cracking down on imposter videos, text-to-image generators have exploded in popularity. You can now generate AI videos directly within ChatGPT and Google Gemini. And Luma, Kling, and Freepik are just a few of the other easy-access video generators that have proliferated online.

With a short text description, you can generate any kind of video your imagination dreams up. The majority of AI-generated videos shared online fall into the category of “Hey, look what I can do with this cool new technology.” This can range from the absurd, like a cat jumping off an Olympic diving board, to the downright misleading, like fake videos of hurricane damage. But all of it contributes to a confusing, dystopian experience where it’s harder and harder to separate AI-generated fiction from reality.

What’s more, many accounts circulating AI-generated videos are profiting from the clickbait by deliberately deceiving users. On TikTok, it’s practically impossible to know whether that creator selling you the latest skincare product is AI-generated or not. AI-generated videos created with TikTok’s tools are automatically disclosed, but that doesn’t stop users from uploading AI-generated or edited videos created with tools outside of the platform.

You can try looking for context clues, the experts say. Farid said to look out for “temporal inconsistencies,” such as “the building added a story, or the car changed colors, things that are physically not possible,” he said. “And often it’s away from the center of attention where that’s happening.” So, home in on the background details. You might see unnaturally smooth or warped objects, or a person’s size change as they walk around a building, said Lyu.

Kamali says to look for “sociocultural implausibilities,” or context clues where the reality of the situation doesn’t seem plausible. “You don’t immediately see the telltales, but you feel that something is off, like an image of Biden and Obama wearing pink suits,” or the Pope in a Balenciaga puffer jacket.

The artifacts may change, but good judgment remains.

But relying too much on particular cues to verify whether a video is AI-generated could get you into trouble.

Lyu’s 2018 paper on detecting AI-generated videos by spotting that the subjects didn’t blink properly was widely publicized in the AI community. As a result, people started looking for eye-blinking defects, but as the technology progressed, so did more natural blinks. “People started to think if there’s good eye blinking, it must not be a deepfake, and that’s the danger,” said Lyu. “We actually want to raise awareness but not latch onto particular artifacts, because the artifacts are going to be amended.”

Building the awareness that something might be AI-generated will “trigger a whole sequence of actions,” said Lyu. “Check: who’s sharing this? Is this person reliable? Are there any other sources corroborating the same story, and has this been verified by some other means? I think those are the most effective countermeasures for deepfakes.”

For Farid, identifying AI-generated videos and misleading deepfakes starts with where you source your information. Take the AI-generated images that circulated on social media in the aftermath of Hurricane Helene and Hurricane Milton. Most of them were pretty obviously fake, but they still had an emotional effect on people. “Even when these things are not very good, it doesn’t mean that they don’t penetrate, it doesn’t mean that it doesn’t sort of impact the way people absorb information,” he said.

Be cautious about getting your news from social media. “If the image feels like clickbait, it is clickbait,” said Farid, before adding that it all comes down to media literacy. Think about who posted the video and why it was created. “You can’t just look at something on Twitter and be like, ‘Oh, that must be true, let me share it.'”

If you’re suspicious about AI-generated content, check other sources to see if they’re also sharing it, and whether it all looks the same. As Lyu says, “a deepfake only looks real from one angle.” Search for other angles of the event in question. Farid recommends sites like Snopes and PolitiFact, which debunk misinformation and disinformation. As we all continue to navigate the rapidly changing AI landscape, it’s going to be crucial to do the work, and to trust your gut.

How are AI companies labeling AI-generated videos?

Some AI companies, including Google and OpenAI, have ways of labeling their AI-generated videos as such. With every video generated by Veo, Google embeds an invisible watermark called SynthID. After the launch of Veo 3 caused a wave of concern, the company also added a visible watermark labeling videos as AI-generated.

OpenAI, Adobe, and other companies label their AI-generated videos and images with invisible watermarks using a technical standard developed by the nonprofit Coalition for Content Provenance and Authenticity (C2PA).

While visible watermarks may seem like an obvious solution, they can also be easily removed. And there’s the question of whether they even matter. A study from Stanford University’s Institute for Human-Centered AI (HAI) recently found that visible labels indicating AI-generated content “may not change its persuasiveness.” After all, we’re used to all sorts of meaningless logos on viral videos; it’s easy to visually tune them out.

Invisible watermarks, on the other hand, are baked into the file’s metadata. This makes them harder to remove and easier to track.
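To illustrate what “baked into the metadata” means in practice: C2PA provenance data travels inside the file itself, so its presence can sometimes be hinted at with a simple byte scan. The sketch below is only a crude, hypothetical heuristic for demonstration, not real verification; a proper check must parse the embedded manifest and validate its cryptographic signatures with dedicated tooling, and a missing marker proves nothing.

```python
def may_contain_c2pa(path: str) -> bool:
    """Crude heuristic: scan a file's raw bytes for the C2PA label.

    C2PA manifests embedded in media files carry the label "c2pa", so
    finding that byte string *suggests* provenance data may be present.
    This is a demonstration only: it can false-positive on incidental
    bytes and says nothing about whether the manifest is valid.
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()
```

A real workflow would hand the file to a C2PA-aware verifier rather than trusting a byte scan, since an attacker can strip or forge unverified metadata.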

Standards like C2PA are a step in the right direction, but for now it’s up to the companies to voluntarily adhere to these standards. Perhaps one day, those standards will be enforced by regulators. In the meantime, our best bets are still sound judgment and strong media literacy.




