How to Detect AI Generated Images and Videos Using OSINT
The video had already been shared 40,000 times before anyone questioned it.
A well-known politician appeared to be confessing to something damaging. The lighting was right. The voice matched. The facial expressions looked real enough that casual viewers didn't pause for even a second. Three journalists had embedded it in their pieces before a single analyst ran it through a basic frame-by-frame check.
It was a deepfake. An AI-generated video built in under two hours using freely available software. By the time that was confirmed, the damage was done.
This is the problem. Not that deepfakes exist — they've existed for years. The problem is that most people still don't know how to detect AI generated images and videos before they share, act on, or believe them. This guide fixes that. It walks you through the OSINT methodology that investigators and analysts actually use, from naked-eye visual checks to forensic tools that surface what humans miss.
No technical background needed. Just patience and a clear process.
Why This Is Getting Harder — And Why It Still Matters
The generation tools have gotten dramatically better. Midjourney, DALL·E, Stable Diffusion, and dozens of newer models now produce images indistinguishable from photographs at first glance. Video deepfake tools that required a GPU farm and weeks of training in 2020 now run on consumer hardware in hours.
Most people overlook this shift and assume they'll be able to tell. They won't — not reliably, not without a structured approach.
The gap between a capable analyst and an average viewer is a learnable skill, not a talent. Here's how to close it.
Step 1: Start With the Context, Not the Content
This is the step most detection guides skip, and it's the most important one.
Before you analyze a single pixel, interrogate the circumstances. Who shared this? On what platform? When? What is it trying to make you feel or believe — and does that emotional pull feel engineered?
Synthetic media rarely appears in a vacuum. It's almost always deployed to support a specific claim. Ask whether the claim is verifiable through other independent sources.
Pro Tip: In OSINT investigations, context failure alone is often sufficient to flag content as suspicious. A "leaked" video that surfaced on a brand-new account, gets amplified by coordinated activity, and supports a convenient narrative deserves scrutiny.
Step 2: Visual Inspection — What the Human Eye Can Catch
Eyes, Blinking, and Gaze
Genuine humans blink roughly 15 to 20 times per minute. Many video deepfakes either blink too infrequently or produce brief, mechanical-looking blinks that don't sync naturally with facial expression changes.
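Once you have per-frame eye openness from a facial landmark detector (dlib and MediaPipe both expose the points needed to compute an eye-aspect ratio, or EAR), the blink-rate check above is easy to automate. The sketch below is illustrative: the EAR series is assumed input, and the 0.21 threshold is a common rule of thumb, not a standard.

```python
def blinks_per_minute(ear_series, fps, threshold=0.21):
    """Estimate blink rate from a per-frame eye-aspect-ratio (EAR) series.

    A blink is counted as a contiguous run of frames where EAR drops
    below the threshold. Real speakers land around 15-20 blinks per
    minute; a far lower rate in a talking-head clip is a deepfake signal.
    """
    blinks = 0
    below = False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1          # falling edge: a new blink starts
            below = True
        elif ear >= threshold:
            below = False
    minutes = len(ear_series) / fps / 60
    return blinks / minutes if minutes > 0 else 0.0
```

Feed it a 10-second, 30 fps series with two brief EAR dips and it reports roughly 12 blinks per minute — plausibly human. A one-minute clip with zero or one dip warrants a closer look.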
The Face-Background Boundary
Scan along the hairline, the jaw, and the ears. Look for slight blurring or smearing along the boundary where the synthetic face meets real hair or background — a flickering halo effect in motion.
Skin, Lighting, and Texture
Real skin has inconsistencies: pores, subtle blemishes, asymmetry. AI-generated skin tends toward being either unnaturally smooth or textured in a way that looks painted.
Audio-Visual Sync
Lip-syncing is where many deepfakes betray themselves. Watch the mouth during fast speech, especially on consonants like "p," "b," and "m" that require the lips to fully close.
Step 3: Metadata Analysis — What the File Remembers
ExifTool
ExifTool is the standard tool for metadata extraction. It returns everything embedded in the file: timestamps, GPS coordinates, camera make and model. Just as telling is what's missing — a conspicuous absence of camera tags, or a software tag naming a generation tool, is itself a signal.
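A minimal way to script this check, assuming ExifTool is installed and on your PATH. The tag names and the generator-keyword list below are illustrative, not exhaustive — treat the flags as prompts for closer inspection, never as a verdict.

```python
import json
import subprocess

# Keywords that sometimes show up in Software/Creator tags of
# AI-generated files (illustrative list, not exhaustive).
GENERATOR_HINTS = ("midjourney", "dall-e", "stable diffusion", "firefly")

def metadata_flags(tags):
    """Return suspicion flags for a dict of ExifTool tags."""
    flags = []
    if not any(k in tags for k in ("Make", "Model")):
        flags.append("no camera make/model recorded")
    if not any("Date" in k for k in tags):
        flags.append("no timestamps recorded")
    software = str(tags.get("Software", "")).lower()
    if any(hint in software for hint in GENERATOR_HINTS):
        flags.append(f"generator named in Software tag: {software!r}")
    return flags

def exiftool_tags(path):
    """Run `exiftool -json FILE` and return the tag dict for the file."""
    out = subprocess.run(["exiftool", "-json", path],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)[0]
```

Remember the caveat from the step above: clean metadata proves little, since platforms routinely strip it on upload. The interesting case is metadata that actively contradicts the claimed origin.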
InVID and WeVerify
InVID WeVerify is a browser extension built for journalists and OSINT analysts. It breaks a clip into individual keyframes for reverse image search.
Step 4: Forensic Visual Analysis
FotoForensics and Forensically
FotoForensics and Forensically are free tools that apply Error Level Analysis (ELA) to images. ELA re-saves the image at a known JPEG quality and highlights regions whose compression error differs from the rest — areas that were pasted in, retouched, or generated separately often stand out.
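To demystify what those tools do under the hood, here is a minimal ELA sketch using Pillow (an assumption — the hosted tools don't publish their exact pipelines, and production ELA is more refined than this).

```python
import io

from PIL import Image, ImageChops

def ela(path_or_file, quality=90):
    """Error Level Analysis sketch: re-save a JPEG at a known quality
    and diff against the original. Regions edited or generated
    separately often recompress at a different error level and
    stand out in the amplified difference image."""
    original = Image.open(path_or_file).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Scale the differences up so the error levels are visible
    max_err = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // max_err))
```

Interpreting the result takes practice: uniform noise across the frame is normal, while a face or object glowing at a distinctly different error level than its surroundings deserves attention.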
Hive Moderation
Hive Moderation offers a free-to-use AI content detector that returns a probability score for AI generation.
Step 5: Reverse Image Search — The Underrated Layer
Reverse image search establishes provenance: where did this image first appear, in what context, and does its history match the story being told about it?
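Reverse-search engines match near-duplicates rather than exact bytes, which is why a recompressed or lightly cropped copy still turns up. One common building block is a perceptual hash such as dHash; the pure-Python sketch below works on a grayscale pixel grid for illustration (real pipelines resize with an image library first).

```python
def dhash(gray, hash_w=8, hash_h=8):
    """Difference hash of a grayscale image given as a 2D list of
    0-255 values. Downsamples to (hash_w+1) x hash_h by nearest
    neighbour, then encodes whether each pixel is brighter than its
    right-hand neighbour. Near-duplicates (recompressed, resized,
    lightly edited) produce hashes only a few bits apart."""
    h, w = len(gray), len(gray[0])
    small = [[gray[y * h // hash_h][x * w // (hash_w + 1)]
              for x in range(hash_w + 1)]
             for y in range(hash_h)]
    bits = 0
    for row in small:
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (1 if a > b else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")
```

In practice you would hash the suspect image and the earliest copy you can find; a Hamming distance near zero means the same underlying picture, and the upload timestamps then tell you which context came first.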
Frequently Asked Questions
How do I detect AI generated images for free?
Start with ExifTool to check metadata, run the image through FotoForensics for ELA, and then submit it to Hive Moderation for a probability score.
What are the most common signs of a deepfake video?
Unnatural blinking patterns, blurring along the face-background boundary, lighting inconsistencies, and audio-visual sync errors on fast consonants.

Passionate OSINT investigator and cybersecurity professional with over 3 years of experience. Expertise in web penetration testing, background checks, fraud detection, and uncovering digital fingerprints. Providing verified truth in the digital shadows.
Need a Professional Investigation?
If this case sounds familiar, I can help. Get a confidential consultation today.