AI-Generated Video Detection
Detect if a video was generated with an AI model such as Sora, Veo, Runway, Pika and more.
Overview
The AI-Generated Video Detection Model helps you determine whether a video was entirely generated by an AI model or is a real video. This model was trained on millions of AI-generated and human-created videos spanning all sorts of content such as real life, art, cartoons and more.
The Model works by analyzing the visual (pixel) content of the video. No metadata is used in the analysis, so tampering with metadata such as EXIF data has no effect on the scoring.
The Model was trained to detect videos created by the main generators currently in use: Veo, Sora, Runway, Pika, Midjourney, Kling and others. Additional generators will be added over time as they become available.
Use cases
- Tag AI-generated videos as such, to limit the spread of misinformation and fake news
- Implement stricter moderation rules on AI-generated videos
- Detect potential fraud with fake video verifications
- Limit AI-generated spam
- Enact bans on AI-generated videos
Related model
For AI-image detection, you can use the AI-image detection model.
Generator-specific information
Sightengine's AI detection models compute per-generator confidence scores alongside a global AI probability score. For every image or video analyzed, the API response includes individual scores for each supported generator, giving you a complete fingerprint of the content.
The list of supported generators spans both images and videos, covering major commercial tools, open-source models, and older GAN-based architectures:
Video generators
| Generator | Creator | Example versions detected |
| --- | --- | --- |
| Higgsfield | Higgsfield AI | Higgsfield 1.0, Higgsfield Soul Cinema... |
| Kling | Kuaishou | Kling 1.0, Kling 1.5, ... |
| Midjourney | Midjourney | Midjourney Video, ... |
| Pika | Pika | Pika 1.0, Pika 1.5, ... |
| Runway | Runway | Gen-2, Gen-3, Gen-4... |
| Seedance | ByteDance | Seedance 1.5, Seedance 2.0, ... |
| Sora | OpenAI | Sora, Sora 2, ... |
| Veo | Google | Veo 1, Veo 2, Veo 3, ... |
| Wan | Alibaba | Wan 2.1, Wan 2.2, ... |
| Other generators | Various | Demamba, HotShot, LaVie, Hunyuan, Ray... |
New generators are added continuously as they appear in the wild.
Image generators
| Generator | Creator | Example versions detected |
| --- | --- | --- |
| DALL-E | OpenAI | DALL-E 2, DALL-E 3, ... |
| Firefly | Adobe | Firefly 2, Firefly 3, ... |
| Flux | Black Forest Labs | Flux.1 Dev, Flux.1 Schnell, Flux Pro, ... |
| GPT image generation | OpenAI | GPT-4o, gpt-image-1, ... |
| Grok Imagine | xAI | Imagine, Imagine Pro... |
| Higgsfield | Higgsfield AI | Higgsfield Soul... |
| Ideogram | Ideogram | Ideogram 2.0, Ideogram 3.0, ... |
| Imagen | Google | Imagen 2, Imagen 3, ... |
| Kling | Kuaishou | Kling 2.0, Kling 3.0, ... |
| Midjourney | Midjourney | Midjourney v5, v6, v7, ... |
| Nano Banana | Google | Nano Banana 2, Nano Banana Pro, ... |
| Qwen | Alibaba | Qwen2-VL, ... |
| Recraft | Recraft | Recraft V3, ... |
| Reve | Reve | Reve Image 1.0, ... |
| Seedream | ByteDance | Seedream 2.0, Seedream 3.0, ... |
| Stable Diffusion | Stability AI | SD 1.5, SD 2.1, SDXL, SD3, ... |
| StyleGAN | NVIDIA | StyleGAN2, StyleGAN3, ... |
| Z-image | Alibaba | Z-image, Z-image Turbo, ... |
| Other generators | Various | Generators with a smaller audience |
New generators are added continuously as they appear in the wild.
Use the model
If you haven't already, create an account to get your own API keys.
Detect if a video was AI-generated
Option 1: Short video
Here's how to analyze a short video (less than 1 minute):
curl -X POST 'https://api.sightengine.com/1.0/video/check-sync.json' \
-F 'media=@/path/to/video.mp4' \
-F 'models=genai' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
# specify the models you want to apply
'models': 'genai',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/video.mp4', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/video/check-sync.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/video.mp4'),
// specify the models you want to apply
'models' => 'genai',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check-sync.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'genai');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/video/check-sync.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
| Parameter | Type | Description |
| --- | --- | --- |
| media | file | video to analyze |
| models | string | comma-separated list of models to apply |
| interval | float | frame interval in seconds, one of 0.5, 1, 2, 3, 4 or 5 (optional) |
| api_user | string | your API user id |
| api_secret | string | your API secret |
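Once the synchronous call returns, it is worth checking the status field before reading any scores. Here is a minimal sketch of such a guard; the helper name is ours and the exact error payload shape is an assumption, so consult the API's error documentation for the authoritative format:

```python
def check_response(output: dict) -> dict:
    """Return the parsed response if the API reported success, else raise.

    The shape of the error payload below is an illustrative assumption.
    """
    if output.get("status") != "success":
        raise RuntimeError(f"Sightengine API error: {output.get('error', output)}")
    return output

# usage with the Python example above:
# output = check_response(json.loads(r.text))
```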
Option 2: Long video
Here's how to analyze a long video. Note that if the video file is very large, you may first need to upload it through the Upload API.
curl -X POST 'https://api.sightengine.com/1.0/video/check.json' \
-F 'media=@/path/to/video.mp4' \
-F 'models=genai' \
-F 'callback_url=https://yourcallback/path' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
# specify the models you want to apply
'models': 'genai',
# specify where you want to receive result callbacks
'callback_url': 'https://yourcallback/path',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/video.mp4', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/video/check.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/video.mp4'),
// specify the models you want to apply
'models' => 'genai',
// specify where you want to receive result callbacks
'callback_url' => 'https://yourcallback/path',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'genai');
// specify where you want to receive result callbacks
data.append('callback_url', 'https://yourcallback/path');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/video/check.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
| Parameter | Type | Description |
| --- | --- | --- |
| media | file | video to analyze |
| callback_url | string | callback URL to receive moderation updates (optional) |
| models | string | comma-separated list of models to apply |
| interval | float | frame interval in seconds, one of 0.5, 1, 2, 3, 4 or 5 (optional) |
| api_user | string | your API user id |
| api_secret | string | your API secret |
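For asynchronous calls, results arrive as HTTP POST requests to your callback_url. Below is a minimal receiver sketch using Python's standard library; it assumes the callback payload carries the same data.frames structure as the synchronous response, which is an assumption to verify against the callback documentation:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def extract_scores(payload: dict) -> list:
    """Return (position, ai_generated score) pairs from a callback payload."""
    frames = payload.get("data", {}).get("frames", [])
    return [(f["info"]["position"], f["type"]["ai_generated"]) for f in frames]


class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read and parse the JSON body sent by the moderation service
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        for position, score in extract_scores(payload):
            print(f"frame at {position}s: ai_generated={score}")
        self.send_response(200)
        self.end_headers()


# to run the receiver on port 8080:
# HTTPServer(("", 8080), CallbackHandler).serve_forever()
```

Your callback endpoint must be publicly reachable over HTTPS in production; the plain HTTP server above is only for local experimentation.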
Option 3: Live-stream
Here's how to analyze a live-stream:
curl -X GET -G 'https://api.sightengine.com/1.0/video/check.json' \
--data-urlencode 'stream_url=https://domain.tld/path/video.m3u8' \
-d 'models=genai' \
-d 'callback_url=https://your.callback.url/path' \
-d 'api_user={api_user}' \
-d 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
'stream_url': 'https://domain.tld/path/video.m3u8',
# specify the models you want to apply
'models': 'genai',
# specify where you want to receive result callbacks
'callback_url': 'https://your.callback.url/path',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/video/check.json', params=params)
output = json.loads(r.text)
$params = array(
'stream_url' => 'https://domain.tld/path/video.m3u8',
// specify the models you want to apply
'models' => 'genai',
// specify where you want to receive result callbacks
'callback_url' => 'https://your.callback.url/path',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL with a GET request
$ch = curl_init('https://api.sightengine.com/1.0/video/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios with a GET request
const axios = require('axios');
axios.get('https://api.sightengine.com/1.0/video/check.json', {
params: {
'stream_url': 'https://domain.tld/path/video.m3u8',
// specify the models you want to apply
'models': 'genai',
// specify where you want to receive result callbacks
'callback_url': 'https://your.callback.url/path',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
| Parameter | Type | Description |
| --- | --- | --- |
| stream_url | string | URL of the video stream |
| callback_url | string | callback URL to receive moderation updates (optional) |
| models | string | comma-separated list of models to apply |
| interval | float | frame interval in seconds, one of 0.5, 1, 2, 3, 4 or 5 (optional) |
| api_user | string | your API user id |
| api_secret | string | your API secret |
Moderation result
The moderation result is provided either directly in the request response (for sync calls, see below) or through the callback URL you provided (for async calls).
Here is the structure of the JSON response with moderation results for each analyzed frame under the data.frames array:
{
  "status": "success",
  "request": {
    "id": "req_gmgHNy8oP6nvXYaJVLq9n",
    "timestamp": 1717159864.348989,
    "operations": 40
  },
  "data": {
    "frames": [
      {
        "info": {
          "id": "med_gmgHcUOwe41rWmqwPhVNU_1",
          "position": 0
        },
        "type": {
          "ai_generated": 0.99
        }
      },
      ...
    ]
  },
  "media": {
    "id": "med_gmgHcUOwe41rWmqwPhVNU",
    "uri": "yourfile.mp4"
  }
}
You can use the classes under the type object to detect AI-generated parts in the video.
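A short sketch of extracting flagged frames from that response; the helper name and the 0.5 default threshold are ours, not part of the API:

```python
def ai_frames(output: dict, threshold: float = 0.5) -> list:
    """Return (position, score) for frames whose ai_generated score meets the threshold.

    The 0.5 default is illustrative; tune it to your precision/recall needs.
    """
    return [
        (frame["info"]["position"], frame["type"]["ai_generated"])
        for frame in output.get("data", {}).get("frames", [])
        if frame["type"]["ai_generated"] >= threshold
    ]
```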
Frequently asked questions
Which AI video generators are supported?
The model targets all current video generators, including Sora, Veo, Runway, Pika, Kling, Higgsfield, Seedance, Wan and Midjourney Video, along with smaller and emerging generators such as Hunyuan, LaVie, Ray and HotShot. The model is updated on an ongoing basis as new generators become available. See Supported AI generators.
How does detection work without metadata or a watermark?
Detection is purely pixel-based. Metadata and invisible watermarks are ignored, so stripping them has no effect on the result.
What does the ai_generated score mean?
It is the model's confidence, from 0 to 1, that the analyzed frame was produced by a generative AI video model. Higher means more likely AI-generated. Scores above 0.5 typically indicate an AI-generated frame; tune the threshold to your precision/recall preference.
How is the analysis performed across a video?
The API samples frames at a configurable interval and returns a score for each sampled frame. You can aggregate these scores (for example by taking the maximum or the average) to derive a single video-level decision, or inspect individual frames to locate AI-generated segments within a longer clip.
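The aggregation step can be sketched as follows; the threshold and the choice of max versus mean are application-side decisions, not API behavior:

```python
def video_verdict(frame_scores, threshold=0.5, agg="max"):
    """Collapse per-frame ai_generated scores into one video-level boolean.

    agg="max" flags the video if any sampled frame looks AI-generated;
    agg="mean" requires the video to look AI-generated on average.
    """
    if not frame_scores:
        raise ValueError("no frame scores to aggregate")
    score = max(frame_scores) if agg == "max" else sum(frame_scores) / len(frame_scores)
    return score >= threshold
```

Taking the maximum is the stricter policy: one convincing synthetic frame is enough to flag the whole clip, which suits fraud-prevention use cases, while the mean is more tolerant of isolated false positives.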
Will real videos with editing, filters or VFX be flagged?
No. Standard post-production such as color grading, stabilization, transitions, overlays or speed changes is treated as original footage. The model targets fully synthetic generated content, not edited real footage.
Does it work on compressed, re-encoded or socially-shared videos?
Yes. Detection is robust to re-encoding, resizing, frame-rate changes and standard social-platform recompression. Confidence may drop somewhat on heavily degraded clips, but the model is specifically developed to handle real-world redistribution artifacts.
Can it tell which generator produced the video?
Yes. Per-generator confidence scores are returned alongside the overall ai_generated score, giving you a fingerprint of the suspected source. For access to finer-grained analysis, contact us.
Is image detection also available?
Yes. A dedicated model targets AI-generated still images. See AI-Generated Image Detection.
Can I call this model together with other Sightengine models?
Yes. Pass a comma-separated list in the models parameter: models=genai,nudity-2.1 and the API will return all results in a single response. This is the recommended pattern for production pipelines.
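A sketch of assembling such a multi-model request; the build_params helper is ours, while the models, api_user and api_secret parameters come from the API described above:

```python
def build_params(models, api_user, api_secret, **extra):
    """Assemble request parameters, joining model names into the
    comma-separated string the models parameter expects."""
    params = {"models": ",".join(models), "api_user": api_user, "api_secret": api_secret}
    params.update(extra)
    return params

# usage with the requests library, as in the short-video example:
# r = requests.post('https://api.sightengine.com/1.0/video/check-sync.json',
#                   files={'media': open('/path/to/video.mp4', 'rb')},
#                   data=build_params(['genai', 'nudity-2.1'], '{api_user}', '{api_secret}'))
```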