Offensive Content Detection

Overview

The offensive model determines whether an image or a video contains offensive or hateful content. In addition to detecting such content, the model returns the position and type of each offensive element found.

The offensive content detected falls broadly into the following categories:

  • Nazi era symbols
  • Supremacist symbols
  • Offensive gestures
  • Other symbols

See the Concepts section below for the full list of detected concepts.

Example detections: Confederate flag (offensive flag), middle finger (offensive gesture).

Concepts

This section lists the concepts detected by the Offensive Detection Model.

Nazi-era symbols

Nazi swastika

Nazi symbol that can be found on objects from the Nazi era, as well as on costumes, drawings, cartoons, and tattoos.

swastika

Nazi flags

Flags from the Nazi era, such as the Nazi party flag, the Reichskriegsflagge, and the Reichsdienstflagge.

swastika

Iron cross

Military decoration of Prussia, later used in Nazi Germany. Present on Nazi flags, costumes, medals, and other objects.

ironcross

SS bolts

Schutzstaffel sig runes. They symbolised victory and are present on Nazi-era objects such as helmets, medals, and costumes.

ssbolts

Sonnenrad

Sonnenrad or sunwheel. Ancient European symbol appropriated by the Nazis.

sonnenrad

Sturmabteilung

Emblem of Hitler's SA troops. Some neo-Nazis and white supremacists still use this symbol.

sturmabteilung

Supremacist symbols

Confederate flag

Flag of the Confederate States of America

confederate

KKK

Ku Klux Klan

kkk

Burning cross

Cross burning is a practice associated with the Ku Klux Klan.

burningcross

Blood drop cross

Primary symbol associated with Ku Klux Klan groups.

blooddropcross

Celtic cross

There are many types of Celtic crosses, with different meanings. The variant detected by this model is one of the most commonly used white supremacist symbols.

celticcross

Valknut

Valknut or valknot. Old Norse symbol appropriated by some white supremacists.

valknut

Gestures

Middle finger

Obscene hand gesture in Western culture

middlefinger

Other symbols

ISIS flag

IS / ISIL / ISIS / Daesh version of the Black Standard

isis

Principles

Offensive Detection does not use any image metadata to determine the presence of offensive content in an image. The file extension, metadata, and file name do not influence the result. Classification is based solely on the pixel content of the image or video.

On most sites and apps, images containing offensive content are systematically removed.

Offensive Detection works with black-and-white images as well as color images and images with filters.

Use-cases

  • Block or detect users who submit images or videos containing offensive content
  • Hide, blur, or filter hateful symbols and references in images and videos
  • Protect your users from unwanted content

Limitations

  • Elements smaller than 5% of the width or height of the image may not be detected.

Recommended threshold

When processing the "offensive" value returned by the API, users generally set a threshold. Images or videos with a value above this threshold are flagged as potentially containing offensive content, while those with a value below are considered safe.

Thresholds need to be fine-tuned for each individual use-case. Depending on your tolerance for false positives or false negatives, the threshold should be adapted.

If you want to reduce false negatives, you may want to start with a threshold of 0.5 (meaning that images with an "offensive" value above 0.5 would be flagged).

If you want to reduce false positives, you may want to start with a threshold of 0.8.

If there are concepts you want to allow, or if you want to set different thresholds for different concepts, you can do that too, as the API returns a score for each element found; see the sketch below.
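
As an illustration, here is a minimal Python sketch of per-concept thresholding. It assumes the response format shown further down this page; the threshold values and the is_offensive helper are illustrative, not part of the API.

# a minimal per-concept thresholding sketch; values are illustrative
DEFAULT_THRESHOLD = 0.8

# hypothetical per-concept overrides: allow some concepts entirely,
# be stricter on others
CONCEPT_THRESHOLDS = {
    'middlefinger': 1.01,  # never flag this concept
    'isis': 0.5,           # flag this concept more aggressively
}

def is_offensive(output):
    # output is the parsed JSON response from the API
    for box in output.get('offensive', {}).get('boxes', []):
        threshold = CONCEPT_THRESHOLDS.get(box['label'], DEFAULT_THRESHOLD)
        if box['prob'] >= threshold:
            return True
    return False

If you do not need per-concept control, you can instead compare the overall "offensive" probability against a single threshold.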

Details

Along with the overall offensive probability, the model returns a list of all the offensive elements found in the image (if any), together with their position and type.

Use the model

If you haven't already, create an account to get your own API keys.

Detect offensive content in an image

Let's say you want to moderate the following image: https://sightengine.com/assets/img/doc/offensive/offensive1.jpg

You can send the image by pointing to a public URL or uploading the byte content of the image.


curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
    -d 'models=offensive' \
    -d 'api_user={api_user}&api_secret={api_secret}' \
    --data-urlencode 'url=https://sightengine.com/assets/img/doc/offensive/offensive1.jpg'


# this example uses requests
import requests
import json

params = {
  'url': 'https://sightengine.com/assets/img/doc/offensive/offensive1.jpg',
  'models': 'offensive',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/check.json', params=params)

output = json.loads(r.text)


$params = array(
  'url' =>  'https://sightengine.com/assets/img/doc/offensive/offensive1.jpg',
  'models' => 'offensive',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

$output = json_decode($response, true);


// this example uses axios
const axios = require('axios');

axios.get('https://api.sightengine.com/1.0/check.json', {
  params: {
    'url': 'https://sightengine.com/assets/img/doc/offensive/offensive1.jpg',
    'models': 'offensive',
    'api_user': '{api_user}',
    'api_secret': '{api_secret}',
  }
})
.then(function (response) {
  // on success: handle response
  console.log(response.data);
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});
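
The examples above pass a public URL. If you prefer to upload the raw image bytes instead, a minimal Python sketch could look like the following. It assumes the endpoint accepts a multipart POST with the file in a media field, and /path/to/image.jpg stands for a local file of your choosing.

# a sketch of the byte-upload variant, using requests
import requests
import json

params = {
  'models': 'offensive',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
# open the local file in binary mode and send it as multipart form data
files = {'media': open('/path/to/image.jpg', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/check.json', files=files, data=params)

output = json.loads(r.text)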

The API will then return a JSON response:

{
    "status": "success",
    "request": {
        "id": "req_24DNGegGf1Mo0n4rpaRwZ",
        "timestamp": 1512898748.652,
        "operations": 1
    },
    "offensive": {
        "prob": 0.8744,
        "boxes": [
            {
                "x1": 0.0,
                "y1": 0.021,
                "x2": 0.847,
                "y2": 0.907,
                "label": "confederate",
                "prob": 0.5872
            },
            {
                "x1": 0.583,
                "y1": 0.361,
                "x2": 0.631,
                "y2": 0.422,
                "label": "confederate",
                "prob": 0.6496
            },
            {
                "x1": 0.686,
                "y1": 0.8,
                "x2": 0.772,
                "y2": 0.922,
                "label": "blooddropcross",
                "prob": 0.8744
            }
        ]
    },
    "media": {
        "id": "med_24DNJfN2BlCGPGQBoZ5dO",
        "uri": "https://sightengine.com/assets/img/doc/offensive/offensive1.jpg"
    }
}
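
The x1/y1/x2/y2 values in the sample response fall between 0 and 1, which suggests they are relative to the image width and height. Under that assumption, here is a minimal Python sketch that blurs each detected region, using the Pillow library (not used in the examples above); blur_boxes is an illustrative helper, not part of the API.

# a sketch that blurs detected regions, assuming relative box coordinates
from PIL import Image, ImageFilter

def blur_boxes(image_path, boxes, output_path):
    img = Image.open(image_path)
    w, h = img.size
    for box in boxes:
        # convert relative coordinates to pixel coordinates
        region = (int(box['x1'] * w), int(box['y1'] * h),
                  int(box['x2'] * w), int(box['y2'] * h))
        # blur the cropped region and paste it back in place
        blurred = img.crop(region).filter(ImageFilter.GaussianBlur(radius=12))
        img.paste(blurred, region)
    img.save(output_path)

# usage with the parsed response from the examples above:
# blur_boxes('offensive1.jpg', output['offensive']['boxes'], 'blurred.jpg')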

Any other needs?

See our full list of models for details on other filters and checks you can run on your images and videos.
