
Scammer Detection


Overview

The Scammer Detection Model helps detect and block common frauds on social networks and dating sites. These scammers include romance scammers, military scammers and other fraudsters who trick people into sending them money.

Over the years we have built a reference database of tens of thousands of profiles and images from all over the world that are known to be used by scammers.

Example: profile frequently used by scammers

Principles

Scammer Detection works by first finding relevant faces within an image or video frame, and then comparing their facial characteristics to our list of known profiles used by scammers.

The face recognition we apply to this end is robust and able to recognize scammer profiles in previously unseen images. The API works for color images as well as black-and-white, and handles different angles and lighting conditions.

Scammer probability

The returned value is between 0 and 1. Images with a scam probability close to 1 are likely to contain a scammer, while images with a scam probability close to 0 are not.

Use-cases

  • Require that users upload real photos only, rather than fake images
  • Detect malicious users

Recommended thresholds

When processing the "scam probability" value returned by the API, users generally set a threshold. Images or videos with a value above this threshold are flagged as potentially containing a scammer, while images or videos with a value below it are considered safe.

Thresholds need to be fine-tuned for each individual use-case: depending on your tolerance for false positives and false negatives, the threshold should be adapted.

  • If you want to reduce false negatives, you may want to start with a threshold of 0.2 (meaning that images with a "scam probability" value above 0.2 would be flagged)
  • If you want to reduce false positives, you may want to start with a threshold of 0.5

Profile known to be used by scammers (scam probability: 0.9999)
Profile not known to be used by scammers (scam probability: 0.0137)
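
As an illustration, here is a minimal sketch of how such a threshold could be applied to the "scam" probability returned by the API. The threshold value and the flag/approve actions are placeholders to adapt to your own moderation workflow.


# sketch: applying a threshold to the "scam" probability returned by the API
# 0.5 favors fewer false positives; 0.2 favors fewer false negatives
SCAM_THRESHOLD = 0.5

def should_flag(api_response, threshold=SCAM_THRESHOLD):
    # True when the image should be flagged as potentially containing a scammer
    return api_response.get('scam', {}).get('prob', 0.0) >= threshold

# example values similar to the profiles shown above
print(should_flag({'scam': {'prob': 0.9999}}))  # True: flag for review
print(should_flag({'scam': {'prob': 0.0137}}))  # False: considered safe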

Use the model

If you haven't already, create an account to get your own API keys.

Detect a scammer

Let's say you want to moderate the following image:

You can either share a public URL to the image, or upload the raw binary image. Here's how to proceed if you choose to share the image's public URL:


curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
    -d 'models=scam' \
    -d 'api_user={api_user}&api_secret={api_secret}' \
    --data-urlencode 'url=https://sightengine.com/assets/img/examples/example-scam1-1000.jpg'


# this example uses requests
import requests
import json

params = {
  'url': 'https://sightengine.com/assets/img/examples/example-scam1-1000.jpg',
  'models': 'scam',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/check.json', params=params)

output = json.loads(r.text)


$params = array(
  'url' =>  'https://sightengine.com/assets/img/examples/example-scam1-1000.jpg',
  'models' => 'scam',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

$output = json_decode($response, true);


// this example uses axios
const axios = require('axios');

axios.get('https://api.sightengine.com/1.0/check.json', {
  params: {
    'url': 'https://sightengine.com/assets/img/examples/example-scam1-1000.jpg',
    'models': 'scam',
    'api_user': '{api_user}',
    'api_secret': '{api_secret}',
  }
})
.then(function (response) {
  // on success: handle response
  console.log(response.data);
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});
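
If you prefer to upload the raw binary image instead of sharing a URL, here is a minimal sketch in Python with requests. It assumes the image is sent as a multipart file field named media and that a local file exists at /path/to/image.jpg; check the API reference for the exact upload parameters.


# this sketch uses requests to upload the raw binary image
# (assumptions: the multipart field name 'media' and the local file path)
import requests
import json

params = {
  'models': 'scam',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/image.jpg', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/check.json', files=files, data=params)

output = json.loads(r.text)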

The API will then return a JSON response:

{
    "status": "success",
    "request": {
        "id": "req_0RNttFqUKWhGSAlcENj3Z",
        "timestamp": 1495636774.8524,
        "operations": 1
    },
    "scam": {
        "prob": 0.9895
    },
    "faces": [
        {
            "x1": 0.4186,
            "y1": 0.2417,
            "x2": 0.6152,
            "y2": 0.6698,
            "features": {
                "left_eye": {
                    "x": 0.5326,
                    "y": 0.3969
                },
                "right_eye": {
                    "x": 0.4512,
                    "y": 0.4219
                },
                "nose_tip": {
                    "x": 0.4805,
                    "y": 0.525
                },
                "left_mouth_corner": {
                    "x": 0.5501,
                    "y": 0.5573
                },
                "right_mouth_corner": {
                    "x": 0.4811,
                    "y": 0.5813
                }
            }
        }
    ],
    "media": {
        "id": "med_0RNtVu0azEaBwPgZI0fur",
        "uri": "https://sightengine.com/assets/img/examples/example-scam1-1000.jpg"
    }
}
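
To interpret this response in code, here is a minimal sketch using a trimmed copy of the sample above. The face coordinates appear to be normalized between 0 and 1 relative to the image dimensions (an assumption based on the sample values), so multiply them by the image width and height to obtain pixel positions.

# sketch: reading the scam probability and face boxes from the decoded response
# (trimmed copy of the sample response above)
output = {
    "status": "success",
    "scam": {"prob": 0.9895},
    "faces": [{"x1": 0.4186, "y1": 0.2417, "x2": 0.6152, "y2": 0.6698}],
}

if output["status"] == "success":
    print("Scam probability:", output["scam"]["prob"])
    for face in output.get("faces", []):
        print("Face box:", face["x1"], face["y1"], face["x2"], face["y2"])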

Any other needs?

See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...
